doi
stringlengths
0
570
pub_date
stringclasses
355 values
sections
listlengths
1
245
abstract
stringlengths
0
5.25k
title
stringlengths
0
228
figures
listlengths
0
130
authors
stringlengths
0
11.9k
references
listlengths
0
835
formulas
listlengths
0
679
10.18653/v1/2022.acl-long.26
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b25", "b44", "b50", "b3", "b34", "b52", "b54", "b0", "b4", "b16", "b5", "b2", "b55", "b46", "b12", "b52", "b22", "b15", "b34", "b24", "b25", "b3", "b42", "b9", "b40", "b8", "b21", "b7", "b31", "b21", "b28", "b40", "b8" ], "table_ref": [], "text": "Large Language Models (LLMs) (Raffel et al., 2020;Xue et al., 2021;Zhang et al., 2022;Brown et al., 2020;Touvron et al., 2023;Scao et al., 2022;Zhao et al., 2023;Zhou et al., 2023) have achieved remarkable progress in recent years, especially with the release of ChatGPT 1 , which is widely acknowledged to revolutionize the world of natural language processing and to transform AI and society (Altman, 2023;Bubeck et al., 2023;Huang et al., 2023;Cao et al., 2023). Generally, LLMs are trained via self-supervised learning (Balestriero et al., 2023) on a huge amount of unlabeled data (Zhu et al., 2015;Liu et al., 2019b;Zellers et al., 2019;Gokaslan et al., 2019), which cover a wide range of genres, e.g., encyclopedias, news, books, social medias, etc. Many studies have demonstrated that LLMs are able to acquire broad knowledge of many types and subjects (Zhao et al., 2023;Paperno et al., 2016;Hoffmann et al., 2022;Touvron et al., 2023;Rae et al., 2021;Raffel et al., 2020;Du et al., 2022a).\nThe paradigms that elicit and apply the acquired knowledge in LLMs onto downstream tasks have shifted from fine-tuning to instruction-tuning. Early LLMs usually adopt fine-tuning, which, however, suffers from lack of cross-task generalization as the fine-tuned LLMs are often task-specific and not being parameter-efficient as all pre-trained LLM parameters are usually required to be updated on downstream tasks. As LLMs reach the scale of billions of parameters, a more efficient alternative to elicit knowledge, in-context Learning (ICL) (Brown et al., 2020;Xie et al., 2022;Dong et al., 2023) has emerged, which uses only a few demonstration examples concatenated in a prompt. In order to enhance the cross-task generalization of LLMs to a variety of downstream tasks, instructiontuning (Wei et al., 2022;Bach et al., 2022;Wang et al., 2022b), which is performed via multi-task learning (Chung et al., 2022;Liu et al., 2019a) has been proposed. In instruction-tuning, the instructions for different tasks are different, but in a unified form. Supervised Fine-tuning (SFT) (Ouyang et al., 2022) and Reinforcement Learning from Human Feedback (RLHF) (Christiano et al., 2017;Stiennon et al., 2020;Ouyang et al., 2022) are successful methods of instruction-tuning, which not only achieve generalization to unseen instructions but also align LLMs with human values and intents (Sanh et al., 2022;Wei et al., 2022;Chung et al " }, { "figure_ref": [], "heading": "2022).", "publication_ref": [ "b26", "b17", "b36", "b35", "b30", "b14", "b49", "b51", "b32", "b47", "b27", "b41", "b37", "b6", "b48", "b53" ], "table_ref": [ "tab_1" ], "text": "As the capability of knowledge acquisition and application in LLMs is constantly and rapidly evolving, a natural question which arises, is how we can assess such knowledge. Traditional singletask evaluation benchmarks (Rajpurkar et al., 2016;Khot et al., 2020) are no longer adequate for evaluating them. Multi-task benchmarks like GLUE (Wang et al., 2018), SuperGLUE (Wang et al., 2019) and BIG-bench (Srivastava et al., 2022) aggregate multiple NLP tasks to evaluate LLMs, which, however, are not sufficient either to assess knowledge acquired by LLMs. To address this issue, Hendrycks et al. (2021) propose MMLU, a widely used benchmark to test the knowledge acquisition and application capability of LLMs, which uses test questions across multiple subjects that humans lean to assess LLMs in zero-and fewshot settings. As MMLU is an English benchmark, it cannot be directly used for measuring LLMs trained with data in other languages. Even if it is translated into other languages, like the way used in evaluating GPT-4 (OpenAI, 2023), there are still gaps in knowledge across different languages as they usually have different education systems and knowledge structures.\nSimilar to LLMs in English, LLMs dedicated in Chinese have also achieved rapid advances recently (Du et al., 2022b;Zeng et al., 2021;Zhang et al., 2021;Sun et al., 2021;Zeng et al., 2022;Ren et al., 2023;Wu et al., 2021;Wang et al., 2021;Chen et al., 2023). However, a massive knowledge evaluation benchmark that measures Chinese LLMs in line with Chinese education system is a desideratum. To bridge this gap, we propose M3KE, a Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark, which is designed to measure the knowledge acquired by Chinese LLMs by testing their multitask accuracy in zero-and fewshot settings. M3KE contains 20,477 questions collected from 71 tasks. In particular, unlike recent benchmarks MMCU (Zeng, 2023) and AGIEval (Zhong et al., 2023), M3KE covers all major levels of Chinese education system, ranging from pri-mary school to college, as well as a wide variety of subjects, including humanities, history, politics, law, education, psychology, science, technology, art and religion. All questions are multiple-choice questions with four options, hence ensuring a standardized and unified assessment process. Table 1 shows the comparison between M3KE and other related benchmarks.\nWith M3KE, we have tested recently released Chinese LLMs , to track the progress of Chinese LLMs in knowledge acquisition and application. The evaluated models are either pre-trained on massive data or pre-trained + fine-tuned with SFT or RLHF. The model sizes vary from 335M to 130B parameters.\nWith extensive experiments, we observe that most evaluated Chinese LLMs have near randomchance accuracy, even for primary school tasks. The best performance is achieved by an SFT model built on the open-source BLOOM (Scao et al., 2022), which is 14.8 points lower than the accuracy of GPT-3.5-turbo.\nOur main contributions are summarized as follows.\n• We propose M3KE, a knowledge evaluation benchmark for Chinese LLMs, which to date covers the largest number of tasks in line with Chinese education system.\n• We have tested a wide range of open-source Chinese LLMs, with model sizes varying from 335M to 130B, against GPT-3.5-turbo.\n• We have analyzed the performance of each model on different subject clusters and education levels in both zero-and five-shot settings." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b24", "b34", "b49", "b41", "b32", "b33", "b23", "b6", "b36", "b35", "b30", "b43", "b14", "b48", "b53", "b13" ], "table_ref": [], "text": "Chinese Large Language Models. Recent years have witnessed a rapid development of Chinese LLMs, following the efforts of their English counterparts, e.g., GPT-3 (Brown et al., 2020), Gopher (Rae et al., 2021), LLaMA (Touvron et al., 2023). Chinese LLMs, such as Pangu-α with 200B parameters (Zeng et al., 2021), Yuan 1.0 with 245B parameters (Wu et al., 2021), ERNIE 3.0 Titan with 260B parameters (Sun et al., 2021), have been trained on Chinese textual data that contain tokens ranging from 180B to 329B. These models are developed in industry, which are usually not opensource. With the success of open-source LLMs (Taori et al., 2023;Peng et al., 2023) based on LLaMA, Chinese versions, such as ChatGLM-6B 2 , MOSS 3 , Phoenix (Chen et al., 2023), have emerged very recently. These models usually contain less than 20 billion parameters and are supervised finetuned on instructions that are either distilled from models of GPT-3.5 or learned in a self-instructing manner (Wang et al., 2022a).\nBenchmarks. The capability of eliciting and applying knowledge acquired during training is an important indicator for measuring LLMs. However, existing evaluation benchmarks (Wang et al., 2018(Wang et al., , 2019;;Srivastava et al., 2022;Xu et al., 2020) are normally designed to evaluate LLMs on various NLP tasks, not tailored for knowledge acquisition and application assessment. To comprehensively measure knowledge in LLMs, MMLU (Hendrycks et al., 2021) is proposed, which collects multiplechoice questions from 57 tasks that humans learn. As a different education system is used, on the one side, knowledge in Chinese LLMs may not exhibit in the translated-into-Chinese version of MMLU, e.g., Chinese Medicine, Chinese Legal System. On the other side, knowledge to be assessed in MMLU may be absent in Chinese textual data used to train Chinese LLMs.\nOur work is related to 3 datasets that have been developed concurrently with M3KE. MMCU (Zeng, 2023) is a Chinese benchmark that assesses knowledge in four domains: medicine, education, law, and psychology. AGIEval (Zhong et al., 2023) is a bilingual benchmark that measures the capability of LLMs on tasks of the Chinese college entrance exam and American college admission test, for high-school graduates. DomMa (Gu et al., 2023) is another Chinese benchmark that focuses on domain-specific knowledge. In contrast to these benchmarks, M3KE is a comprehensive Chinese benchmark that spans major stages of Chinese education system, from primary school to college with a broader range of subject categories, such as art, religion, traditional Chinese medicine, and classical literature." }, { "figure_ref": [ "fig_0" ], "heading": "M3KE", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "M3KE covers major Chinese education levels, including primary school, middle school, high school, college and professional exams, as well as multiple tasks as shown in Figure 1 while the detailed subjects are listed in Appendix A. We collect and organize multiple-choice questions from public websites. To ensure the quality and comprehensiveness of the questions, entrance exam questions are selected as much as possible. For the primary school, middle school and high school education level, we choose the subjects according to the corresponding entrance exams for Chinese students. For the college level, we select subjects according to the national entrance exam for master's degree in China.\nIn addition to subjects under the major Chinese education system, we also collect comprehensive tasks to expand the knowledge coverage in M3KE, including computer grade exam, ancient Chinese language, novels and Chinese national civil service exam which covers commonsense knowledge, arts, religion, etc.\nIn total, we have 71 tasks and 20,477 questions. We divide each task into a test set and a few-shot set, where the few-shot set includes 5 questions for each task for the few-shot evaluation setting. The test set includes 20,122 questions, and each task contains at least 100 questions. Instances of M3KE are listed in Table 2." }, { "figure_ref": [], "heading": "Arts & Humanities", "publication_ref": [], "table_ref": [], "text": "Arts & Humanities comprise a range of disciplines that cover Chinese, literature, arts and history. These disciplines focus on the analysis and interpretation of literary and cultural artifacts, rather than on practical applications. For instance, the Chinese in primary school aims to evaluate the students' proficiency in language use and literary apprecia-" }, { "figure_ref": [], "heading": "Arts & Humanities", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "下面关于拉斯科洞穴壁画说法错误的是?", "publication_ref": [], "table_ref": [], "text": "Which statement about the Lascaux cave murals is incorrect?\nA 这个壁画是在法国发现的 This fresco was found in France B 发现的动物形象有100多个\nThere are more than 100 animal images found C 发现的时间为1940年\nThe tion for ages 7 to 13, such as the usage of synonyms and antonyms. The historical studies cover both Chinese and world history from ancient to modern times. M3KE also incorporates artistic subjects, such as dance, fine arts, music and film, because we believe that art is an essential aspect of human culture and should be relevant to LLMs as well." }, { "figure_ref": [], "heading": "Social Sciences", "publication_ref": [], "table_ref": [], "text": "Social sciences differ from Arts & Humanities in that they emphasize practical aspects of humanistic studies, such as law, politics, education and psychology. These subjects are mainly taught at the college level. Although ideological and political courses are also part of the Chinese middle school and high school curriculum, they primarily involve moral education. Social sciences also encompass economic and management studies, which largely consist of questions from the joint exams for graduate students majoring in these fields in China. These studies include microeconomics, macroeconomics, management and logic at the undergraduate level." }, { "figure_ref": [], "heading": "Natural Sciences", "publication_ref": [], "table_ref": [], "text": "Natural sciences encompass engineering, science, medicine and fundamental disciplines such as math, physics, chemistry, biology and so on. These subjects often require a high degree of computation, analysis and logical reasoning skills. The same subject may assess different types of knowledge at different levels according to the Chinese education system. For instance, primary school math mainly tests the basic arithmetic operations, while high school math covers more advanced mathematical concepts, such as sequences, derivatives and geometry." }, { "figure_ref": [], "heading": "Other", "publication_ref": [], "table_ref": [], "text": "Other types of tasks include religion, Chinese civil service exam, and specialized tasks, like ancient Chinese language and novel reasoning task. These tasks require knowledge that is not limited to a single level or subject as described above. The Chinese civil service exam involves knowledge in commonsense, humanities, logic and other domains, which we can consider as an assessment of the comprehensive knowledge for LLMs. Similarly, in the novel task, these questions involve a lot of information from many classical novels." }, { "figure_ref": [], "heading": "Overall Statistics", "publication_ref": [], "table_ref": [], "text": "Table 3 shows the overall statistics of M3KE. The numbers of tasks in the four subject clusters described above are 12, 21, 31 and 7, respectively, while the numbers of questions in the four subject clusters are 3,612, 6,222, 8,162 and 2,126, respectively. The maximum number of questions is 425 while the minimum number is 100. Questions in social and natural sciences are usually longer than those in arts & humanities and other while their answer choices are shorter." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We assessed state-of-the-art large language models recently developed for Chinese on M3KE, attempting to understand and track the progress of Chinese LLMs in learning and applying knowledge from massive data." }, { "figure_ref": [], "heading": "Assessed Models", "publication_ref": [ "b47", "b20" ], "table_ref": [], "text": "The assessed Chinese LLMs can be divided into two categories: models being only pre-trained and models that are instruction-tuned with SFT/RLHF. For the former, we selected GLM-335M (Du et al., 2022b), GLM-10B (Du et al., 2022b), GLM-130B (Zeng et al., 2022) and BLOOM-7.1B (Scao et al., 2022). For the latter, we included ChatGLM-6B4 , MOSS-SFT-16B 2023), where BELLE-7B is the SFT version based on BLOOMZ-7.1B-MT (Muennighoff et al., 2022). We used the two variants of BELLE fine-tuned on 200K and 2M instructions, namely BELLE-7B-0.2M6 and BELLE-7B-2M7 . We also evaluated GPT-3.5-turbo8 from OpenAI as a reference." }, { "figure_ref": [], "heading": "Prompts", "publication_ref": [], "table_ref": [], "text": "All models were tested using the n-shot setting with a unified prompt, where n is an integer from 0 to 5. For the zero-shot setting (i.e., n = 0), the unified prompt provided to all models is \"Please choose the correct option from 'A', 'B', 'C', 'D' based on the following question\". For few-shot setting (i.e., n > 0), the unified prompt is \"Please choose the correct option from 'A', 'B', 'C', 'D' based on the following examples and question\". The input to all LLMs consists of the prompt, question, answer choices and suffix, which is \"the correct option is: \". Even we tell models to only output the correct answer choice indicator (i.e., ∈ {A, B, C, D}) in the prompt, not all models can follow this instruction. Sometimes they output both answer choice and rationale to the answer choice (the order of these two types of outputs are random). We hence keep only the output answer choice indicator as the final answer to calculate accuracy." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b47" ], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_7" ], "text": "We compared the zero-shot accuracy of each model in Table 4 in terms of subject clusters. For the pretrained models, there is a clear positive correlation between accuracy and model size, where the model with 130B parameters significantly outperforms the models with 335M/7B/10B parameters, even though they have different backbones. The accuracy of GPT-3.5-turbo is significantly higher than those of the evaluated Chinese LLMs, which currently provides an upper bound for open-source Chinese LLMs. All pretrained LLMs with ≤ 10B parameters achieve an accuracy lower than randomchance accuracy (i.e., 25%), indicating that knowledge acquired by these models is not adequate for M3KE. In addition, we observe that the number of instructions used for SFT is an important factor, as the BELLE model fine-tuned with 2M instructions is significantly better than that with 0.2M instructions. The zero-shot performance of GPT-3. Chinese LLMs, but still lower than 50% accuracy, suggesting that M3KE is a very challenging benchmark.\nWe further compared the accuracy of different models under the 5-shot setting. Results are shown in Table 5. For pre-trained models, ICL in the few-shot setting significantly improves the performance and the smaller the pretrained model is, the larger the achieved improvement is. The exception is GLM-130B, which performs significantly worse under the 5-shot setting than the zero-shot setting. We conjecture that GLM-130B already has the ability to understand questions without examples because it uses instances in the instruction format as part of the pre-training corpus (Zeng et al., 2022), and demonstrations may bring interference to the final prediction of the model. The 5-shot results of the SFT models are mixed in comparison to those in the zero-shot setting. We find that for ChatGLM-6B and BELLE-7B-2M, 5-shot is worse than zero-shot setting, similar to the results observed on GLM-130B. In contrast, 5-shot has a positive impact on MOSS-SFT-16B and BELLE-7B-0.2M. As these models are different from each other in terms of model size, training data, instruction data, etc., we leave the in-depth analysis on the mixed results to our future work.\nWe finally provide the results of each model on different education levels in Table 6 for the zeroshot setting and Table 7 for the few-shot setting. Interestingly, we observe that LLMs do not reach higher performance at lower education levels than higher education levels, even for GPT-3.5-turbo. This suggests that tasks from lower education levels remain challenging for these state-of-the-art Chinese LLMs." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have presented a new benchmark M3KE, to assess the capability of Chinese LLMs in learning and applying knowledge in multiple subjects at multiple levels of Chinese education system. M3KE contains 71 tasks and 20,447 questions. We find that all evaluated state-of-the-art open-source Chinese LLMs significantly lag behind GPT-3.5. We hope that this benchmark can be used to track and promote further progress in Chinese LLMs. " }, { "figure_ref": [], "heading": "A All Subjects", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "See Table 8 for all 71 tasks." }, { "figure_ref": [], "heading": "Data Structures", "publication_ref": [], "table_ref": [], "text": "" } ]
Large language models have recently made tremendous progress in a variety of aspects, e.g., cross-task generalization, instruction following. Comprehensively evaluating the capability of large language models in multiple tasks is of great importance. In this paper, we propose M3KE, a Massive Multi-Level Multi-Subject Knowledge Evaluation benchmark, which is developed to measure knowledge acquired by Chinese large language models by testing their multitask accuracy in zeroand few-shot settings. We have collected 20,477 questions from 71 tasks. Our selection covers all major levels of Chinese education system, ranging from the primary school to college, as well as a wide variety of subjects, including humanities, history, politics, law, education, psychology, science, technology, art and religion. All questions are multiple-choice questions with four options, hence guaranteeing a standardized and unified assessment process. We've assessed a number of state-of-theart open-source Chinese large language models on the proposed benchmark. The size of these models varies from 335M to 130B parameters. Experiment results demonstrate that they perform significantly worse than GPT-3.5 that reaches an accuracy of ∼ 48% on M3KE.
M3KE: A Massive Multi-Level Multi-Subject Knowledge Evaluation Benchmark for Chinese Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: The distribution of tasks in M3KE.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ".,", "figure_data": "BenchmarkLanguage # Tasks # QuestionsMMLU (Hendrycks et al., 2021)En5715,908AGIEval (Zhong et al., 2023)En & Zh208,062MMCU (Zeng, 2023)Zh5111,900M3KEZh7120,477", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The comparison between M3KE and other related benchmarks.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples from M3KE. Bolded items represent correct answers. Examples from top to bottom are from Fine Arts, Criminal Jurisprudence, Animal Physiology and Chinese Civil Service Examination task, respectively.", "figure_data": "Arts & Humanities Social Sciences Natural Sciences OtherTasks1221317Q Numbers3,6126,2228,1622,126Avg.Q Numbers301296263303Max.Q Numbers352374347425Min.Q Numbers190190100129Avg.Q Tokens30.3338.7538.5433.21Avg.C Tokens53.9230.9944.5752.53Table 3: Overall statistics of M3KE. Q: question. C: answer choices", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Average zero-shot accuracy for each model on the four subject clusters.", "figure_data": "ModelsArts & Humanities Social Sciences Natural Sciences Other AverageGLM-335M0.0700.0460.0840.0440.062BLOOM-7.1B0.1630.1590.1610.1580.161GLM-10B0.1800.2290.2190.1500.197GLM-130B0.3260.3520.2740.3590.328ChatGLM-6B0.2460.2670.1680.2630.236MOSS-SFT-16B0.2600.2630.2070.2750.251BELLE-7B-0.2M0.2470.2960.2600.2600.266BELLE-7B-2M0.3280.3670.2820.3550.333GPT-3.5-turbo0.4600.5380.4440.4810.481ModelsArts & Humanities Social Sciences Natural Sciences Other AverageGLM-335M0.2200.2470.1930.1260.196BLOOM-7.1B0.2470.2600.2350.2460.247GLM-10B0.2940.3040.2320.2110.260GLM-130B0.2970.3290.2460.2280.275ChatGLM-6B0.1880.1750.1210.1980.171MOSS-SFT-16B0.2660.2640.2580.2840.268BELLE-7B-0.2M0.2920.3270.2730.3070.299BELLE-7B-2M0.2870.3090.2840.3130.298GPT-3.5-turbo0.4530.5400.4640.4760.483", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Average five-shot accuracy for each model on the four subject clusters.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Average zero-shot accuracy for each model on five major education levels.", "figure_data": "5-turbo", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Average five-shot accuracy for each model on five major education levels.", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Summary of all 71 tasks.", "figure_data": "Natural SciencesCollege", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Chuang Liu; Renren Jin; Yuqi Ren; Linhao Yu; Tianyu Dong; Xiaohan Peng; Shuting Zhang; Jianxiang Peng; Peiyi Zhang; Qingqing Lyu; Xiaowen Su; Qun Liu; Deyi Xiong
[ { "authors": "Sam Altman", "journal": "Ope-nAI Blog", "ref_id": "b0", "title": "Planning for agi and beyond", "year": "2023" }, { "authors": "H Stephen; Victor Bach; Zheng Xin Sanh; Albert Yong; Colin Webson; Raffel; V Nihal; Abheesht Nayak; Taewoon Sharma; M Kim; Saiful; Thibault Bari; Zaid Févry; Manan Alyafeai; Andrea Dey; Zhiqing Santilli; Srulik Sun; Canwen Ben-David; Gunjan Xu; Han Chhablani; Jason Wang; Alan Fries; Maged Saeed Alshaibani; Shanya Sharma; Urmish Thakker; Khalid Almubarak; Xiangru Tang; Dragomir R Radev; Mike Tian-Jian; Alexander M Jiang; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Promptsource: An integrated development environment and repository for natural language prompts", "year": "2022" }, { "authors": "Randall Balestriero; Mark Ibrahim; Vlad Sobal; Ari Morcos; Shashank Shekhar; Tom Goldstein; Florian Bordes; Adrien Bardes; Gregoire Mialon; Yuandong Tian", "journal": "", "ref_id": "b2", "title": "A cookbook of self-supervised learning", "year": "2023" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mc-Candlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020-12-06" }, { "authors": "Varun S'ebastien Bubeck; Ronen Chandrasekaran; Johannes Eldan; Eric Gehrke; Ece Horvitz; Peter Kamar; Yin Lee; Yuanzhi Tat Lee; Scott Li; Harsha Lundberg; Hamid Nori; Marco Palangi; Yi Tulio Ribeiro; Zhang", "journal": "", "ref_id": "b4", "title": "Sparks of artificial general intelligence: Early experiments with gpt-4", "year": "2023" }, { "authors": "Yihan Cao; Siyu Li; Yixin Liu; Zhiling Yan; Yutong Dai; Philip S Yu; Lichao Sun", "journal": "", "ref_id": "b5", "title": "A comprehensive survey of ai-generated content (aigc): A history of generative ai from gan to chatgpt", "year": "2023" }, { "authors": "Zhihong Chen; Feng Jiang; Junying Chen; Tiannan Wang; Fei Yu; Guiming Chen; Hongbo Zhang; Juhao Liang; Chen Zhang; Zhiyi Zhang", "journal": "", "ref_id": "b6", "title": "Phoenix: Democratizing chatgpt across languages", "year": "2023" }, { "authors": "Paul F Christiano; Jan Leike; Tom B Brown; Miljan Martic; Shane Legg; Dario Amodei", "journal": "", "ref_id": "b7", "title": "Deep reinforcement learning from human preferences", "year": "2017-09" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma; Albert Webson; Shane Shixiang; Zhuyun Gu; Mirac Dai; Xinyun Suzgun; Aakanksha Chen; Sharan Chowdhery; Gaurav Narang; Adams Mishra; Vincent Y Yu; Yanping Zhao; Andrew M Huang; Hongkun Dai; Slav Yu; Ed H Petrov; Jeff Chi; Jacob Dean; Adam Devlin; Denny Roberts; Quoc V Zhou; Jason Le; Wei", "journal": "", "ref_id": "b8", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Qingxiu Dong; Lei Li; Damai Dai; Ce Zheng; Zhiyong Wu; Baobao Chang; Xu Sun; Jingjing Xu; Lei Li; Zhifang Sui", "journal": "", "ref_id": "b9", "title": "A survey for in-context learning", "year": "2023" }, { "authors": "Nan Du; Yanping Huang; Andrew M Dai; Simon Tong; Dmitry Lepikhin; Yuanzhong Xu; Maxim Krikun; Yanqi Zhou; Adams Wei Yu; Orhan Firat; Barret Zoph; Liam Fedus; Maarten P Bosma; Zongwei Zhou; Tao Wang; Yu Emma Wang; Kellie Webster; Marie Pellat; Kevin Robinson; Kathleen S Meier-Hellstern; Toju Duke; Lucas Dixon; Kun Zhang; Quoc V Le; Yonghui Wu; Zhifeng Chen; Claire Cui; ; ", "journal": "", "ref_id": "b10", "title": "Glam: Efficient scaling of language models with mixture-of-experts", "year": "2022-07" }, { "authors": "Zhengxiao Du; Yujie Qian; Xiao Liu; Ming Ding; Jiezhong Qiu; Zhilin Yang; Jie Tang", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "GLM: General language model pretraining with autoregressive blank infilling", "year": "2022" }, { "authors": "Aaron Gokaslan; Vanya Cohen; Ellie Pavlick; Stefanie Tellex", "journal": "", "ref_id": "b12", "title": "Openwebtext corpus", "year": "2019" }, { "authors": "Zhouhong Gu; Xiaoxuan Zhu; Haoning Ye; Lin Zhang; Zhuozhi Xiong; Zihan Li; Qianyu He; Sihang Jiang; Hongwei Feng; Yanghua Xiao", "journal": "", "ref_id": "b13", "title": "Domain mastery benchmark: An ever-updating benchmark for evaluating holistic domain knowledge of large language model-a preliminary release", "year": "2023" }, { "authors": "Dan Hendrycks; Collin Burns; Steven Basart; Andy Zou; Mantas Mazeika; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b14", "title": "Measuring massive multitask language understanding", "year": "2021" }, { "authors": "Jordan Hoffmann; Sebastian Borgeaud; Arthur Mensch; Elena Buchatskaya; Trevor Cai; Eliza Rutherford; Diego De Las; Lisa Anne Casas; Johannes Hendricks; Aidan Welbl; Tom Clark; Eric Hennigan; Katie Noland; George Millican; Bogdan Van Den Driessche; Aurelia Damoc; Simon Guy; Karen Osindero; Erich Simonyan; Jack W Elsen; Oriol Rae; Laurent Vinyals; Sifre", "journal": "", "ref_id": "b15", "title": "Training compute-optimal large language models", "year": "2022" }, { "authors": "Shaohan Huang; Li Dong; Wenhui Wang; Yaru Hao; Saksham Singhal; Shuming Ma; Tengchao Lv; Lei Cui; Owais Khan Mohammed; Barun Patra; Qiang Liu; Kriti Aggarwal; Zewen Chi; Johan Bjorck; Vishrav Chaudhary; Subhojit Som; Xia Song; Furu Wei", "journal": "", "ref_id": "b16", "title": "Language is not all you need: Aligning perception with language models", "year": "2023" }, { "authors": "Tushar Khot; Peter Clark; Michal Guerquin; Peter Jansen; Ashish Sabharwal", "journal": "", "ref_id": "b17", "title": "QASC: A dataset for question answering via sentence composition", "year": "2020-02-07" }, { "authors": "Xiaodong Liu; Pengcheng He; Weizhu Chen; Jianfeng Gao; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Multi-task deep neural networks for natural language understanding", "year": "2019" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b19", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Niklas Muennighoff; Thomas Wang; Lintang Sutawika; Adam Roberts; Stella Biderman; Teven Le Scao; M Saiful; Sheng Bari; Zheng Xin Shen; Hailey Yong; Xiangru Schoelkopf; Dragomir Tang; Alham Radev; Khalid Fikri Aji; Samuel Almubarak; Zaid Albanie; Albert Alyafeai; Edward Webson; Colin Raff; Raffel", "journal": "OpenAI", "ref_id": "b20", "title": "Crosslingual generalization through multitask finetuning", "year": "2022" }, { "authors": "Long Ouyang; Jeff Wu; Xu Jiang; Diogo Almeida; Carroll L Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray; John Schulman; Jacob Hilton; Fraser Kelton; Luke Miller; Maddie Simens; Amanda Askell; Peter Welinder; Paul F Christiano; Jan Leike; Ryan Lowe", "journal": "", "ref_id": "b21", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Denis Paperno; Germán Kruszewski; Angeliki Lazaridou; Ngoc Quan; Raffaella Pham; Sandro Bernardi; Marco Pezzelle; Gemma Baroni; Raquel Boleda; Fernández", "journal": "", "ref_id": "b22", "title": "The LAMBADA dataset: Word prediction requiring a broad discourse context", "year": "2016" }, { "authors": "Baolin Peng; Chunyuan Li; Pengcheng He; Michel Galley; Jianfeng Gao", "journal": "", "ref_id": "b23", "title": "Instruction tuning with gpt-4", "year": "2023" }, { "authors": "Jack W Rae; Sebastian Borgeaud; Trevor Cai; Katie Millican; Jordan Hoffmann; H Francis Song; John Aslanides; Sarah Henderson; Roman Ring; Susannah Young; Eliza Rutherford; Tom Hennigan; Jacob Menick; Albin Cassirer; Richard Powell; George Van Den Driessche; Lisa Anne Hendricks; Maribeth Rauh; Po-Sen Huang; Amelia Glaese; Johannes Welbl; Sumanth Dathathri; Saffron Huang; Jonathan Uesato; John Mellor; Irina Higgins; Antonia Creswell; Nat Mcaleese; Amy Wu; Erich Elsen; M Siddhant; Elena Jayakumar; David Buchatskaya; Esme Budden; Karen Sutherland; Michela Simonyan; Laurent Paganini; Lena Sifre; Martens; Lorraine Xiang; Adhiguna Li; Aida Kuncoro; Elena Nematzadeh; Domenic Gribovskaya; Angeliki Donato; Arthur Lazaridou; Jean-Baptiste Mensch; Maria Lespiau; Nikolai Tsimpoukelli; Doug Grigorev; Thibault Fritz; Mantas Sottiaux; Toby Pajarskas; Zhitao Pohlen; Daniel Gong; Cyprien Toyama; Yujia De Masson D'autume; Tayfun Li; Vladimir Terzi; Igor Mikulik; Aidan Babuschkin; Diego Clark; De Las; Aurelia Casas; Chris Guy; James Jones; Matthew J Bradbury; Blake A Johnson; Laura Hechtman; Iason Weidinger; William S Gabriel; Edward Isaac; Simon Lockhart; Laura Osindero; Chris Rimell; Oriol Dyer; Kareem Vinyals; Jeff Ayoub; Lorrayne Stanway; Demis Bennett; Koray Hassabis; Geoffrey Kavukcuoglu; Irving", "journal": "", "ref_id": "b24", "title": "Scaling language models: Methods, analysis & insights from training gopher", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b25", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "", "ref_id": "b26", "title": "Squad: 100, 000+ questions for machine comprehension of text", "year": "2016-11-01" }, { "authors": "Xiaozhe Ren; Pingyi Zhou; Xinfan Meng; Xinjing Huang; Yadao Wang; Weichao Wang; Pengfei Li; Xiaoda Zhang; Alexander Podolskiy; Grigory Arshinov; Andrey Bout; Irina Piontkovskaya; Jiansheng Wei; Xin Jiang; Teng Su; Qun Liu; Jun Yao", "journal": "", "ref_id": "b27", "title": "Pangu-Σ: Towards trillion parameter language model with sparse heterogeneous computing", "year": "2023" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; V Nihal; Debajyoti Nayak; Jonathan Datta; Mike Chang; Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Févry; Alan Fries; Ryan Teehan; Le Teven; Stella Scao; Leo Biderman; Thomas Gao; Alexander M Wolf; Rush", "journal": "", "ref_id": "b28", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2022-04-25" }, { "authors": "Le Teven; Angela Scao; Christopher Fan; Ellie Akiki; Suzana Pavlick; Daniel Ilic; Roman Hesslow; Alexandra Castagné; François Sasha Luccioni; Matthias Yvon; Jonathan Gallé; Alexander M Tow; Stella Rush; Albert Biderman; Pawan Webson; Thomas Sasanka Ammanamanchi; Benoît Wang; Niklas Sagot; Albert Muennighoff; Olatunji Villanova Del Moral; Rachel Ruwase; Stas Bawden; Angelina Bekman; Iz Mcmillan-Major; Huu Beltagy; Lucile Nguyen; Samson Saulnier; Pedro Ortiz Tan; Victor Suarez; Hugo Sanh; Yacine Laurençon; Julien Jernite; Margaret Launay; Colin Mitchell; Aaron Raffel; Adi Gokaslan; Aitor Simhi; Alham Soroa; Amit Fikri Aji; Anna Alfassy; Ariel Kreisberg Rogers; Canwen Nitzav; Chenghao Xu; Chris Mou; Christopher Emezue; Colin Klamm; Leong; David Daniel Van Strien; Ifeoluwa Adelani", "journal": "", "ref_id": "b29", "title": "BLOOM: A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal; Md Shoeb; Abubakar Abid; Adam Fisch; Adam R Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso; Agnieszka Kluska; Aitor Lewkowycz; Akshat Agarwal; Alethea Power; Alex Ray; Alex Warstadt; Alexander W Kocurek; Ali Safaya; Ali Tazarv; Alice Xiang; Alicia Parrish; Allen Nie; Aman Hussain; Amanda Askell; Amanda Dsouza; Ameet Rahane; Anantharaman S Iyer; Anders Andreassen; Andrea Santilli; Andreas Stuhlmüller; Andrew M Dai; Andrew La; Andrew K Lampinen; Andy Zou; Angela Jiang; Angelica Chen; Anh Vuong; Animesh Gupta; Anna Gottardi; Antonio Norelli; Anu Venkatesh; Arash Gholamidavoodi; Arfa Tabassum; Arul Menezes; Arun Kirubarajan; Asher Mullokandov; Ashish Sabharwal; Austin Herrick; Avia Efrat; Aykut Erdem; Ayla Karakas", "journal": "", "ref_id": "b30", "title": "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models", "year": "2022" }, { "authors": "Nisan Stiennon; Long Ouyang; Jeff Wu; Daniel M Ziegler; Ryan Lowe; Chelsea Voss; Alec Radford; Dario Amodei; Paul F Christiano", "journal": "", "ref_id": "b31", "title": "Learning to summarize from human feedback", "year": "2020" }, { "authors": "Yu Sun; Shuohuan Wang; Shikun Feng; Siyu Ding; Chao Pang; Junyuan Shang; Jiaxiang Liu; Xuyi Chen; Yanbin Zhao; Yuxiang Lu; Weixin Liu; Zhihua Wu; Weibao Gong; Jianzhong Liang; Zhizhou Shang; Peng Sun; Wei Liu; Xuan Ouyang; Dianhai Yu; Hua Hao Tian; Haifeng Wu; Wang", "journal": "", "ref_id": "b32", "title": "ERNIE 3.0: Large-scale knowledge enhanced pretraining for language understanding and generation", "year": "2021" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b33", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurélien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "CoRR", "ref_id": "b34", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Alex Wang; Yada Pruksachatkun; Nikita Nangia; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b35", "title": "Superglue: A stickier benchmark for general-purpose language understanding systems", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "GLUE: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018-11-01" }, { "authors": "Shuohuan Wang; Yu Sun; Yang Xiang; Zhihua Wu; Siyu Ding; Weibao Gong; Shikun Feng; Junyuan Shang; Yanbin Zhao; Chao Pang; Jiaxiang Liu; Xuyi Chen; Yuxiang Lu; Weixin Liu; Xi Wang; Yangfan Bai; Qiuliang Chen; Li Zhao; Shiyong Li; Peng Sun; Dianhai Yu; Yanjun Ma; Hua Hao Tian; Tian Wu; Wei Wu; Ge Zeng; Wen Li; Haifeng Gao; Wang", "journal": "", "ref_id": "b37", "title": "ERNIE 3.0 titan: Exploring larger-scale knowledge enhanced pre-training for language understanding and generation", "year": "2021" }, { "authors": "Yizhong Wang; Yeganeh Kordi; Swaroop Mishra; Alisa Liu; Noah A Smith; Daniel Khashabi; Hannaneh Hajishirzi", "journal": "", "ref_id": "b38", "title": "Self-instruct: Aligning language model with self generated instructions", "year": "2022" }, { "authors": "Yizhong Wang; Swaroop Mishra; Pegah Alipoormolabashi; Yeganeh Kordi; Amirreza Mirzaei; Atharva Naik; Arjun Ashok; Arut Selvan Dhanasekaran; Anjana Arunkumar; David Stap; Eshaan Pathak; Giannis Karamanolakis; Gary Haizhi; Ishan Lai; Ishani Purohit; Jacob Mondal; Kirby Anderson; Krima Kuznia; Kuntal Doshi; Maitreya Kumar Pal; Mehrad Patel; Mihir Moradshahi; Mirali Parmar; Neeraj Purohit; Varshney; Rohitha Phani; Pulkit Kaza; Ravsehaj Verma; Rushang Singh Puri; Savan Karia; Doshi; Keyur Shailaja; Siddhartha Sampat; Sujan Mishra; A Reddy; Sumanta Patro; Tanay Dixit; Xudong Shen", "journal": "", "ref_id": "b39", "title": "Super-naturalinstructions: Generalization via declarative instructions on 1600+ NLP tasks", "year": "2022-12-07" }, { "authors": "Jason Wei; Maarten Bosma; Y Vincent; Kelvin Zhao; Adams Wei Guu; Brian Yu; Nan Lester; Andrew M Du; Quoc V Dai; Le", "journal": "", "ref_id": "b40", "title": "Finetuned language models are zero-shot learners", "year": "2022-04-25" }, { "authors": "Shaohua Wu; Xudong Zhao; Tong Yu; Rongguo Zhang; Chong Shen; Hongli Liu; Feng Li; Hong Zhu; Jiangang Luo; Liang Xu", "journal": "", "ref_id": "b41", "title": "Yuan 1.0: Large-scale pre-trained language model in zero-shot and few-shot learning", "year": "2021" }, { "authors": "Sang Michael Xie; Aditi Raghunathan; Percy Liang; Tengyu Ma", "journal": "", "ref_id": "b42", "title": "An explanation of in-context learning as implicit bayesian inference", "year": "2022-04-25" }, { "authors": "Liang Xu; Hai Hu; Xuanwei Zhang; Lu Li; Chenjie Cao; Yudong Li; Yechen Xu; Kai Sun; Dian Yu; Cong Yu; Yin Tian; Qianqian Dong; Weitang Liu; Bo Shi; Yiming Cui; Junyi Li; Jun Zeng; Rongzhao Wang; Weijian Xie; Yanting Li; Yina Patterson; Zuoyu Tian; Yiwen Zhang; He Zhou; Shaoweihua Liu; Zhe Zhao; Qipeng Zhao; Cong Yue; Xinrui Zhang; Zhengliang Yang; Kyle Richardson; Zhenzhong Lan", "journal": "International Committee on Computational Linguistics", "ref_id": "b43", "title": "CLUE: A chinese language understanding evaluation benchmark", "year": "2020" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b44", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021-06-06" }, { "authors": "Yan Gong; Yiping Peng; Qiang Niu; Baochang Ma Yunjie; Ji ; Yong Deng; Xiangang Li", "journal": "", "ref_id": "b45", "title": "Belle: Be everyone's large language model engine", "year": "2023" }, { "authors": "Rowan Zellers; Ari Holtzman; Hannah Rashkin; Yonatan Bisk; Ali Farhadi; Franziska Roesner; Yejin Choi", "journal": "", "ref_id": "b46", "title": "Defending against neural fake news", "year": "2019-12-08" }, { "authors": "Aohan Zeng; Xiao Liu; Zhengxiao Du; Zihan Wang; Hanyu Lai; Ming Ding; Zhuoyi Yang; Yifan Xu; Wendi Zheng; Xiao Xia; Weng Lam Tam; Zixuan Ma; Yufei Xue; Jidong Zhai; Wenguang Chen; Peng Zhang; Yuxiao Dong; Jie Tang", "journal": "", "ref_id": "b47", "title": "GLM-130B: an open bilingual pre-trained", "year": "2022" }, { "authors": "Hui Zeng", "journal": "", "ref_id": "b48", "title": "Measuring massive multitask chinese understanding", "year": "2023" }, { "authors": "Wei Zeng; Xiaozhe Ren; Teng Su; Hui Wang; Yi Liao; Zhiwei Wang; Xin Jiang; Zhenzhang Yang; Kaisheng Wang; Xiaoda Zhang; Chen Li; Ziyan Gong; Yifan Yao; Xinjing Huang; Jun Wang; Jianfeng Yu; Qi Guo; Yue Yu; Yan Zhang; Jin Wang; Hengtao Tao; Dasen Yan; Zexuan Yi; Fang Peng; Fangqing Jiang; Han Zhang; Lingfeng Deng; Yehong Zhang; Zhe Lin; Chao Zhang; Shaojie Zhang; Mingyue Guo; Shanzhi Gu; Gaojun Fan; Yaowei Wang; Xuefeng Jin; Qun Liu; Yonghong Tian", "journal": "", "ref_id": "b49", "title": "Pangu-α: Large-scale autoregressive pretrained chinese language models with autoparallel computation", "year": "2021" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona T Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b50", "title": "OPT: open pre-trained transformer language models", "year": "2022" }, { "authors": "Zhengyan Zhang; Yuxian Gu; Xu Han; Shengqi Chen; Chaojun Xiao; Zhenbo Sun; Yuan Yao; Fanchao Qi; Jian Guan; Pei Ke; Yanzheng Cai; Guoyang Zeng; Zhixing Tan; Zhiyuan Liu; Minlie Huang; Wentao Han; Yang Liu; Xiaoyan Zhu; Maosong Sun", "journal": "", "ref_id": "b51", "title": "CPM-2: large-scale cost-effective pre-trained language models", "year": "2021" }, { "authors": "Kun Wayne Xin Zhao; Junyi Zhou; Tianyi Li; Xiaolei Tang; Yupeng Wang; Yingqian Hou; Beichen Min; Junjie Zhang; Zican Zhang; Yifan Dong; Chen Du; Yushuo Yang; Zhipeng Chen; Jinhao Chen; Ruiyang Jiang; Yifan Ren; Xinyu Li; Zikang Tang; Peiyu Liu; Jian-Yun Liu; Ji-Rong Nie; Wen", "journal": "", "ref_id": "b52", "title": "A survey of large language models", "year": "2023" }, { "authors": "Wanjun Zhong; Ruixiang Cui; Yiduo Guo; Yaobo Liang; Shuai Lu; Yanlin Wang; Amin Saied; Weizhu Chen; Nan Duan", "journal": "", "ref_id": "b53", "title": "Agieval: A humancentric benchmark for evaluating foundation models", "year": "2023" }, { "authors": "Ce Zhou; Qian Li; Chen Li; Jun Yu; Yixin Liu; Guangjing Wang; Kai Zhang; Cheng Ji; Qiben Yan; Lifang He; Hao Peng; Jianxin Li; Jia Wu; Ziwei Liu; Pengtao Xie; Caiming Xiong; Jian Pei; Philip S Yu; Lichao Sun", "journal": "", "ref_id": "b54", "title": "A comprehensive survey on pretrained foundation models: A history from BERT to chatgpt", "year": "2023" }, { "authors": "Yukun Zhu; Ryan Kiros; Richard S Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "IEEE Computer Society", "ref_id": "b55", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015-12-07" } ]
[]
10.48550/ARXIV.2212.02437
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b26" ], "table_ref": [], "text": "Recent work has shown that large language models (LLMs) exhibit impressive capabilities in performing various natural language generation tasks, even in the zero-shot paradigm. In particular, such models have shown interesting machine translation (MT) capabilities (Brown et al., 2020;Chowdhery et al., 2022;Vilar et al., 2022)-especially when translating into English, despite never having been explicitly and intentionally exposed to translation data in the way their supervised counterparts are. This raises the question: where do these translation capabilities come from?\nWe hypothesize that the translation capabilities of LLMs connect to incidental bilingualism: the unintentional consumption of bilingual text within a single training instance. To test this hypothesis, we take PaLM (Chowdhery et al., 2022)-a 540billion parameter Transformer language model-as a case study. We first conduct a large-scale analysis of its training data in order to characterize the nature and quantity of bilingual text, then perform experiments to assess the impact of this text on translation performance.\nTo measure incidental bilingualism at scale, we develop a processing pipeline that alternates between quantitative and qualitative analysis ( §3): first detect bilingual versus monolingual text using a language tagger, then qualitatively analyze the nature of bilingual text, and finally measure the amount of translation data within bilingual instances. Our analysis spans 44 languages, for which we study bilingualism paired with English. Our findings are:\n• In all, 1.4% of PALM's training instances are detected as bilingual, while 0.34% contain at least one translated sentence pair. We were able to mine such pairs across all languages studied; therefore, none of these languages is truly zero-shot in the context of translation.\n• The number of monolingual instances in a language is predictive of the number of instances containing bilingual or translation content for that language (paired with English).\nAfter establishing that both bilingual and translation content are incidentally consumed during PaLM's training, we study how they connect to its MT capabilities ( §4). We run a series of training and prompting experiments and found that:\n• Prompting the full PaLM model with alternative, data-driven prompts improves outof-English zero-shot translation by 14 chrF points on average across languages, indicating arXiv:2305.10266v1 [cs.CL] 17 May 2023 that its zero-shot translation capabilities were underestimated due to sub-optimal prompts.\n• Ablating detected translation pairs with smaller versions of PaLM has a dramatic effect on the translation capabilities of 1Bparameter models for high-resource languages, reducing average into-English zeroshot results by 7.4 BLEU and 5-shot results by 5.9 BLEU. The effect falls off but remains notable (+2-3 BLEU across several conditions) as we scale to 8B-parameter models." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b4", "b10", "b17", "b20", "b26", "b0", "b18", "b8", "b1", "b14", "b6", "b22", "b19" ], "table_ref": [], "text": "Translation Capabilities of LLMs Large-scale generative language models, such as GPT-3 (Brown et al., 2020), PaLM (Chowdhery et al., 2022), and XGLM (Lin et al., 2021) have been shown to exhibit translation capabilities, despite not being explicitly trained to translate. These capabilities are surprisingly strong, particularly when translating into English with few-shot examples. One explanation for this behavior is that it results from incidental multitask learning (Radford et al., 2018;Sanh et al., 2021). This hypothesis has not been explored for MT, where recent work has mostly focused on improving LLM translation capabilities by optimizing few-shot prompting strategies (Vilar et al., 2022;Agrawal et al., 2022). Rather than trying to improve translation quality for LLMs, our goal is to understand where their translation abilities stem from by tracing them back to the properties of the pretraining data.\nLarge-Scale Data Analysis LLMs rely on massive amounts of unlabeled corpora for training. These corpora are primarily acquired by combining heterogeneous online resources (e.g., Wikipedia, Web forums, Common Crawl, etc.)-whose properties are usually unknown. Recent work on largescale analysis has shed some light: Dodge et al.\n(2021) analyze C4 (Raffel et al., 2019)-a dataset created from a snapshot of Common Crawl-and show that it contains machine generated texts as well as evaluation samples from commonly used NLP benchmarks; Kreutzer et al. (2022) manually audit the quality of multilingual datasets and find systematic quality issues amongst popular pretraining datasets. Most related to our work, Blevins and Zettlemoyer (2022) show that popular corpora routinely used for training English-only LLMs contain a non-negligible amount of non-English text, which helps explain their cross-lingual capabilities.\nTheir manual analysis of corpus subsamples covers several bilingual categories, including a translation category. But where analysis of bilingualism is a side result of their work, it is our primary contribution. We extend their work by proposing automatic tools to quantify bilingualism at scale and directly relate it to LLM translation performance.\nEliciting Knowledge from LLMs Prompting language models to elicit knowledge acquired during pre-training has received a lot of research interest. Petroni et al. (2019) show that LLMs can recall factual knowledge by answering queries structured as cloze statements. Jiang et al. (2020) further show that query-based prompts outperform manually created cloze statements, suggesting that the latter provide a lower bound estimate on the actual abilities of LLMs. Follow-up work confirms those findings by suggesting better prompts with automatic generation methods (Shin et al., 2020) or prompt engineering (Reynolds and McDonell, 2021). We similarly explore how to extract translation knowledge from LLMs using data-driven prompts." }, { "figure_ref": [ "fig_0" ], "heading": "Measuring & Understanding Incidental Bilingualism", "publication_ref": [ "b23", "b5" ], "table_ref": [], "text": "We introduce a mixed-method approach (Creswell and Clark, 2017; Shorten and Smith, 2017) to measure and understand incidental bilingualism-the unintentional consumption of bilingual signalsat scale. Since we expect bilingual signals to be rare, we explore the huge data space by alternating between quantitative and qualitative steps, with results from each step complementing and informing one another (Figure 1). The quantitative steps play the role of inducing a smaller-scale focus space to study, while the qualitative steps provide insights into the nature of bilingual signals.\nPreliminaries PaLM's pretraining dataset consists of 780 billion tokens from a mixture of multilingual sources (social media conversations (50%), filtered webpages (27%), and Wikipedia (4%)), presumably English sources (books (13%) and news articles (1%)), and source code (5%). PaLM was trained on 2,048-subword-token examples formed by concatenating and truncating documents. As PaLM is a multi-source LM, a document may be a web page, a book, or a conversation, depending on the source. Our primary units for data analysis are instances we created by splitting training examples along document boundaries. As such, each instance is either a complete document or a contiguous fragment of one, up to 2,048 tokens in length. A more detailed discussion of instances is given in Appendix A.\nWe study bilingualism between English and 44 other languages. We choose language pairs that: a) are supported by our language identification models, and b) have FLORES-101 (Goyal et al., 2022) evaluation data. We divide languages into high, medium, and low-resource groups according to their monolingual instance counts, as shown below: " }, { "figure_ref": [ "fig_1" ], "heading": "Detecting Bilingual Instances", "publication_ref": [ "b27" ], "table_ref": [ "tab_8" ], "text": "Our first goal is to automatically detect all training instances that contain bilingual text without presupposing a specific granularity for bilingualism. To that end, we use CMX (Zhang et al., 2018)-a language identification model for codemixed texts-to produce a sequence of token-level language tags for each training instance. An instance is labeled as bilingual if it contains at least two contiguous segments in different languages, each consisting of at least N consecutive identical language tags. Instances with more than two languages are interpreted as bilingual, as discussed in Appendix B. One of the two languages must always be English, both to simplify our analysis and to work within the limits of the CMX tool.\nFindings Figure 2 presents the per-language monolingual and bilingual instance counts. We include raw counts per language in Table 7. We observe that across the languages studied, PaLM consumes bilingual instances that, in total, account for 1.4% of its training instances." }, { "figure_ref": [ "fig_2" ], "heading": "Characterizing Bilingual Instances", "publication_ref": [], "table_ref": [ "tab_9" ], "text": "Next, we turn to understanding the nature of bilingual instances detected by the above procedure.\nTo make manual analysis easier, we used the KnowYourData tool1 to highlight spans of the less frequent language in each bilingual instance.\nFindings Our qualitative analysis of a sample of 100 English-French bilingual instances reveals that bilingualism manifests in various cross-lingual phenomena (examples of bilingual instances are presented in Table 8 of Appendix E). Our detection approach is reasonably accurate: only 5% of instances correspond to errors mostly attributed to language identification issues (i.e., the detected instances are indeed bilingual, but at least one of the two languages is not English or French). Each correctly detected bilingual instance is annotated as belonging to one of five categories, with the typology shown in Figure 3. Most bilingual instances (55%) fall under the broader class of \"Not Translations\" and cover cases where the two languages encode information that does not correspond to translation content. This class is further decomposed into three sub-classes. First, we found a few instances (10%) of codeswitching where one or two speakers alternate between two languages in the context of a single conversation. As expected, most code-switching instances were spotted in social media conversations, as it is primarily used within multilingual communities in informal communication. Second, we observed that many bilingual instances (21%) are attributed to references, where named entities or bibliography entries are cited in their native language, such as instances drawn from Wikipedia. Third, we also found a considerable number of bilingual instances (24%) that include completely unrelated content in the two languages that just happened to co-exist within the same web page.\nThe remaining bilingual instances are evenly distributed (20%) across two categories that fall loosely under the rubric of \"Translations\". Here, we distinguish between cases where some amount of the text expresses a typical translation relation and cases where content across languages is semantically related, but not exactly by translation. The latter involves a rich spectrum of cross-lingual semantic relations, including cross-lingual entailment, summarization, and paraphrasing, mainly noticed within books in the genre of literary criticism and interpretation. We also spotted a few cases of forum discussions around explanations of translation or stylistic manipulation of translations." }, { "figure_ref": [ "fig_3", "fig_5" ], "heading": "Detecting Translation Pairs", "publication_ref": [ "b12", "b11" ], "table_ref": [], "text": "Our manual analysis exposed an opportunity to automatically extract and count translated sentence pairs (translation pairs for short). We cast the problem of within-instance translation detection as a local mining task following recent advances in parallel text acquisition. Concretely, for each bilingual instance from §3.1, we run a sentence breaker and extract two pools of candidate sentences x and y in the two languages. The language of each sentence is inferred by majority voting over token-level language tags. Whichever language has fewer sentences is labeled the embedded language and the other becomes the primary. Each candidate sentence is then encoded to a vector representation using the LABSE (Feng et al., 2022) cross-lingual sentence encoder. Translation pairs are extracted by finding the most similar primary sentence for each embedded sentence and then checking whether the cosine distance of their representations falls below a threshold. We choose a threshold of 0.6 on the cosine distance to mine plausible translation pairs, following Feng et al. (2022). We also apply a series of length-and-language-based heuristic data quality filters, adapted from Alibaba's WMT Data Filtering submissions (Lu et al., 2018(Lu et al., , 2020)), described in Appendix C.\nNote that this extraction process is oblivious to document structure: the instance may be formatted as parallel sentences, paragraphs, documents, or as a free-form discussion that happens to mention both a sentence and its translation. Our extraction is also incapable of detecting translation relations below the sentence level. If we can extract at least one translation pair from an instance, then we label it as a translation instance.\nFindings We find that 0.34% of PaLM's training instances contain at least one translation pair. Note that this number provides a lower bound on the amount of incidental bilingualism and translation that PaLM consumes, as we are restricted to a specific set of language pairs, and we only study bilingualism with English. Figure 4 presents the number of translation pairs we mined within PaLM's training instances between English and each language. At a minimum, PaLM consumes thousands of parallel texts for all language pairs studied, while for high-resource languages it sees more than a million translation pairs. Furthermore, we investigate the correlation between the number of monolingual instances in each language and their bilingual and translation counterparts. Our results in Figure 5 indicate that, surprisingly, the monolingual counts in each language correlate strongly with the bilingual (r=0.944) and translation (r=0.938) counts. This strong correlation implies that, when working at scale, we can predict the bilingual and translation sizes for a given language (within an error rate) by simply counting monolingual instances." }, { "figure_ref": [ "fig_6" ], "heading": "Discovering Natural Prompts", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "After identifying a smaller-scale set consisting of training instances that contain translation pairs, we further manually inspect them to understand how the translation task is naturally modeled by PaLM. We find that sentence-level translations are presented within a training instance in three ways. The majority of them appear across paragraphs and do not follow a canonical pattern. Among the remainder, we noticed two canonical patterns: translation pairs that belong to stacked translated paragraphs (e.g., {x 1 , x 2 , y 1 , y 2 }) and interleaved translations where a sentence and each translation are adjacent to each other (e.g., {x 1 , y 1 , x 2 , y 2 }). Among the latter, we saw an opportunity to extract natural prompts automatically. We do so by analyzing the prefixes of the translation pairs mined in §3.3. Drawing on our manual observations, we mine the most frequent prefixes per language pair that follow a simple colon prompt format: any sequence of non-whitespace characters followed by a colon. Finally, we manually filter the automatically mined prefix lists to look for consistent natural prompt patterns across languages.\nFindings Table 1 presents the results of our prompt discovery module followed by manual filtering to extract plausible translation prefixes. First, we found empirically that one of the most frequent translation prompts that naturally arises in the data is the default prompt adopted by most MT research with LLMs: source and target language names in English followed by a colon (e.g., \"French:\").\nWe also found three alternative prompts that are frequently presented within incidental translation pairs: i) code: source and target ISO language codes (e.g., \"FR:\"), ii) native: source and target language names in their respective languages (e.g., Table 2: Comparison of prompt selection on FLORES devtest, for zero-and few (5)-shot prompting. QUAL. corresponds to translation quality (chrF for EN→XX, BLEU for XX→EN), LANG.% represents PaLM's sentence-level accuracy in producing text in the correct target language, and δ gives the translation quality difference from the \"Default\" prompt. Native data-driven prompts improve zero-shot, out-of-English (EN→XX) translation quality largely by guiding PaLM to generate text in the correct target language.\n\"Français:\"), iii) translation: source language in English, and the word \"translation\" in the target language (e.g., \"Traduction:\"). Interestingly, prompt types are not evenly distributed across our language groups: language codes appear primarily with highresource languages, while low-resource languages favor prompts written in their native language. We include a complete list of prompt counts per language in Figure 6 of Appendix E." }, { "figure_ref": [], "heading": "Analyzing the Impact of Bilingualism", "publication_ref": [ "b5", "b13", "b15", "b16" ], "table_ref": [], "text": "We analyze the impact of bilingualism on the translation capabilities of PaLM with a series of MT experiments on the FLORES-101 (Goyal et al., 2022) evaluation set, which provides translations of a common set of English Wikipedia sentences into all of our 44 languages. We report results on the 1,012 sentence devtest set. We use the 997 sentence dev set primarily as a source of randomlydrawn exemplars when reporting 5-shot results. We report BLEU (Papineni et al., 2002) for into-English translation and chrF (Popović, 2015) for out-of-English translation, both computed by Sacrebleu (Post, 2018) with default settings. For LLMbased translation, we follow the template from Vilar et al. ( 2022) unless stated otherwise:\n[source]: [X]\n[target]:\nwhere [source], and [target] are the source and target language names (in English) and [X] is the source text. When present, few-shot exemplars are provided above the template in the same format, as detailed in Appendix D." }, { "figure_ref": [], "heading": "Prompting PaLM with Natural Prompts", "publication_ref": [ "b3" ], "table_ref": [ "tab_10" ], "text": "We prompt the original 540B parameter PaLM model with templates that use naturally-occurring prefixes of incidental translations, as discussed in §3.4. In our template, we replace [source] and [target] with each alternative, data-driven prompt. We experiment with zero-shot and 5-shot prompting.\nFindings Table 2 presents average translation quality results for different prompts across high, medium, and low resource settings. We present the complete, per language results in Table 9 of Appendix E. When translating into English (XX→EN), the default prompt yields the best results, while alternative prompts result in a small degradation in quality; overall, translating into English seems to be robust across different prompts supported by our data. On the other hand, PaLM's translation quality is surprisingly sensitive to the choice of prompt when translating out of English (EN→XX): simply changing the default prompt to its native variant improves quality by 14 chrF points, with most of the improvement reported in medium and low-resource languages. The \"translation\" prompt also yields consistent improvements over the default. Finally, prompting with language codes only improves translation out of English for the highresource group-this is expected as this prompt was only present for a few high-resource languages. Further analysis of out-of-English results reveals that native prompts trigger text in the desired language, while the default prompt results in high rates of generating the wrong target language (see gray percentages in Table 2). The output's target language is determined by a sequence-level languageidentification tool (Botha et al., 2017). Finally, although choosing natural prompts that arise from the data can help us better understand PaLM's zero-shot capabilities, large differences between prompts do not carry over to the few-shot setting (right-most columns of Table 2)." }, { "figure_ref": [], "heading": "Extrinsic Evaluation of Translation Pairs", "publication_ref": [ "b7", "b2", "b24", "b9" ], "table_ref": [ "tab_3", "tab_3" ], "text": "It is one thing to report counts of translation pairs mined from bilingual instances, but is the resulting bitext of high quality? We adopt the parallel text quality evaluation framework of the WMT Shared Task on Parallel Corpus Filtering and Alignment (Koehn et al., 2020) and train supervised neural machine translation models from scratch on the mined translations. This allows us to jointly assess the quality of PaLM's translation content and our extraction heuristics. We focus this analysis on FR→EN, PaLM's highest-resource language pair.\nData For PaLM translation pairs, we explore a number of thresholds on the LABSE distance. To put our results in perspective, we additionally train a model on all pairs from the WMT14 FR→EN task (Bojar et al., 2014) and on random samples thereof to establish fair data comparison points at notable LABSE thresholds. Sentence counts for all conditions are shown in Table 3.\nArchitecture We adopt the 6-layer encoderdecoder Transformer Base (Vaswani et al., 2017) architecture, with minimal hyper-parameter tuning. Shared sentence piece (Kudo and Richardson, 2018) vocabularies with 32K tokens are constructed from bitext for each scenario. Dropout is set to 0.3 for all systems except for the full WMT system, which uses 0.1. Systems are trained up to 450K steps with a batch size of 1,024. Checkpoints are selected by FLORES dev BLEU.\nFindings Table 3 presents the results of our analysis. In general, the mined translation pairs from our analysis pipeline provide useful signal for training supervised MT systems with reasonable translation quality (i.e., 37 to 38 BLEU across various thresholds, compared to 41 that we achieve using 40M translations from available WMT parallel corpora). Moreover, these results confirm that 0.6 seems to be the right threshold for detecting translation pairs that are useful, or at least not harmful in the presence of other positive signals (i.e., at 0.6 we are within 1 BLEU point of a system trained on the same amounts of WMT parallel text)." }, { "figure_ref": [], "heading": "Ablating Incidental Bilingualism", "publication_ref": [ "b9", "b21", "b1" ], "table_ref": [ "tab_5", "tab_5", "tab_4", "tab_11", "tab_11" ], "text": "We now explore the impact of bilingualism on the translation capabilities of PaLM. to measure the effect of removing various types of multilingual data.\nArchitecture Our 1B and 8B models are scaleddown versions of PaLM with small changes. Like PaLM, each is a decoder-only model trained with a causal language modeling objective, using a dense transformer architecture and a sentence piece tokenizer (Kudo and Richardson, 2018) that retains spacing information. Unlike PaLM, we do not share key and value tensors across attention heads (Shazeer, 2019), which should affect only decoding speed. We include a hyper-parameter summary in level proportions reported earlier, as these count examples, which are merged instances. Also, they will not match the multilinguality proportions reported by Chowdhery et al. ( 2022), as we have removed non-natural-language (code) data and any non-English text not in our 44-language set. We can now sample examples from our partitions to create a smaller training set with the same proportions of incidental bilingualism. No attempt is made to retain PaLM's original proportions for other aspects like data source or language. Counts for this sample are shown as FULL in Table 5.\nWe ablate each group in the following order: TRA, BIL and then NEN. At each step, we replace ablated examples with examples from the next group in the chain. The counts for all ablation conditions are shown in Table 5. The -NEN setting corresponds to the English-only setting studied by Blevins and Zettlemoyer (2022), but as they show, this will contain some non-English content due to language-identification errors. Analogous provisos exist for each ablation, as all our automatic tools make errors. We aim to measure the effect of removing most of a type of content, not all of it.\nFindings Table 4 presents the results of our ablation-the complete, per language, results are in Table 10 of Appendix E. Focusing on our 1B model, we note that examples containing translation pairs (TRA) have an outsized impact on translation quality for being only 0.5% of the training data. In the high-resource XX→EN, zero-shot scenario, replac-ing TRA examples with BIL results in a drop of 7.4 BLEU. With TRA removed, the additional impact of removing the remaining bilingual instances (BIL) is much smaller: 1.2 BLEU. One might expect the utility of translation data to fall off as we add 5-shot examples at inference time, but TRA is still quite important, with its removal resulting in a reduction of 5.9 BLEU. The importance of TRA holds throughout our 1B experiments, to the extent that the system cannot translate at all, i.e. for 5-shot versions of XX→EN MEDIUM and EN→XX HIGH.\nTurning to our 8B model, we see that translation content continues to have a substantial impact on translation quality, though the absolute score differences have diminished, hovering between 2-3 BLEU or 3-4 chrF, depending on the scenario. This result, where a 4x increase in parameters leads to a roughly 2x reduction in the absolute impact of TRA suggests that it would be interesting to build scaling laws to study the impact of incidental translation data, which we leave to future work. Also, for 5-shot scenarios, there is no longer such a big difference between the impact of BIL and TRA data. Given exemplars, the larger model seems to be able to make better use of weaker bilingual signals.\nSurprisingly, the 8B model that does not have access to multilingual content (-NEN), exhibits some translation capabilities for XX→EN HIGH (i.e., 17.3 and 25.9 BLEU for zero-and few-shot, respectively). A closer look at the per-language breakdown (see Table 10) reveals that those capabilities are restricted to languages written in Latin script. This adds evidence for larger models being better equipped to leverage either sparse signals (i.e., language-identification failures during ablation) and weak signals (i.e., language similarities from shared scripts). As expected, non-English content is critical for translation out of English.\nWe explore the role of incidental bilingualism-the unintentional consumption of bilingual signalsin PaLM's translation capabilities. We introduce a mixed-method approach that alternates between quantitative and qualitative analyses to measure and understand incidental bilingualism at scale by processing 780 billion tokens. Our work shows that PaLM consumes a significant amount of bilingual text: 1.4% of training instances in natural language are bilingual. At the same time, it is naturally exposed to translation signals, having seen more than 30 million translation pairs in 44 languages paired with English. Furthermore, we extrinsically evaluate the quality of these translations, showing that they can be used to train supervised models that roughly match the quality of equal amounts of WMT data. Finally, we show that incidental bilingualism connects to the machine translation capabilities of PaLM. First, we show that data-driven prompts extracted from incidental translations can improve the zero-shot abilities of PaLM when translating out of English by 14 chrF on average. Second, we provide empirical evidence that bilingual and translation signals can partially explain the translation capabilities of smaller-scale LLMs." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Our findings should be interpreted considering a series of problem definitions and design choices. First, our quantitative results on measuring incidental bilingualism at scale are subject to language identification, sentence splitting, and mining errors. Our qualitative analysis for the English-French language pair revealed that those errors are reasonably small (see §3.2). However, we expect the accuracy of our tools to vary across languages and, crucially, exhibit unanticipated failure modes on web text and low-resource languages (Caswell et al., 2020). Second, our findings are restricted to quantifying bilingualism and translations within a limited set of language pairs and only paired with English. Thus, by problem definition, we are limited to computing a lower-bound estimate on incidental bilingualism of PaLM. The above limitations should also be taken into consideration when interpreting our ablation results. Although we attempted to remove most bilingual signals in our series of MT experiments, it is still possible that bilingualism slips through due to either model errors or due to bilin-gual signals beyond our focus set of languages. Finally, any results and findings of our work are restricted to PaLM; the single LLM studied in this work. However, our finer-grained analysis (see Table 11 of Appendix E) reveals that incidental bilingualism, including translation signals, is observed across various data sources (e.g., webpages, books, etc.) that are commonly included in the training data of other popular LLMs." }, { "figure_ref": [], "heading": "A Units of Analysis of Training text", "publication_ref": [], "table_ref": [], "text": "Throughout this paper we have adopted special meanings for the common (often interchangeable) terms document, example and instance. Here we make those terms concrete and justify our use of the instance as our primary unit of analysis. Document A document is a logical unit of text from one of our source corpora: a web page or wiki page from a web-crawled corpus, a conversation from a chat or forum corpus, or a book from a books corpus. Example Each PaLM training example is exactly 2,048 subword tokens. These are assembled by concatenating and/or splitting documents to the appropriate length. As such, an example may contain several short documents, and a long document may be spread over several examples. Multiple documents concatenated into a single example are separated by special document-boundary tokens. The relevant features of examples that make them more useful for analysis than documents are:\n• We know exactly which examples PaLM saw during training.\n• Examples reflect when co-occurring textual information (for example, a translation pair) was lost due to a document being split into multiple examples.\nHowever, examples can also introduce spurious co-occurrences from merged documents. We assume that a language model can and will ignore any merge-induced co-occurrences due to the presence of document separator tokens; therefore, we should ignore them as well. This leads us to our next and final unit.\nInstance Instances are created by splitting examples according to document-separator tokens. Therefore, each instance is either a complete document or a fragment of a single document, and is up to 2,048 tokens in length. Instances have all of the advantages of examples, without introducing spurious co-occurrences, hence why they are our primary unit of analysis." }, { "figure_ref": [], "heading": "B Bilingual Detection Pipeline Details", "publication_ref": [ "b27" ], "table_ref": [], "text": "CodeMixer Model Details We use the CMX (CodeMixer) model (Zhang et al., 2018)-a tokenlevel language identification model, to detect bilingual instances. CMX is a simple feed-forward model that takes as input a set of character and word-level features and produces a distribution over a set of languages for each token. The entire sequence of language tags is obtained using constrained decoding over a pre-defined set of permitted languages. The model is trained on a combination of synthetic and real-world translation data (both monolingual and code-mixed with English) for 100 languages. Note that CMX predicts codemixing between a pair of languages, as a result, it does not reliably predict language tags for multilingual instances involving more than two languages. For example, if an instance actually contains English, French, and German text, with German being the least frequent, it will be tagged as containing only English and French; all German words will be mislabeled as one of the other two languages or as \"undefined.\"" }, { "figure_ref": [], "heading": "Algorithmic Description of Bilingual Detection", "publication_ref": [], "table_ref": [], "text": "Given a training instance t = {t i } n i=1 , a focus set L of the 44 studied languages, and a threshold N , we detect bilingual instances based on the following steps: (i) We start by extracting a sequence of language tags, using the CMX model. (ii) We mark the most frequent language as the primary language, and the other (if exists) as the embedded. (iii) If the primary and the embedded languages do not fall under our focus set L, we exclude it from our analysis. (iv) If a training instance contains more than 10% of \"undefined\" predictions (e.g., resulting from non-linguistic content), it is not annotated as bilingual. (v) Finally, if a training instance contains at least two contiguous segments-consisting of at least N consecutive identical language tags-in different languages, it is annotated as bilingual.\nGiven that the CMX model is known to overpredict English tags, we employ a stricter threshold on defining contiguous segments for English (N = 10) compared to the rest of the languages (N = 5). For all languages we operate at the tokenlevel, with the exception of Chinese, Japanese, and Korean for which we apply the above algorithm at the character-level." }, { "figure_ref": [], "heading": "C Heuristic Translation Pair Filters", "publication_ref": [ "b12", "b11", "b3" ], "table_ref": [], "text": "When extracting translation pairs found within a bilingual instance, our primary quality signal is from the cosine distance between cross-lingual LABSE sentence embeddings. However, we also apply a suite of heuristic filters which help catch non-translations that slip through this primary fil-ter. These filters are adapted from Alibaba's WMT Data Filtering submissions (Lu et al., 2018(Lu et al., , 2020)). When a tokenization is required for token counts or edit distance, we use tokens from the mBERT tokenizer (Devlin et al., 2019). The filters are as follows: 1. both sentences must respect a min (3) and max (200) token length; 2. we enforce a max length ratio (2x) between sentences; 3. we enforce a min edit distance (2) and a min edit distance ratio (0.1) between sentences; 4. we apply a secondary, sequence-level language-identification tool (Botha et al., 2017) to re-identify each side of the pair and ensure that the two halves are written in different languages. When extracting sentences to train Transformer Base MT systems in §4.2, the different-language check is replaced by a check to ensure that the translation pair respects the language pair being studied, i.e.: one sentence is in English and the other is in French." }, { "figure_ref": [], "heading": "D Prompting Details", "publication_ref": [], "table_ref": [], "text": "For 5-shot prompting experiments we used the following format (e.g., for French to English translation):\nFrench: [X 1 ] English: [ Y 1 ] ... French: [X 5 ] English: [ Y 5 ] French: [ X ] English:\nEach slot (X i , Y i ) is filled with five translation examples that are randomly sampled from the devtest split of the FLORES dataset, while the final slot X, is filled with the source text that comes from the test split of FLORES. " }, { "figure_ref": [], "heading": "E Additional Tables and Figures", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NOT TRANSLATION SIGNAL", "publication_ref": [], "table_ref": [], "text": "Code-Switching Voilà j'ai un problème avec certaines cinématiques du jeu. Je ne peux pas voir l'introduction ni les présentations de races par contre je peux voir les présentations de classes... Si quelqu'un pouvait m'aider ce serait sympa. Merci d'avance. I can understand french only a bit... Can you see this folder and if yes is anything into this folder? J'ai bien un dossier raw/fr mais à l'intérieur il n'y a pas introcinematic. Well, could take a look into the folder \"raw/en\" or/and \"raw/de\", is there a folder called \"introcinematic\"? Dans raw/de je n'ai rien non plus mais dans raw/en j'ai bien le dossier." }, { "figure_ref": [], "heading": "References", "publication_ref": [], "table_ref": [], "text": "Lagrange derives the integrals of momentum, moment of momentum, and energy, use of special properties of the potential function tends to conceal their meanings. For three bodies, the results are given in § II of his \"Essai sur le problcme des trois corps,\" Prix de Vacad. sci. Paris Finally, the principle of virtual work for dynamics, on which the entire Micbanique Analitique is founded, had been given more than twenty years earlier in §IV of his \"Recherchcs sur la libration de la lune, dans lesquelles on tache dc rcsoudre la question proposce par l'Academie royale des sciences pour le prix de 1'annee 1764,\" Prix de Vacad. sci. Paris 9, 1764-Euvres 6, 5 -61)." }, { "figure_ref": [], "heading": "Unrelated", "publication_ref": [], "table_ref": [], "text": ". . . PICASSO (1881PICASSO ( -1973) ) Autoportrait, 15 ans Né en 1881 à Malaga, il passe sa jeunesse en Espagne. En 1891, son père, peintre, accepte un poste d' enseignant à l'école de dessin \"La Corogne\", Picasso a 10 ans et il s'exerce au dessin alors qu'il sait à peine lire. En 1895, il s'installe avec sa famille à Barcelone, son père enseigne à l'école très académique des... This pragmatic viewpoint has been the subject of quite a few post-holiday discussions at Rubberbond. We wanted to explore this in greater depth and find a resolution to the debates we'd had over the years..." }, { "figure_ref": [], "heading": "TRANSLATION SIGNAL", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Translation Pairs", "publication_ref": [], "table_ref": [], "text": "In 1910 E. Cartan constructed the canonical frame and found the most symmetric case for maximally nonholonomic rank 2 distributions in R5. We solve the analogous problems for rank 2 distributions in Rn for arbitrary n>5. Our method is a kind of symplectification of the problem and it is completely different from the Cartan method of equivalence. En 1910 E. Cartan a construit un repère canonique et a trouvé le cas le plus symétrique des distributions de rang 2 et non holonômes de manière maximale dans R5. Nous résolvons ici des problèmes analogues pour les distributions de rang 2 dans Rn avec n>5 arbitraire. Notre méthode est une sorte de symplectification du problème et est complètement différente de la méthode par équivalence de Cartan." }, { "figure_ref": [], "heading": "Entailment", "publication_ref": [], "table_ref": [], "text": "Angels, according to Consuelo's own view, no longer intervene directly in human affairs, making it necessary for humans to help one another: \"Dans un temps ou Ton ne croit plus a la reVelation directe et a la manifestation sensible de la Divinite, la protec-tion et le secours du ciel se traduisent sous la forme d'assistance, d'affection et de devouement de la part de nos semblables\" (3: 265). Consuelo is a supreme example of this transfer of the divine role of care and love to man, or more accurately, to woman. Women also play a central role in the other spiritual force celebrated in the novel: art, in particular music: \"La musique et la poesie sont les plus hautes expressions de la foi, et la femme douee de genie et de beaute est preteresse, sibylle et iniatiatrice\"" }, { "figure_ref": [], "heading": "Explanation", "publication_ref": [], "table_ref": [], "text": "Can someone suggest how I can say Sorry, I have been very disorganized recently as I have been busy Thanks. I'm not sure to get what you mean. Do you mean that you've been quite chaotic because of being busy? If yes, I would maybe simply say: \"Désolé, j'ai été très désorganisé récemment, du fait d'avoir été occupé\". Sounds however quite \"negative\". Yes that is what I mean. I have been been very busy and have therefore only just got round to answering a colleagues question. I want to express my apologies and explain that I've been disorganised as things have been choatic in the office. Thanks very much Hmm I don't know how to say it, but désorganisé when referencing a human being sounds more like a personality trait than like a temporary state, and thus would give a negative image of yourself like mentionned above. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Jiaming Luo, Julia Kreutzer, Orhan Firat, Xavier Garcia, Markus Freitag, Sweta Agrawal, Marine Carpuat, Elijah Rippeth, and the anonymous reviewers for their helpful and constructive comments." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "FR 109,994,921 6,743,637 1,929,032 6,618,381 German DE 100,952,945 7,258,561 1,826,701 5,780,856 Spanish ES 75,311,571 5,860,634 1,538,549 5,717,352 Italian IT 42,071,597 2,204,919 591,329 2,128,730 Portuguese PT 23,175,895 2,685,160 317,735 1,048,717 Russian RU 18,307,304 2,045,770 527,159 2,142,065 Chinese ZH 16,196,482 2,075,947 271,496 706,948 Japanese JA 11,364,144 1,271,193 222,164 601,810 Arabic AR 11,239,689 689,215 160,554 420,851 Indonesian ID 9,294,576 1,157,443 211,183 738,329 Korean KO 8,777,321 465,821 120,648 518,738 Vietnamese VI 8,588,200 767,309 91,666 268,573" }, { "figure_ref": [], "heading": "SOURCE", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "EN", "publication_ref": [], "table_ref": [], "text": "NEN BIL TRA\nRaw counts (tokens) Social media conversations (multilingual) 50% 756,378,913,006 169,908,649,039 6,404,486,427 1,448,443,476 Filtered webpages (multilingual) 27% 459,437,466,428 38,653,502,458 7,387,577,398 4,260,754 " } ]
Large, multilingual language models exhibit surprisingly good zero-or few-shot machine translation capabilities, despite having never seen the intentionally-included translation examples provided to typical neural translation systems. We investigate the role of incidental bilingualism-the unintentional consumption of bilingual signals, including translation examples-in explaining the translation capabilities of large language models, taking the Pathways Language Model (PaLM) as a case study. We introduce a mixed-method approach to measure and understand incidental bilingualism at scale. We show that PaLM is exposed to over 30 million translation pairs across at least 44 languages. Furthermore, the amount of incidental bilingual content is highly correlated with the amount of monolingual in-language content for non-English languages. We relate incidental bilingual content to zero-shot prompts and show that it can be used to mine new prompts to improve PaLM's out-of-English zero-shot translation quality. Finally, in a series of small-scale ablations, we show that its presence has a substantial impact on translation capabilities, although this impact diminishes with model scale.
Searching for Needles in a Haystack: On the Role of Incidental Bilingualism in PaLM's Translation Capability
[ { "figure_caption": "Figure 1 :1Figure 1: A mixed-method approach to measure and understand incidental bilingualism at scale. We alternate between quantitative and qualitative steps to detect ( §3.1) and analyze ( §3.2) bilingual instances, then detect ( §3.3) and analyze ( §3.4) translation instances.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Number of monolingual, bilingual, and translation instances detected within PaLM's training data. PaLM consumes bilingual signals, including translation examples, across (at least) 44 languages.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Typology of bilingual instances, along with their distribution within an EN-FR annotated sample. Bilingual instances cover a range of cross-lingual phenomena, including cases of translated content.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Number of mined translation pairs within PaLM's training instances. PaLM consumes thousands of translation pairs across (at least) 44 languages.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Pearson correlations between counts of monolingual instances with (a) bilingual and (b) translation instances. The number of bilingual and translation instances correlates strongly with the number of monolingual instances.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Data-driven prompt counts within PaLM's translation pairs across 44 languages.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Default Code Native TranslationHIGH1,207506781831MEDIUM21962136352LOW38064122ALL1,4645689811,305", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "To do so, we conduct smaller-scale experiments by training 1B and 8B parameter models on different training samples BLEU scores for FR→EN NMT models trained on various translation pairs, evaluated on FLORES devtest. t corresponds to the LABSE threshold. PaLMmined translation pairs provide useful signal for training supervised NMT models.", "figure_data": "t#TRANSLATIONS PaLM (mined) WMTN/A40,836,87642.00.909,084,42933.70.807,056,44135.70.704,874,17336.40.603,341,18737.3 38.10.502,474,70337.20.401,948,82037.10.301,477,53538.4 36.50.20906,93737.80.15549,70536.3", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Translation results on the FLORES devtest for small-scale PaLM models trained on various ablation conditions. EN→XX translation quality is measured by chrF and XX→EN by BLEU. Ablating translation pairs (-TRA) has a significant impact on the translation capabilities of S=1B (5-shot) for HIGH resource pairs; this impact decreases with scale (i.e., S=8B model).", "figure_data": "Data To simulate PaLM's data conditions withsmaller models, we begin by partitioning PaLM'straining instances into four non-overlapping groups:ENG: English instances, NEN: non-English (ex-cluding bilingual) instances, BIL: bilingual (ex-cluding translation) instances, and TRA: transla-tion instances. We then merge instances within", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Data statistics for small-scale PaLM ablation experiments in number of 2,048 token examples.", "figure_data": "", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation hyper-parameters. FEED-FORWARD DIMENSION is always DIMENSION times 4. Training data size is measured in trillions (T) of subword tokens.", "figure_data": "LANGUAGE ISOMONOLINGUAL BILINGUAL TRANSLATION PARALLEL TEXTSEnglishEN2,086,622,555,000FrenchFarsiFA8,106,752145,49831,68579,731SerbianSR8,092,01870,90517,33349,316UkrainianUK5,392,948275,62365,468191,624PashtoPS2,481,25532,3046,20812,841ArmenianHY2,251,04192,78624,77765,745HebrewIW1,956,133123,64137,904111,172BulgarianBG1,702,418119,18830,99183,672KazakhKK1,681,55222,7845,82623,800BelarusianBE1,681,27247,28411,64635,535HindiHI1,356,198250,51242,737121,092UrduUR1,326,86746,97311,56432,654GreekEL1,256,535205,98652,194156,933ThaiTH1,169,86579,21111,15728,125Macedonian MK1,006,74159,53210,88538,521KyrgyzKY872,38479,95517,10737,484BengaliBN826,93364,01216,13843,046GeorgianKA757,14270,22015,45734,939TajikTG734,88840,1465,50327,889SindhiSD695,33136,7285,05411,373NepaliNE676,94059,15912,00930,789TamilTA667,14847,22513,40841,466MongolianMN541,74523,3284,18012,861PanjabiPA526,04243,19611,59256,377TeluguTE508,02624,4016,46227,349MalayalamML503,76236,6528,23518,412MarathiMR363,79114,5444,20915,684AmharicAM297,46333,6049,09829,355BurmeseMY278,93312,9892,5477,020KannadaKN231,30812,3863,43011,589SinhalaKM152,6309,65215,995,661GujaratiGU146,9905,6621,5145,333LaoLO130,28410,4785,80625,202", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Numbers of monolingual, bilingual, and translation instances across the 44 languages studied.", "figure_data": "", "figure_id": "tab_8", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Examples of bilingual instances detected within PaLM training data.", "figure_data": "", "figure_id": "tab_9", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Comparison of prompt selection on FLORES devtest, for zero-and few-shot prompting. QUAL. corresponds to translation quality (chrF for EN→XX and BLEU for XX→EN), LANG.% represents PaLM's accuracy in producing text in the correct target language, and δ gives the translation quality difference from the \"Default\" prompt.", "figure_data": "Default (zero)Code (zero)Native (zero)Translation (zero)Default (few)Native (few)QUAL. LANG.% QUAL.δLANG.%QUAL.δLANG.%QUAL.δLANG.% QUAL.LANG.%QUAL.δLANG.%EN→XXFR57.8 79.263.65.890.468.110.399.565.47.594.870.799.670.90.299.7DE52.3 76.759.57.292.663.010.799.762.29.997.865.499.965.3-0.099.9ES49.8 86.551.61.991.454.44.699.553.63.897.256.399.756.40.199.6IT51.1 83.452.21.184.457.76.699.355.03.994.859.299.759.70.599.7PT61.1 85.062.71.689.269.07.999.767.05.996.470.699.770.5-0.199.8RU32.4 58.143.210.877.555.322.999.851.318.990.057.699.957.5-0.199.9ZH20.3 76.024.84.583.529.28.999.931.311.099.637.0 100.036.9-0.1100.0JA22.2 75.113.9-8.349.133.811.6100.033.711.699.040.1 100.039.9-0.2100.0AR20.0 39.40.7-19.20.150.931.098.839.219.373.053.7 100.053.7-0.1100.0ID58.9 81.412.2-46.73.327.3-31.626.860.31.468.368.896.968.7-0.197.0KO16.4 63.118.31.964.429.212.799.830.013.596.933.6 100.034.20.6100.0VI41.5 68.910.1-31.40.055.814.399.555.514.098.157.9 100.057.8-0.1100.0FA24.7 51.21.1-23.60.347.522.798.342.918.285.151.4 100.051.2-0.2100.0SR3.3 48.51.0-2.30.455.352.198.729.125.82.059.9 100.060.00.1100.0UK35.5 66.00.6-34.90.054.018.599.950.414.990.556.6 100.056.5-0.1100.0PS20.0 64.71.2-18.80.028.58.599.930.810.799.033.699.934.40.8100.0HY30.5 62.11.1-29.41.650.019.599.547.216.792.854.7 100.054.4-0.3100.0IW23.0 46.61.0-22.00.051.828.999.143.420.488.155.999.655.9-0.099.8BG43.7 74.131.1-12.649.759.615.999.847.13.457.762.8 100.062.5-0.2100.0KK29.9 71.10.7-29.20.042.212.398.733.53.673.949.7 100.049.80.1100.0BE32.7 78.30.7-32.00.041.58.999.939.06.390.744.0 100.044.00.0100.0HI31.7 65.81.1-30.61.246.715.099.034.93.263.351.699.951.3-0.3100.0UR21.2 49.60.4-20.80.040.519.298.536.815.587.344.7 100.044.90.1100.0EL26.6 55.818.6-7.937.549.122.6100.046.219.792.651.1 100.051.20.1100.0TH34.8 81.13.4-31.45.748.713.999.950.515.899.952.4 100.052.70.4100.0MK47.6 81.31.9-45.72.358.110.599.640.1-7.530.060.699.960.80.299.9KY18.4 54.80.7-17.70.133.014.787.734.816.585.943.2 100.042.9-0.3100.0BN27.8 66.50.5-27.30.243.515.799.540.612.890.447.2 100.047.1-0.1100.0KA29.5 73.70.8-28.60.243.113.699.640.010.589.548.1 100.048.20.1100.0TG29.6 70.40.8-28.70.044.114.697.844.014.494.749.1 100.049.0-0.099.9SD24.1 65.30.7-23.40.039.515.397.933.69.581.045.1 100.045.30.2100.0NE26.4 63.40.8-25.60.041.314.994.623.2-3.211.448.499.848.50.199.8TA31.3 69.50.6-30.80.047.215.999.044.012.790.651.2 100.051.60.4100.0MN20.9 68.00.6-20.30.332.511.599.623.82.969.340.499.940.4-0.099.9PA20.6 50.30.6-20.00.041.320.699.540.920.394.845.1 100.045.10.0100.0TE34.9 84.21.3-33.60.042.87.999.737.02.184.050.3 100.050.40.0100.0ML30.8 73.00.5-30.20.043.212.599.742.611.995.848.9 100.049.00.0100.0MR26.3 67.30.5-25.80.036.09.794.633.47.174.643.499.943.70.2100.0AM15.2 76.60.6-14.60.023.68.497.216.10.960.530.699.930.2-0.4100.0MY23.4 67.70.6-22.80.138.014.799.838.315.098.543.8 100.043.90.1100.0KN30.5 71.60.7-29.90.144.213.7100.044.814.298.149.0 100.048.9-0.1100.0KM28.6 84.22.0-26.60.037.79.199.937.99.399.539.3 100.039.40.1100.0GU30.8 83.11.1-29.80.939.28.499.937.97.196.844.4 100.044.4-0.1100.0LO30.9 80.23.5-27.40.040.59.699.643.212.398.846.099.845.8-0.199.9XX→ENFR44.9 99.645.70.899.645.20.399.642.5-2.499.547.299.647.60.599.6DE43.7 99.744.20.599.544.10.599.841.5-2.199.145.999.846.00.199.8ES29.4 99.830.10.799.629.2-0.299.627.4-2.099.432.999.633.50.699.6IT32.5 99.734.11.699.632.2-0.399.630.2-2.498.536.499.636.2-0.199.6PT49.1 99.749.80.799.649.10.099.746.5-2.698.950.999.751.50.699.7RU34.8 99.636.11.399.635.30.599.533.2-1.697.938.599.738.2-0.499.6ZH28.5 99.126.5-2.092.329.20.898.927.4-1.195.231.399.531.40.099.6JA26.9 99.526.4-0.496.727.81.099.625.6-1.296.630.099.730.00.099.7AR39.4 99.639.50.195.237.2-2.298.838.8-0.698.243.099.743.20.199.5ID44.0 99.340.4-3.696.840.1-4.096.139.1-4.991.546.899.646.6-0.299.5KO28.9 99.727.0-1.994.429.40.599.327.8-1.195.831.799.531.4-0.299.4VI37.2 99.423.0-14.269.837.50.399.434.4-2.893.039.599.439.4-0.199.5FA35.5 99.633.3-2.293.334.3-1.199.534.8-0.795.939.399.639.3-0.099.6SR43.6 99.743.1-0.498.444.50.999.841.7-1.995.446.599.846.5-0.199.8UK38.5 99.637.7-0.897.638.60.299.737.0-1.594.042.099.742.30.299.7PS28.3 99.316.8-11.595.228.0-0.399.328.90.693.833.999.734.00.199.5HY37.7 99.431.6-6.292.617.9-19.997.636.6-1.193.840.999.541.10.299.5IW42.9 99.541.8-1.194.942.5-0.499.341.5-1.492.446.099.746.40.499.6BG40.6 99.640.70.199.441.20.699.538.4-2.297.042.999.643.40.599.6KK29.8 99.626.2-3.693.527.1-2.799.227.8-2.092.634.399.934.30.099.8BE20.4 99.622.31.999.419.9-0.699.617.8-2.683.124.299.724.1-0.199.6HI36.5 99.334.2-2.296.632.1-4.498.930.2-6.385.240.299.639.6-0.699.3UR31.3 99.530.2-1.297.330.2-1.299.429.9-1.492.535.799.735.4-0.399.8EL35.5 99.834.8-0.896.435.80.399.733.7-1.899.538.599.738.70.299.7TH28.1 99.125.6-2.586.628.0-0.198.927.1-1.091.433.099.733.20.299.5MK43.2 99.542.0-1.196.342.8-0.499.540.4-2.894.645.999.645.6-0.299.5KY21.1 99.619.1-2.195.820.6-0.599.516.9-4.284.825.299.824.6-0.699.7BN30.8 99.329.6-1.197.328.6-2.299.030.6-0.197.735.499.835.3-0.199.7KA26.7 99.521.9-4.983.522.6-4.199.524.5-2.290.230.499.830.40.099.6TG33.0 99.531.2-1.895.832.8-0.299.530.2-2.888.136.199.636.20.099.7SD33.2 98.929.7-3.485.134.00.899.325.7-7.578.839.499.839.60.299.7NE32.8 99.530.8-2.196.127.4-5.497.329.8-3.090.137.299.737.60.499.6TA29.0 99.326.6-2.494.326.7-2.399.528.3-0.794.533.199.533.20.199.7MN22.2 99.419.4-2.890.321.0-1.299.121.4-0.887.128.299.528.2-0.099.6PA34.9 99.531.9-3.096.228.0-6.997.031.8-3.189.439.599.739.3-0.299.7TE31.3 98.829.5-1.894.028.7-2.698.830.1-1.392.337.999.637.90.099.5ML28.5 99.527.0-1.594.426.8-1.799.029.20.795.034.399.734.50.299.7MR28.6 99.428.80.294.927.0-1.698.827.8-0.990.935.299.834.9-0.399.7AM28.1 99.425.4-2.895.424.4-3.897.328.70.694.832.899.732.90.199.5MY21.4 98.819.8-1.791.619.1-2.498.519.8-1.681.826.899.526.5-0.299.6KN27.2 98.724.5-2.788.024.8-2.498.126.5-0.792.932.399.732.2-0.199.7KM27.8 98.626.4-1.489.828.60.896.922.0-5.873.733.399.533.70.499.6GU32.6 99.428.7-3.993.027.1-5.598.831.3-1.392.537.599.737.3-0.399.6LO31.0 99.330.9-0.093.929.7-1.298.527.9-3.087.036.299.536.40.299.6", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Translation results between 44 languages and English on FLORES devtest for small-scale PaLM models. EN→XX results are reported in chrF, and XX→EN results are report in BLEU.", "figure_data": "", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" } ]
Eleftheria Briakou; Colin Cherry; George Foster; Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Won Chung; Charles Sutton; Sebastian Gehrmann; Parker Schuh; Kensen Shi; Sasha Tsvyashchenko; Joshua Maynez; Abhishek Rao; Parker Barnes; Yi Tay; Noam Shazeer; Vin- Odkumar Prabhakaran; Emily Reif; Nan Du; Ben Hutchinson; Reiner Pope; James Bradbury; Jacob Austin; Michael Isard; Guy Gur-Ari; Pengcheng Yin; Toju Duke; Anselm Levskaya; Sanjay Ghe- Mawat; Sunipa Dev; Henryk Michalewski; Xavier Garcia; Vedant Misra; Kevin Robinson; Liam Fe- Dus; Denny Zhou; Daphne Ippolito; David Luan; Hyeontaek Lim; Barret Zoph; Alexander Spiridonov; Ryan Sepassi; David Dohan; Shivani Agrawal; Mark Omernick; Andrew M Dai; Thanumalayan Sankara- Narayana Pillai; Marie Pellat; Aitor Lewkowycz; Erica Moreira; Rewon Child; Oleksandr Polozov; Katherine Lee; Zongwei Zhou; Xuezhi Wang; Bren- Nan Saeta; Mark Diaz; Orhan Firat; Michele Catasta; Jason Wei; Kathy Meier-Hellstern
[ { "authors": "Sweta Agrawal; Chunting Zhou; Mike Lewis; Luke Zettlemoyer; Marjan Ghazvininejad", "journal": "", "ref_id": "b0", "title": "Incontext examples selection for machine translation", "year": "2022" }, { "authors": "Terra Blevins; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Language contamination helps explains the cross-lingual capabilities of English pretrained models", "year": "2022" }, { "authors": "Ondřej Bojar; Christian Buck; Christian Federmann; Barry Haddow; Philipp Koehn; Johannes Leveling; Christof Monz; Pavel Pecina; Matt Post; Herve Saint-Amand; Radu Soricut; Lucia Specia; Aleš Tamchyna", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Findings of the 2014 workshop on statistical machine translation", "year": "2014" }, { "authors": "Jan A Botha; Emily Pitler; Ji Ma; Anton Bakalov; Alex Salcianu; David Weiss; Ryan Mcdonald; Slav Petrov", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Natural language processing with small feed-forward networks", "year": "2017" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melani E Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shy Am; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel H Erbert Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Chris Winter; Clemens Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mcca Ndlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b4", "title": "", "year": "2020" }, { "authors": "Naman Goyal; Cynthia Gao; Vishrav Chaudhary; Peng-Jen Chen; Guillaume Wenzek; Da Ju; Sanjana Krishnan; Marc'aurelio Ranzato; Francisco Guzmán; Angela Fan", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "The Flores-101 evaluation benchmark for low-resource and multilingual machine translation", "year": "2022" }, { "authors": "Zhengbao Jiang; Frank F Xu; Jun Araki; Graham Neubig", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "How can we know what language models know?", "year": "2020" }, { "authors": "Philipp Koehn; Vishrav Chaudhary; Ahmed El-Kishky; Naman Goyal; Peng-Jen Chen; Francisco Guzmán", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Findings of the WMT 2020 shared task on parallel corpus filtering and alignment", "year": "2020" }, { "authors": "Julia Kreutzer; Isaac Caswell; Lisa Wang; Ahsan Wahab; Daan Van Esch; Nasanbayar Ulzii-Orshikh; Allahsera Tapo; Nishant Subramani; Artem Sokolov; Claytone Sikasote; Monang Setyawan; Supheakmungkol Sarin; Sokhar Samb; Benoît Sagot; Clara Rivera; Annette Rios; Isabel Papadimitriou; Salomey Osei; Pedro Ortiz Suarez; Iroro Orife; Kelechi Ogueji; Andre Niyongabo Rubungo; Toan Q Nguyen; Mathias Müller; André Müller; Hassan Shamsuddeen; Nanda Muhammad; Ayanda Muhammad; Jamshidbek Mnyakeni; Tapiwanashe Mirzakhalov; Colin Matangira; Nze Leong; Sneha Lawson; Yacine Kudugunta; Mathias Jernite; Orhan Jenny; Firat; F P Bonaventure; Sakhile Dossou; Dlamini; Sakine Nisansa De Silva; Stella Çabuk Ballı; Alessia Biderman; Ahmed Battisti; Ankur Baruwa; Pallavi Bapna; Baljekar; Ayodele Israel Abebe Azime; Duygu Awokoya; Orevaoghene Ataman; Oghenefego Ahia; Sweta Ahia; Mofetoluwa Agrawal; Adeyemi", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b8", "title": "Quality at a glance: An audit of web-crawled multilingual datasets", "year": "2022" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Victoria Xi; Todor Lin; Mikel Mihaylov; Tianlu Artetxe; Shuohui Wang; Daniel Chen; Myle Simig; Naman Ott; Shruti Goyal; Jingfei Bhosale; Ramakanth Du; Sam Pasunuru; Punit Shleifer; Vishrav Singh Koura; Brian O' Chaudhary; Jeff Horo; Luke Wang; Zornitsa Zettlemoyer; Mona Kozareva; Ves Diab; Xian Stoyanov; Li", "journal": "", "ref_id": "b10", "title": "Few-shot learning with multilingual language models", "year": "2021" }, { "authors": "Jun Lu; Xin Ge; Yangbin Shi; Yuqi Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Alibaba submission to the WMT20 parallel corpus filtering task", "year": "2020" }, { "authors": "Jun Lu; Xiaoyu Lv; Yangbin Shi; Boxing Chen", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Alibaba submission to the WMT18 parallel corpus filtering task", "year": "2018" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Fabio Petroni; Tim Rocktäschel; Sebastian Riedel; Patrick Lewis; Anton Bakhtin; Yuxiang Wu; Alexander Miller", "journal": "", "ref_id": "b14", "title": "Language models as knowledge bases?", "year": "2019" }, { "authors": "Maja Popović", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "chrF: character n-gram F-score for automatic MT evaluation", "year": "2015" }, { "authors": "Matt Post", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "A call for clarity in reporting BLEU scores", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b17", "title": "Language models are unsupervised multitask learners", "year": "2018" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b18", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Laria Reynolds; Kyle Mcdonell", "journal": "Association for Computing Machinery", "ref_id": "b19", "title": "Prompt programming for large language models: Beyond the few-shot paradigm", "year": "2021" }, { "authors": "Victor Sanh; Albert Webson; Colin Raffel; Stephen H Bach; Lintang Sutawika; Zaid Alyafeai; Antoine Chaffin; Arnaud Stiegler; Teven Le Scao; Arun Raja; Manan Dey; M Saiful Bari; Canwen Xu; Urmish Thakker; Shanya Sharma Sharma; Eliza Szczechla; Taewoon Kim; Gunjan Chhablani; Nihal Nayak; Debajyoti Datta; Jonathan Chang; Mike Tian-Jian; Han Jiang; Matteo Wang; Sheng Manica; Zheng Xin Shen; Harshit Yong; Rachel Pandey; Thomas Bawden; Trishala Wang; Jos Neeraj; Abheesht Rozen; Andrea Sharma; Thibault Santilli; Jason Fevry; Alan Fries; Ryan Teehan; Tali Bers; Stella Biderman; Leo Gao; Thomas Wolf; Alexander M Rush", "journal": "", "ref_id": "b20", "title": "Multitask prompted training enables zero-shot task generalization", "year": "2021" }, { "authors": "Noam Shazeer", "journal": "", "ref_id": "b21", "title": "Fast transformer decoding: One write-head is all you need", "year": "2019" }, { "authors": "Taylor Shin; Yasaman Razeghi; Robert L Logan; I V ; Eric Wallace; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts", "year": "2020" }, { "authors": "Allison Shorten; Joanna Smith", "journal": "Evidence-Based Nursing", "ref_id": "b23", "title": "Mixed methods research: Expanding the evidence base", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b25", "title": "", "year": "" }, { "authors": "David Vilar; Markus Freitag; Colin Cherry; Jiaming Luo; Viresh Ratnakar; George Foster", "journal": "", "ref_id": "b26", "title": "Prompting palm for translation: Assessing strategies and performance", "year": "2022" }, { "authors": "Yuan Zhang; Jason Riesa; Daniel Gillick; Anton Bakalov; Jason Baldridge; David Weiss", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "A fast, compact, accurate model for language identification of codemixed text", "year": "2018" } ]
[ { "formula_coordinates": [ 13, 141.75, 415.13, 73.77, 94.35 ], "formula_id": "formula_0", "formula_text": "French: [X 1 ] English: [ Y 1 ] ... French: [X 5 ] English: [ Y 5 ] French: [ X ] English:" } ]
2024-02-05
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b27", "b4", "b13", "b17", "b9", "b26", "b22", "b1", "b15", "b21", "b32", "b18", "b34", "b28", "b29", "b6", "b25", "b1" ], "table_ref": [], "text": "Self-supervised learning (SSL) is a field in machine learning (ML) that aims to learn useful feature representations from unlabelled input data. SSL includes mainly contrastive methods (Oord et al., 2018;Chen et al., 2020;He et al., 2020) and generative models (Kingma & Welling, 2013;Gregor et al., 2015;Oh et al., 2015). Generative models rely on using generative decoding and reconstruction loss, whereas typical contrastive methods do not involve a decoder but apply contrastive similarity metrics to hidden embeddings instead (Liu et al., 2021).\nState representation learning (SRL) (Anand et al., 2019;Jonschkowski & Brock, 2015;Lesort et al., 2018) focuses on learning representations from input data that are typically collected in a reinforcement learning (RL) environment. A collection of images can be sampled through an agent interacting with the environment according to a specified behavior policy. Such images are interesting as study subjects due to their innate temporal/spatial correlations. Moreover, RL can also benefit from self-supervised learning just as computer vision (CV) and natural language processing (NLP) do, and successful pretraining of neural network (NN) models can potentially lead to improvements in downstream RL tasks.\nA manifold can be learned by finding an atlas that accurately describes the local structure in each chart (Pitelis et al., 2013). In SSL, using an atlas can be viewed as a generalization of both dimensionality reduction and clustering (Korman, 2018;2021a;b). Namely, it generalizes the case where only one chart exists and where the charts do not overlap in an atlas. In MSimCLR (Korman, 2021b), NNs can encode an atlas of a manifold by having chart embeddings and membership probabilities. One primary issue of MSimCLR is its reliance on a uniform prior, which allocates inputs into each chart embedding uniformly. We postulate that although this uniform prior may more effectively represent the data distribution when d is exceedingly small, it concurrently introduces higher prediction uncertainty. Simultaneously, it also suffers from a problem akin to that faced by bootstrapped methods in RL. It has been noted that multiple NN heads inside a model, in the absence of additional noise, tend to output similar results after being trained a large number of epochs (Osband & Van Roy, 2015;Osband et al., 2016;Ecoffet et al., 2019;Meng et al., 2022).\nTo rectify the aforementioned problems, this study introduces a novel SSL paradigm that leverages an unbalanced atlas (UA). In this context, UA denotes the absence of a uniform prior distribution, with the membership probability distribution deliberately trained to deviate significantly from uniformity. As illustrated in Fig. 1, it is evident that the entropy of the output vector during pretraining when using UA is markedly lower than that with a uniform prior, which suggests a heightened degree of confidence in its predictions.\nOur contribution is summarized as follows: (1) We modify the SRL algorithm ST-DIM (Anand et al., 2019) with our UA paradigm and introduce a new algorithm called DIM-UA. This furthers the research into the integration of RL and SSL with a novel manifold-based learning paradigm. DIM-UA achieves the state-of-the-art performance on samples collected from 19 Atari games of the AtariARI benchmark.\n(2) We also provide detailed ablations and additional experiments on CIFAR10 to examine different underlying effects of possible design choices.\n(3) We demonstrate that our UA paradigm is capable of effectively representing a manifold with a large number (e.g., ≥256) of hidden dimensions, whereas previous research (Korman, 2021a;b) only showed promise with a small number (e.g., ≤8) of hidden dimensions. The UA paradigm thereby showcases its capability to facilitate the development of larger, more powerful models, transcending the constraints traditionally imposed by model backbones." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b34", "b33", "b12", "b2", "b7", "b17", "b23", "b27", "b4", "b13", "b11", "b5", "b3", "b18", "b8", "b0", "b16" ], "table_ref": [], "text": "Dimensionality reduction with manifolds It is common for nonlinear dimensionality reduction (NLDR) algorithms to approach their goals based on the manifold hypothesis. For example, the manifold structure of an isometric embedding can be discovered by solving for eigenvectors of the matrix of graph distances (Tenenbaum et al., 2000). A sparse matrix can also be used instead with a locally linear embedding (Roweis & Saul, 2000). Correspondence between samples in different data sets can be recovered through the shared representations of the manifold (Ham et al., 2003).\nManifold regularization provides an out-of-sample extension to novel examples compared to graph-based approaches (Belkin et al., 2006). Manifold sculpting simulates surface tension progressively in local neighborhoods to discover manifolds (Gashler et al., 2007).\nSelf-supervised learning There are some relevant works on generative models, such as variational autoencoders (VAEs) (Kingma & Welling, 2013) and adversarial autoencoders (AAEs) (Makhzani et al., 2015). Meanwhile, contrastive methods have shown promise in the field of SSL. Contrastive Predictive Coding (CPC) learns predictive representations based on the usefulness of the information in predicting future samples (Oord et al., 2018). SimCLR provides a simple yet effective framework using data augmentations (Chen et al., 2020). Momentum Contrast (MoCo) utilizes a dynamic dictionary, which can be much larger than the mini-batch size (He et al., 2020). The recent trend within research in contrastive learning has been on removing the need for negative pairs. BYOL utilizes a momentum encoder to prevent the model from collapsing due to a lack of negative pairs (Grill et al., 2020). SimSiam further shows that a stop-gradient operation alone is sufficient (Chen & He, 2021). Barlow Twins, on the other hand, achieves so by minimizing the redundancy of vector components outputted by two identical networks that take distorted versions of inputs (Zbontar et al., 2021).\nSelf-supervised learning with manifolds Representing non-Euclidean data in NN models is a key topic in geometric deep learning (Bronstein et al., 2017). Learning manifolds using NNs was explored in (Korman, 2018), in which AAEs were used to learn an atlas as latent parameters.\nConstant-curvature Riemannian manifolds (CCMs) of different curvatures can be learned similarly using AAEs (Grattarola et al., 2019). Mixture models of VAEs can be used to express the charts and their inverses to solve inverse problems (Alberti et al., 2023). A combination of autoencoders and Barlow Twins can capture both the linear and nonlinear solution manifolds (Kadeethum et al., 2022)." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "U α Z U β ψ α ψ β ψ αβ ψ βα\nFigure 2: A manifold Z embedded in a higher dimension. Two domains are denoted by U α and U β in Z. ψ α and ψ β are the corresponding charts that map them to a lower dimensional Euclidean space. An atlas is then a collection of these charts that together cover the entire manifold.\nOur method extends the work from Korman (2021b) and builds a manifold representation using multiple output embeddings and membership probabilities of those embeddings. First, an illustration of a manifold Z is in Fig. 2. Instead of directly learning an encoder of Z, we can learn encoder functions of ψ α (U α ) and ψ β (U β ) together with a score function. For a given input, we can specify which encoder to use according to the output of the score function.\nWe formally model a distribution of input data by a manifold as follows: x is the input from input space X , Z is the latent space, f is an embedding function: X → Z. I is the identity mapping, d the number of dimensions for each chart output embedding, n the number of charts, and N denotes {1, 2, ..., n}. ψ i : Z → R d is the inverse mapping of a coordinate map: R d → Z, whereas q = (q 1 , q 2 , ..., q n ) : Z → [0, 1] n is the chart membership function. The output of our model is then given by Eq. 1.\nOutput\n(x) = q i (f (x))I(ψ i (f (x)))(1)\nAt inference time, the one-hot encoding of q(x) is used instead (Eq. 2).\nOutput(x) = I(ψ i (f (x))), where i = argmax j q j (f (x))\n(2)" }, { "figure_ref": [], "heading": "UNBALANCED ATLAS", "publication_ref": [ "b10", "b24" ], "table_ref": [], "text": "Like other SSL methods with manifolds (Korman, 2021b;a), UA uses a maximal mean discrepancy (MMD) objective (Gretton et al., 2012;Tolstikhin et al., 2017), which is defined by Eq. 3.\nMMD k (P 1 , P 2 ) = ∥ S k(s, •)dP 1 (s) - S k(s, •)dP 2 (s)∥ H k (3)\nHere, k is a reproducing kernel, H k is the reproducing kernel Hilbert space of real-valued functions mapping S to R, and P 1 , P 2 are distributions on the space.\nIn our paradigm, the input x is designed to be represented in charts with higher membership probabilities. Thus, we take an MMD loss that moves the conditional membership distribution far away from the uniform distribution. We use the kernel k N : N × N → R, (i, j) → δ ij , and δ ij = 1 if i = j else 0, and thus have Eq. 4.\nL N (q) = -E z MMD k N (q(z), U N ) = -E z n i=1 (q i (z) - 1 n ) 2 (4)\nHere, U N denotes the uniform distribution on N , z is the embedding of f (x).\nUnlike MSimCLR, we do not use an MMD objective to make the prior distribution to be uniform, but take another approach to improve the model stability and head diversity when d is not trivial. In Fig. 2, U α ∩ U β has transitive maps in their respective chart coordinates with domains restricted to\nψ α (U α ∩ U β ) and ψ β (U α ∩ U β ), which are ψ αβ = ψ β • ψ -1 α and ψ βα = ψ α • ψ -1 β .\nWhat interests us the most is this intersection between U α and U β . More precisely, since the overlapping representations in each head of the model become a dominant negative factor when d grows larger, we aim at modelling a manifold with dilated prediction targets in pretraining to avoid convergent head embeddings and collapsing solutions. We use the average values of chart outputs to model a Minkowski sum (Mamatov & Nuritdinov, 2020;Wang et al., 2020), which serves a key purpose in our paradigm.\nWhile convergent head embeddings should be avoided, the learning process should not break the convergence entirely. Proposition 1 implies that the Minkowski sum of the output embeddings contains the Minkowski sum of all mappings of intersections, which means that using dilated prediction targets by taking the Minkowski sum does not omit any mapped intersected embedding.\nProposition 1. Let U = {U 1 , U 2 , ..., U n } be a collection of open subsets of Z whose union is all of Z, and n i=1 U i is not empty. For each i ∈ {1, 2, ..., n}, there is a homeomorphism\nψ i : U i → V i to an open set V i ⊂ R d . We have the Minkowski sum V i + V j = {a + b | a ∈ V i , b ∈ V j }. Then n i=1 ψ i ( n j=1 U j ) ⊂ n i=1 V i . Proof. For any vector a ∈ n i=1 ψ i ( n j=1 U j ), there exists a i ∈ ψ i ( n j=1 U j ) such that a = n i=1 a i . Because ψ i ( n j=1 U j ) ⊂ V i , we also have a i ∈ V i , i ∈ {1, 2, ..., n}. Then n i=1 a i ∈ n i=1 V i , a ∈ n i=1 V i and thus n i=1 ψ i ( n j=1 U j ) ⊂ n i=1 V i .\nProposition 2 further states that the average Minkowski sum of the output embeddings together with the Minkowski sum of the average of mappings of intersections can be used instead and still keeps Proposition 1 true, under the assumption that each mapping of the intersection is convex. More generally, Proposition 1 holds true with scalar multiplications when convexity is assumed. However, it should be noted that convexity is not guaranteed here. In Eq. 1, we approach this assumption by using an identity mapping I instead of a linear mapping from Korman (2021b). More about the convexity assumption is addressed in Appendix B.\nProposition 2. Let U = {U 1 , U 2 , ..., U n } be a collection of open subsets of Z whose union is all of Z, and n i=1 U i is not empty. For each i ∈ {1, 2, ..., n}, there is a homeomorphism ψ i : U i → V i to an open set V i ⊂ R d . The multiplication of set V and a scalar λ is defined to be λV = {λa | a ∈ V }. We take the Minkowski sum. If each ψ i ( n j=1 U j ) is convex, then n i=1 1 n ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i .\nProof. Follows Proposition 1 and the property of scalar multiplication,\n1 n n i=1 ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i .\nSince scalar multiplication is preserved for convex sets, we have\nn i=1 1 n ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i ." }, { "figure_ref": [], "heading": "DIM-UA", "publication_ref": [ "b1", "b14", "b27" ], "table_ref": [], "text": "We experiment with our UA paradigm using the SRL algorithm ST-DIM (Anand et al., 2019), and propose DIM-UA. ST-DIM develops on Deep InfoMax (DIM) (Hjelm et al., 2018) that uses infoNCE (Oord et al., 2018) as the mutual information estimator between patches. Its objective consists of two components. One is the global-local objective (L GL ) and the other one is the local-local objective (L LL ), defined by Eq. 5 and Eq. 6 respectively.\nL GL = M m=1 N n=1 -log exp(g m,n (x t , x t+1 )) xt * ∈Xnext exp(g m,n (x t , x t * ))\n(5)\nL LL = M m=1 N n=1 -log exp(h m,n (x t , x t+1 )) xt * ∈Xnext exp(h m,n (x t , x t * ))(6)\nHere, x t and x t+1 are temporally adjacent observations, whereas X next is the set of next observations and x t * is randomly sampled from the minibatch. M and N are the height and width of local feature representations.\nWe denote the encoder as f , the output (global feature) vector of input x t as Output(x t ), and the local feature vector of x t at point (m, n) as f m,n (x t ). W g and W h are linear layers that will be discarded in probing. Then, we have the score function g m,n (x t , x t+1 ) = Output(x t ) T W g f m,n (x t+1 ), and\nh m,n (x t , x t+1 ) = f m,n (x t ) T W h f m,n (x t+1 ) of ST-DIM.\nPublished as a conference paper at ICLR 2024\nFor DIM-UA, we need to redefine the score function of L GL by Eq. 7 because the UA paradigm utilizes dilated prediction targets during pretraining, where ψ i (f (x t )) is the output of x t from the i-th head following encoder f for each i in N .\ng m,n (x t , x t+1 ) = [ 1 n n i=i ψ i (f (x t ))] T W g f m,n (x t+1 )(7)\nAccording to Eq. 4 , we have the MMD objective L Q defined as Eq. 8, where q i (f (x t )) is the membership probability of the i-th head for each i in N when the input is x t .\nL Q = - 1 2 n i=1 ((q i (f (x t )) - 1 n ) 2 + (q i (f (x t+1 )) - 1 n ) 2 ) (8)\nThereby, the UA objective (Eq. 9) is a sum of above objectives, where τ is a hyper-parameter.\nL U A = L GL + L LL + τ L Q (9)" }, { "figure_ref": [], "heading": "EXPERIMENTAL DETAILS", "publication_ref": [ "b1", "b1", "b1" ], "table_ref": [], "text": "The performance of DIM-UA and other SRL methods is evaluated on 19 games of the AtariARI benchmark. There are five categories of state variables in AtariARI (Anand et al., 2019), which are agent localization (Agent Loc.), small object localization (Small Loc.), other localization (Other Loc.), miscellaneous (Misc.), and score/clock/lives/display (Score/.../Display).\nWe follow the customary SSL pipeline and record the probe accuracy and F1 scores on the downstream linear probing tasks. The encoder is first pretrained with SSL, and then is used to predict the ground truth of an image with an additional linear classifier. Notably, the weights of the encoder are trained only during the pretraining and are fixed in the probing tasks. The data for pretraining and probing are collected by an RL agent running a certain number of steps using a random policy since it was found that the samples collected by a random policy could be more favorable than those collected by policy gradient policies for SSL methods (Anand et al., 2019).\nPrevious SSL methods in Anand et al. (2019) have used a single output head with 256 hidden units.\nOne of the major interests in our experiment is to discover the effect of choosing different values for the number of dimensions d and the number of charts n. Therefore, we scale up the number of hidden units, while keeping the model architecture, to observe the performance of using a single output head without UA and of using multiple heads with UA. To make a fair comparison, we compare the performance when the total number of hidden units are equal, i.e., when 1 × d for a single output head and n × d for multiple output heads are equal in our AtariARI experiment. In contrast, we also modify SimCLR using our UA paradigm and follow the customs from MSimCLR to compare the performance of different methods with d being equal (Korman, 2021b) in additional experiments on CIFAR10.\nThe experiments are conducted on a single Nvidia GeForce RTX 2080 Ti and 8-core CPU, using PyTorch-1.7 (Paszke et al., 2019). An illustration of the model backbone, hyper-parameters, and pseudocode of the algorithm are accompanied in Appendix A." }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [ "b1" ], "table_ref": [ "tab_0", "tab_0" ], "text": "In this section, we show the empirical results of our experiments and compare DIM-UA with other SSL methods to verify the efficacy of our UA paradigm. Meanwhile, we pay special attention to the performance of models when choosing different values for n and d.\nFor a straightforward comparison, we first observe the probe F1 scores together with standard deviations of each game averaged across categories in Table 1. \"ST-DIM*\" denotes ST-DIM with one output head of 16384 units, while DIM-UA uses four output heads with 4096 units in each head. We compare them to various methods of using a single output head of 256 hidden units here, which are taken from Anand et al. (2019). Each table entry is an average of 5 independent pretraining/probing Figure 3: The mean F1 and accuracy scores of 19 games when the total number of hidden units varies. The number of heads for DIM-UA is set to 4 here.\nruns using images sampled from different seeds. The probe accuracy scores are also included in Appendix A.\nUsing 16384 units (\"ST-DIM*\") does not necessarily guarantee better performance than using 256 units (ST-DIM). In 7 out of 19 games, \"ST-DIM*\" has lower F1 scores than ST-DIM. In particular, the model collapses due to overfitting when using too many units to represent the global features on Freeway. As a result, \"ST-DIM*\" only gains an F1 score of 0.3 with standard deviation 0.355 on Freeway. The mean F1 score of \"ST-DIM*\" is only 0.7 compared to 0.72 of ST-DIM. On the other hand, DIM-UA achieves higher scores and more stable performance. The F1 scores of DIM-UA are equal or higher than those of both \"ST-DIM*\" and ST-DIM in every game. The mean F1 score of DIM-UA is 0.75, the highest among all methods in Table 1." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "ABLATIONS", "publication_ref": [], "table_ref": [], "text": "In this subsection, our goal is to observe the behavior of models with different settings while the total number of units (n × d) on the horizontal axis of figures varies. Fig. 3 between ST-DIM and DIM-UA. Fig. 4 examines the effects of changing the number of heads for DIM-UA. In addition, we compare DIM-UA with two methods designed on ST-DIM in Fig. 5. One method uses the paradigm from MSimCLR, denoted by \"+MMD\". The other is similar to ours, which minimizes the loss of Eq. 9, but without modifying the score function as in Eq. 7 (namely, DIM-UA without using dilated prediction targets), denoted by \"-UA\". The 6 games mentioned here are Asteroids, Breakout, Montezuma Revenge, Private Eye, Seaquest, and Video Pinball.\nIn Fig. 3, ST-DIM performs better than DIM-UA when the number of hidden units is small. Their scores become close to each other when the number of units is around 2048. DIM-UA continues to improve as the total number of units grows, whereas the performance of ST-DIM drops at the same time. It is expected for DIM-UA to have lower F1 and accuracy scores when the encoding dimensions are low since the diversity among output heads demands more epochs of training. However, the efficacy of our UA paradigm is clearly demonstrated, as it allows the model to extend its capability by continuously expanding the encoding dimensions.\nSince we expect DIM-UA to converge slower because of the diversity among output heads, we expect this to become more obvious as the number of heads increases. We can verify that in Fig. 4, where the model with two output heads has the highest F1 and accuracy scores when the total number of hidden units is below 2048. On the other hand, it obtains the lowest F1 and accuracy score when the total number of units grows to 16384. Meanwhile, the model with eight output heads gives the worst results when the number of units is small but shows no sign of plateau, even with very high encoding dimensions. Increasing n while keeping d the same in our UA paradigm helps with the manifold representation but also lowers the performance if d is not large enough. In Fig. 5, it is not surprising that \"+MMD\" obtains the worst results in spite of the number of units, since MSimCLR was only found out to be helpful when the number of units is extremely small (e.g., 2, 4). \"-UA\" obtains better results than DIM-UA when the number of units is 512 but gets overrun by DIM-UA when the number becomes even larger. This empirically demonstrates that the dilated prediction targets in our UA paradigm are critical to achieve effective manifold representations." }, { "figure_ref": [], "heading": "ADDITIONAL EXPERIMENTS ON CIFAR10", "publication_ref": [ "b4" ], "table_ref": [ "tab_2" ], "text": "We modify SimCLR using the UA paradigm (SimCLR-UA) and perform additional experiments on CIFAR10, following the parameter settings and evaluation protocol from Korman (2021b); Chen et al. (2020). SimCLR-UA uses multiple heads with dilated prediction targets instead in pretraining, and adds L Q in Eq. 8 to the contrastive loss of SimCLR. Here, ResNet50 is used as the backbone, which is significantly larger than the default backbone in ST-DIM. We also slightly modify our model here based on an ablation study on CIFAR10. Please check Appendix B for more details.\nIn Table 2, each entry is an average accuracy score obtained from three independent pretraining and evaluation runs, together with the standard deviation. SimCLR obtains an accuracy score of 88.3% when it uses 512 hidden units. In the mean time, SimCLR-UA achieves the highest accuracy score of 88.6% among three methods when it uses eight heads with 512 units in each. The second best score is also achieved by SimCLR-UA, which is 88.5% when using four heads with 256 units in each head, or using two heads with 1024 units in each. We acknowledge that our improvement over SimCLR is small under this experimental setup. Nonetheless, there is a significant increase in accuracy when comparing SimCLR-UA to MSimCLR, especially in the case where the number of heads is larger. For instance, the highest evaluation accuracy score of MSimCLR is 87.8%, 87.3% and 86.4% respectively, when using two, four or eight heads. In contrast, SimCLR-UA obtains the highest accuracy score when using eight heads. This supports our hypothesis that UA can be a universal paradigm to effectively create manifold representations in SSL." }, { "figure_ref": [ "fig_2" ], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [ "tab_3", "tab_5", "tab_0" ], "text": "We have demonstrated that our UA paradigm helps improve the performance of both ST-DIM and SimCLR when encoding dimensions are high. Furthermore, we argue that training NNs with multiple output heads is inherently slower and more demanding than training with a single output head, which has restrained the study in its domain. It is evident that our paradigm can overcome this headwind by generating effective manifold representations. Moreover, our UA paradigm has gained a significant amount of improvement compared to the most related state-of-the-art manifold representation paradigm MSimCLR in the experiments on AtariARI and CIFAR10.\nNotably, the UA paradigm also exhibits the potential of modeling a manifold using further higher dimensions while increasing the number of output heads (Fig. 4). It can be an important contribution because this means the performance of the model scales with the size of output heads. Using 16384 hidden units in total is not very efficient economically when the entire model is small, but the additional overhead introduced by doing so can be relatively insignificant when the model itself is large. In particular, this trade-off may also be worthwhile in challenging downstream tasks where the smallest increase in probe accuracy can make a difference.\nOur work has illustrated that SSL methods with manifolds have great potential, and more topics can be researched in this area. The relationship between the number of hidden units and the number of output heads in an NN model demands more study (see Appendix B for more discussion on this). The convexity assumption is crucial in representing the manifold. Future research may focus on representing a manifold using an unbalanced atlas more efficiently, e.g., designing new objectives and convexity constraints.\nIlya Tolstikhin, Olivier Bousquet, Sylvain Gelly, and Bernhard Schoelkopf. Wasserstein autoencoders. arXiv preprint arXiv:1711.01558, 2017.\nXiangfeng Wang, Junping Zhang, and Wenxing Zhang. The distance between convex sets with minkowski sum structure: application to collision detection. Computational Optimization and Applications, 77:465-490, 2020.\nJure Zbontar, Li Jing, Ishan Misra, Yann LeCun, and Stéphane Deny. Barlow twins: Self-supervised learning via redundancy reduction. In International Conference on Machine Learning, pp. 12310-12320. PMLR, 2021.\nA DETAILS OF DIM-UA\nTable 3 provides values of some crucial hyper-parameters experimented on AtariARI that are kept the same across different methods. In addition, τ in Eq. 9 is set to 0.1 for DIM-UA.\nFig. 6 illustrates the standard backbone in the AtariARI experiment. The output values of the last convolutional layer in the backbone are taken as the feature map to get the local feature vector f m,n (x t ) of input x t at location (m, n). For the original ST-DIM, a fully connected layer of 256 units immediately follows the backbone. For DIM-UA, a projection head (ψ) and a membership probability head (q) branch from the backbone. Pytorch-style pseudocode of the DIM-UA algorithm is provided in Algorithm 1.\nOn a side note, the output of an unbalanced atlas at inference time relies on a single output head, since Eq. 4 moves the membership probability far away from the uniform As a the rest of the output heads does not play a role at inference time. This is different from MSimCLR, which partitions inputs into each head by simultaneously forcing a uniform prior and low entropy on )) # cross entropy loss loss g / = (s m * s n ), loss l / = (s m * s n ) # averaged by map size loss + = loss g + loss l # total loss # optimization step loss.backward() optimizer.step() conditional distributions. The role of those remaining output heads in an unbalanced atlas is comparable to the moving average network of BYOL, which produces prediction targets to help stabilize the bootstrap step. The UA paradigm accomplishes a similar goal by using dilated prediction targets of output heads instead.\nThe probe accuracy scores are shown in Table 4, which are overall similar to the F1 scores in Table 1. The accuracy scores of DIM-UA are equal or higher than those of both \"ST-DIM*\" and ST-DIM in every game. The mean accuracy score of DIM-UA is 0.76, the highest among all methods. " }, { "figure_ref": [ "fig_0" ], "heading": "B MORE ABOUT CIFAR10 EXPERIMENT", "publication_ref": [], "table_ref": [ "tab_6", "tab_2", "tab_2", "tab_2", "tab_7", "tab_2" ], "text": "The convexity assumption is crucial in order to effectively model a manifold (e.g., scalar multiplication is not preserved when a set is non-convex). However, the universal approximation theorem implies that weights of multiple linear layers can approximate any non-convex functions. Thus, whether multiple linear layers should be used here or not could be an interesting ablation topic. Moreover, clamping can be introduced to define open sets in our method. An ablation study of SimCLR-UA is performed on CIFAR10, using 4 heads with 512 units in each head. The results are shown in Table 5, where \"FC1\" denotes the linear layer immediately following the ResNet50 backbone and \"FC2\" denotes the projection layers following coordinate mappings. The range of clamping is set to (-10, 10). As the results suggest, the combination of clamping and \"FC2\" yields the best accuracy and is hence used to obtain results in Table 2.\nReferring to the performance of SimCLR-UA in Table 2, the accuracy reaches the highest when the number of heads is eight with 512 units in each head, but when the number of heads is four, the optimal number of units is 256. It also appears that a small number of hidden units can be sufficient. This finding is different from what is observed in the AtariARI experiment, where using eight heads and 2048 units in each head is not sufficient to guarantee the optimal (Fig. 4). This may be attributed to the image size and the number of ground truth labels in an image, and more challenging tasks may demand better representations. However, we do think there should be a limited number of heads needed, related to the intrinsic dimension (ID) of data. Techniques to find ID can potentially be used to decide the optimal number of heads. 2, it appears that the performance of SimCLR-UA could be further enhanced. Whilst comparing MSimCLR with SimCLR-UA, we maintained most hyperparameters identical for both. However, SimCLR-UA may attain higher performance given a different set of hyper-parameters. In Fig. 1, the initial entropy of using UA is notably lower than when using a uniform prior. Ideally, it may be advantageous for the entropy to remain high during the initial stages and to decrease gradually. We suggest that the hyper-parameter τ , which regulates the L Q loss, could be set smaller or set to 0 initially and gradually increased over time.\nWe conduct an additional small-scale experiment to validate this hypothesis. In this context, we use ResNet18 as the backbone instead of ResNet50, and the training duration is set at 100 epochs as opposed to 1000. The model incorporates 8 heads, with each head containing 512 hidden units. If τ is linearly scaled, it would increment linearly, from zero up to its final value over the pretraining epochs. The outcome of this experiment is detailed in Table 6.\nThe table clearly illustrates that implementing linear scaling or utilizing smaller τ values can genuinely enhance the performance of SimCLR-UA. For τ values of 0.1 and 0.2, adopting a linearscaling scheme is instrumental for optimizing the performance. However, for small τ values of 0.05 and 0.02, such a scheme is not needed. Thus, the performance of SimCLR-UA, as presented in Table 2, could potentially be boosted further, since it uses a relatively large τ value of 0.1 without any linear scaling. " } ]
The manifold hypothesis posits that high-dimensional data often lies on a lowerdimensional manifold and that utilizing this manifold as the target space yields more efficient representations. While numerous traditional manifold-based techniques exist for dimensionality reduction, their application in self-supervised learning has witnessed slow progress. The recent MSimCLR method combines manifold encoding with SimCLR but requires extremely low target encoding dimensions to outperform SimCLR, limiting its applicability. This paper introduces a novel learning paradigm using an unbalanced atlas (UA), capable of surpassing state-of-the-art self-supervised learning approaches. We investigated and engineered the DeepInfomax with an unbalanced atlas (DIM-UA) method by adapting the Spatiotemporal DeepInfomax (ST-DIM) framework to align with our proposed UA paradigm. The efficacy of DIM-UA is demonstrated through training and evaluation on the Atari Annotated RAM Interface (AtariARI) benchmark, a modified version of the Atari 2600 framework that produces annotated image samples for representation learning. The UA paradigm improves existing algorithms significantly as the number of target encoding dimensions grows. For instance, the mean F1 score averaged over categories of DIM-UA is ∼75% compared to ∼70% of ST-DIM when using 16384 hidden units.
STATE REPRESENTATION LEARNING USING AN UN-BALANCED ATLAS
[ { "figure_caption": "Figure 1 :1Figure 1: The entropy of the output vector recorded epoch-wise when pretrained on the CIFAR10 dataset for a total of 1000 epochs, utilizing 8 charts and a dimensionality of 256.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The mean F1 and accuracy scores on 6 games with different adaptations. All methods use 4 output heads.", "figure_data": "", "figure_id": "fig_1", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: An illustration of the standard backbone used by ST-DIM.", "figure_data": "", "figure_id": "fig_2", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Probe F1 scores of each game averaged across categories", "figure_data": "GameVAE CPC ST-DIMST-DIM*DIM-UAAsteroids0.36 0.420.490.48 ± 0.005 0.5 ± 0.007Bowling0.50 0.900.960.96 ± 0.021 0.96 ± 0.018Boxing0.20 0.290.580.61 ± 0.008 0.64 ± 0.007Breakout0.57 0.740.880.88 ± 0.020.9 ± 0.016Demon Attack0.26 0.570.690.71 ± 0.01 0.74 ± 0.012Freeway0.01 0.470.810.3 ± 0.3550.86 ± 0.02Frostbite0.51 0.760.750.73 ± 0.005 0.75 ± 0.004Hero0.69 0.900.930.93 ± 0.008 0.94 ± 0.004Montezuma Revenge 0.38 0.750.780.81 ± 0.016 0.84 ± 0.014Ms Pacman0.56 0.650.720.74 ± 0.017 0.76 ± 0.011Pitfall0.35 0.460.600.69 ± 0.031 0.73 ± 0.029Pong0.09 0.710.810.78 ± 0.015 0.85 ± 0.004Private Eye0.71 0.810.910.91 ± 0.009 0.93 ± 0.009Qbert0.49 0.650.730.78 ± 0.026 0.79 ± 0.02Seaquest0.56 0.660.670.68 ± 0.007 0.69 ± 0.007Space Invaders0.52 0.540.570.59 ± 0.007 0.62 ± 0.013Tennis0.29 0.600.600.57 ± 0.018 0.64 ± 0.025Venture0.38 0.510.580.57 ± 0.014 0.58 ± 0.01Video0.45 0.580.610.6 ± 0.031 0.62 ± 0.023Mean0.41 0.630.720.7 ± 0.033 0.75 ± 0.013ST-DIMDIM-UA0.760.770.740.75F1 Score0.72Accuracy0.730.70.710.680.695 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 45 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 4UnitUnit", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "examines the difference", "figure_data": "2 heads4 heads8 heads0.750.760.7230.735F1 Score0.695Accuracy0.710.6680.6850.640.665 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 45 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 4UnitUnitFigure 4: The mean F1 and accuracy scores of DIM-UA on 6 games when the number of outputheads is 2, 4, or 8.-UADIM-UA+MMD0.750.760.730.74F1 Score0.71Accuracy0.720.690.70.670.685 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 45 1 21 0 2 42 0 4 84 0 9 68 1 9 21 6 3 8 4UnitUnit", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Linear evaluation accuracy on CIFAR10", "figure_data": "MethodHeadDimension2565121024SimCLR-0.881 ± 0.0020.883 ± 0.0020.881 ± 0.003MSimCLR20.877 ± 0.0020.878 ± 0.0010.866 ± 0.003MSimCLR40.873 ± 0.0010.873 ± 0.0010.861 ± 0.002MSimCLR80.864 ± 0.0010.859 ± 0.0050.857 ± 0.002SimCLR-UA20.882 ± 0.0010.884 ± 0.0010.885 ± 0.001SimCLR-UA40.885 ± 0.0010.884 ± <0.001 0.88 ± 0.001SimCLR-UA80.882 ± <0.0010.886 ± 0.0020.876 ± 0.005", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The values of hyper-parameters on AtariARI", "figure_data": "Hyper-parameterValueImage size160 × 210Minibatch size64Learning rate3e-4Epochs100Pretraining steps80000Probe training steps35000Probe testing steps10000", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Algorithm 1: Pytorch-style pseudocode for DIM-UA. , x t+1 in loader: # load B samples of x t , x t+1 # get feature maps o t , y t , y t+1 = mean(pj(f(x t )), 1), f m (x t ), f m (x t+1 ) # B×D, B×H×W×C # get the membership probabilities q t , q t+1 = mp(f(x t )), mp(f(x t+1 )) # B×N # get the feature map size s b , s m , s n = size(y t , 0), size(y t , 1), size(y t , 2) # B, H, W", "figure_data": "# f: encoder network# f m : f but only up to the last conv layer# pj: projection head# mp: membership probability head# c1, c2: classifier layers only used in pretraining## B: batch size# N: number of heads# D: number of hidden units# C: local feature map channels# H, W: local feature map height and width## mean: mean function along a specified dimension# matmul: matrix multiplication# cross entropy: cross entropy loss# mmd: mmd loss# size: get the size along a specified dimension# range: get the range vector of an integer# t: transposefor x t # initialize the loss valuesloss g = 0, loss l = 0# mmd lossloss = -0.05 * (mmd(q t ) + mmd(q t+1 ))# spatial-temporal lossfor m in range(s", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Probe accuracy scores of each game averaged across categories", "figure_data": "GameVAE CPC ST-DIMST-DIM*DIM-UAAsteroids0.41 0.480.520.51 ± 0.005 0.53 ± 0.007Bowling0.56 0.900.960.96 ± 0.021 0.96 ± 0.017Boxing0.23 0.320.590.61 ± 0.008 0.64 ± 0.007Breakout0.61 0.750.890.89 ± 0.019 0.91 ± 0.015Demon Attack0.31 0.580.700.72 ± 0.009 0.74 ± 0.011Freeway0.07 0.490.820.33 ± 0.34 0.86 ± 0.017Frostbite0.54 0.760.750.73 ± 0.004 0.75 ± 0.004Hero0.72 0.900.930.93 ± 0.008 0.94 ± 0.004Montezuma Revenge 0.41 0.760.780.81 ± 0.015 0.84 ± 0.014Ms Pacman0.60 0.670.730.75 ± 0.016 0.77 ± 0.01Pitfall0.35 0.490.610.7 ± 0.028 0.74 ± 0.027Pong0.19 0.730.820.79 ± 0.014 0.85 ± 0.004Private Eye0.72 0.810.910.91 ± 0.01 0.93 ± 0.009Qbert0.53 0.660.740.79 ± 0.025 0.8 ± 0.019Seaquest0.61 0.690.690.69 ± 0.006 0.7 ± 0.006Space Invaders0.57 0.570.590.6 ± 0.009 0.63 ± 0.014Tennis0.37 0.610.610.58 ± 0.016 0.65 ± 0.024Venture0.43 0.520.590.58 ± 0.012 0.59 ± 0.009Video Pinball0.47 0.590.610.6 ± 0.030.63 ± 0.022Mean0.46 0.650.730.71 ± 0.031 0.76 ± 0.013", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on CIFAR10", "figure_data": "FC1 Clamp FC2 Accuracy✓0.857✓0.876✓0.883✓✓0.872✓✓0.875✓✓0.884✓✓✓0.872", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Changing τ in SimCLR-UA", "figure_data": "τLinear scaling Accuracy0.20.7910.2✓0.7970.10.7910.1✓0.80.050.7990.05✓0.7960.020.8020.02✓0.785", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Li Meng; Morten Goodwin; Anis Yazidi; Paal Engelstad
[ { "authors": "Giovanni S Alberti; Johannes Hertrich; Matteo Santacesaria; Silvia Sciutto", "journal": "", "ref_id": "b0", "title": "Manifold learning by mixture models of vaes for inverse problems", "year": "2023" }, { "authors": "Ankesh Anand; Evan Racah; Sherjil Ozair; Yoshua Bengio; Marc-Alexandre Côté; Devon Hjelm", "journal": "Advances in neural information processing systems", "ref_id": "b1", "title": "Unsupervised state representation learning in atari", "year": "2019" }, { "authors": "Mikhail Belkin; Partha Niyogi; Vikas Sindhwani", "journal": "Journal of machine learning research", "ref_id": "b2", "title": "Manifold regularization: A geometric framework for learning from labeled and unlabeled examples", "year": "2006" }, { "authors": "Joan Michael M Bronstein; Yann Bruna; Arthur Lecun; Pierre Szlam; Vandergheynst", "journal": "IEEE Signal Processing Magazine", "ref_id": "b3", "title": "Geometric deep learning: going beyond euclidean data", "year": "2017" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b4", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b5", "title": "Exploring simple siamese representation learning", "year": "2021" }, { "authors": "Adrien Ecoffet; Joost Huizinga; Joel Lehman; Kenneth O Stanley; Jeff Clune", "journal": "", "ref_id": "b6", "title": "Go-explore: a new approach for hard-exploration problems", "year": "2019" }, { "authors": "Michael Gashler; Dan Ventura; Tony Martinez", "journal": "Advances in neural information processing systems", "ref_id": "b7", "title": "Iterative non-linear dimensionality reduction with manifold sculpting", "year": "2007" }, { "authors": "Daniele Grattarola; Lorenzo Livi; Cesare Alippi", "journal": "Applied Soft Computing", "ref_id": "b8", "title": "Adversarial autoencoders with constantcurvature latent manifolds", "year": "2019" }, { "authors": "Karol Gregor; Ivo Danihelka; Alex Graves; Danilo Rezende; Daan Wierstra", "journal": "PMLR", "ref_id": "b9", "title": "Draw: A recurrent neural network for image generation", "year": "2015" }, { "authors": "Arthur Gretton; Karsten M Borgwardt; J Malte; Bernhard Rasch; Alexander Schölkopf; Smola", "journal": "The Journal of Machine Learning Research", "ref_id": "b10", "title": "A kernel two-sample test", "year": "2012" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "Ji Hun Ham; Daniel D Lee; Lawrence K Saul", "journal": "", "ref_id": "b12", "title": "Learning high dimensional correspondences from low dimensional manifolds", "year": "2003" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b13", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Devon Hjelm; Alex Fedorov; Samuel Lavoie-Marchildon; Karan Grewal; Phil Bachman; Adam Trischler; Yoshua Bengio", "journal": "", "ref_id": "b14", "title": "Learning deep representations by mutual information estimation and maximization", "year": "2018" }, { "authors": "Rico Jonschkowski; Oliver Brock", "journal": "Autonomous Robots", "ref_id": "b15", "title": "Learning state representations with robotic priors", "year": "2015" }, { "authors": "Teeratorn Kadeethum; Francesco Ballarin; O' Daniel; Youngsoo Malley; Nikolaos Choi; Hongkyu Bouklas; Yoon", "journal": "Scientific Reports", "ref_id": "b16", "title": "Reduced order modeling for flow and transport problems with barlow twins self-supervised learning", "year": "2022" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b17", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": " Eric O Korman", "journal": "", "ref_id": "b18", "title": "Autoencoding topology", "year": "2018" }, { "authors": " Eric O Korman", "journal": "", "ref_id": "b19", "title": "Atlas based representation and metric learning on manifolds", "year": "2021" }, { "authors": " Eric O Korman", "journal": "", "ref_id": "b20", "title": "Self-supervised representation learning on manifolds", "year": "2021" }, { "authors": "Timothée Lesort; Natalia Díaz-Rodríguez; Jean-Franois Goudou; David Filliat", "journal": "Neural Networks", "ref_id": "b21", "title": "State representation learning for control: An overview", "year": "2018" }, { "authors": "Xiao Liu; Fanjin Zhang; Zhenyu Hou; Li Mian; Zhaoyu Wang; Jing Zhang; Jie Tang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b22", "title": "Selfsupervised learning: Generative or contrastive", "year": "2021" }, { "authors": "Alireza Makhzani; Jonathon Shlens; Navdeep Jaitly; Ian Goodfellow; Brendan Frey", "journal": "", "ref_id": "b23", "title": "Adversarial autoencoders", "year": "2015" }, { "authors": "Mashrabjon Mamatov; Jalolxon Nuritdinov", "journal": "Journal of Applied Mathematics and Physics", "ref_id": "b24", "title": "Some properties of the sum and geometric differences of minkowski", "year": "2020" }, { "authors": "Li Meng; Morten Goodwin; Anis Yazidi; Paal Engelstad", "journal": "IEEE Transactions on Games", "ref_id": "b25", "title": "Improving the diversity of bootstrapped dqn by replacing priors with noise", "year": "2022" }, { "authors": "Junhyuk Oh; Xiaoxiao Guo; Honglak Lee; Richard L Lewis; Satinder Singh", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Action-conditional video prediction using deep networks in atari games", "year": "2015" }, { "authors": "Aaron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b27", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Ian Osband; Benjamin Van; Roy ", "journal": "", "ref_id": "b28", "title": "Bootstrapped thompson sampling and deep exploration", "year": "2015" }, { "authors": "Ian Osband; Charles Blundell; Alexander Pritzel; Benjamin Van; Roy ", "journal": "Advances in neural information processing systems", "ref_id": "b29", "title": "Deep exploration via bootstrapped dqn", "year": "2016" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b30", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b31", "title": "", "year": "2019" }, { "authors": "Nikolaos Pitelis; Chris Russell; Lourdes Agapito", "journal": "", "ref_id": "b32", "title": "Learning a manifold as an atlas", "year": "2013" }, { "authors": "T Sam; Lawrence K Roweis; Saul", "journal": "science", "ref_id": "b33", "title": "Nonlinear dimensionality reduction by locally linear embedding", "year": "2000" }, { "authors": "Joshua B Tenenbaum; Vin De Silva; John C Langford", "journal": "science", "ref_id": "b34", "title": "A global geometric framework for nonlinear dimensionality reduction", "year": "2000" } ]
[ { "formula_coordinates": [ 3, 243.66, 463.81, 128.79, 95.54 ], "formula_id": "formula_0", "formula_text": "U α Z U β ψ α ψ β ψ αβ ψ βα" }, { "formula_coordinates": [ 4, 255.31, 131.18, 248.7, 9.65 ], "formula_id": "formula_1", "formula_text": "(x) = q i (f (x))I(ψ i (f (x)))(1)" }, { "formula_coordinates": [ 4, 183.09, 300.55, 320.92, 17.23 ], "formula_id": "formula_2", "formula_text": "MMD k (P 1 , P 2 ) = ∥ S k(s, •)dP 1 (s) - S k(s, •)dP 2 (s)∥ H k (3)" }, { "formula_coordinates": [ 4, 183.59, 416.62, 320.41, 30.32 ], "formula_id": "formula_3", "formula_text": "L N (q) = -E z MMD k N (q(z), U N ) = -E z n i=1 (q i (z) - 1 n ) 2 (4)" }, { "formula_coordinates": [ 4, 119.21, 515.31, 357.21, 13.38 ], "formula_id": "formula_4", "formula_text": "ψ α (U α ∩ U β ) and ψ β (U α ∩ U β ), which are ψ αβ = ψ β • ψ -1 α and ψ βα = ψ α • ψ -1 β ." }, { "formula_coordinates": [ 4, 108, 680.39, 396, 53.45 ], "formula_id": "formula_5", "formula_text": "ψ i : U i → V i to an open set V i ⊂ R d . We have the Minkowski sum V i + V j = {a + b | a ∈ V i , b ∈ V j }. Then n i=1 ψ i ( n j=1 U j ) ⊂ n i=1 V i . Proof. For any vector a ∈ n i=1 ψ i ( n j=1 U j ), there exists a i ∈ ψ i ( n j=1 U j ) such that a = n i=1 a i . Because ψ i ( n j=1 U j ) ⊂ V i , we also have a i ∈ V i , i ∈ {1, 2, ..., n}. Then n i=1 a i ∈ n i=1 V i , a ∈ n i=1 V i and thus n i=1 ψ i ( n j=1 U j ) ⊂ n i=1 V i ." }, { "formula_coordinates": [ 5, 108, 282.8, 396, 70.11 ], "formula_id": "formula_6", "formula_text": "Proposition 2. Let U = {U 1 , U 2 , ..., U n } be a collection of open subsets of Z whose union is all of Z, and n i=1 U i is not empty. For each i ∈ {1, 2, ..., n}, there is a homeomorphism ψ i : U i → V i to an open set V i ⊂ R d . The multiplication of set V and a scalar λ is defined to be λV = {λa | a ∈ V }. We take the Minkowski sum. If each ψ i ( n j=1 U j ) is convex, then n i=1 1 n ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i ." }, { "formula_coordinates": [ 5, 109.2, 370.85, 394.8, 44.59 ], "formula_id": "formula_7", "formula_text": "1 n n i=1 ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i ." }, { "formula_coordinates": [ 5, 109.2, 396.43, 394.8, 39.55 ], "formula_id": "formula_8", "formula_text": "n i=1 1 n ψ i ( n j=1 U j ) ⊂ 1 n n i=1 V i ." }, { "formula_coordinates": [ 5, 196.99, 559.98, 216.82, 30.2 ], "formula_id": "formula_9", "formula_text": "L GL = M m=1 N n=1 -log exp(g m,n (x t , x t+1 )) xt * ∈Xnext exp(g m,n (x t , x t * ))" }, { "formula_coordinates": [ 5, 196.87, 606.25, 307.13, 30.2 ], "formula_id": "formula_10", "formula_text": "L LL = M m=1 N n=1 -log exp(h m,n (x t , x t+1 )) xt * ∈Xnext exp(h m,n (x t , x t * ))(6)" }, { "formula_coordinates": [ 5, 108, 721.48, 228.02, 11.23 ], "formula_id": "formula_11", "formula_text": "h m,n (x t , x t+1 ) = f m,n (x t ) T W h f m,n (x t+1 ) of ST-DIM." }, { "formula_coordinates": [ 6, 199.64, 130.91, 304.36, 30.32 ], "formula_id": "formula_12", "formula_text": "g m,n (x t , x t+1 ) = [ 1 n n i=i ψ i (f (x t ))] T W g f m,n (x t+1 )(7)" }, { "formula_coordinates": [ 6, 190.77, 207.23, 313.24, 30.32 ], "formula_id": "formula_13", "formula_text": "L Q = - 1 2 n i=1 ((q i (f (x t )) - 1 n ) 2 + (q i (f (x t+1 )) - 1 n ) 2 ) (8)" }, { "formula_coordinates": [ 6, 249.18, 265.52, 254.82, 9.65 ], "formula_id": "formula_14", "formula_text": "L U A = L GL + L LL + τ L Q (9)" } ]
10.48550/arXiv.2108.07258
2023-10-04
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b13", "b23", "b23", "b11", "b10" ], "table_ref": [], "text": "Given a set of target behaviour examples, large language models (LLMs) demonstrate exceptional abilities to accomplish a wide range of tasks, frequently exhibiting performance that surpasses that of humans (Brown et al., 2020;Srivastava et al., 2022). Specifically, LLMs exhibit impressive sequential textual reasoning ability during inference, resulting in a significant boost in their performance when encountered with reasoning questions described in natural languages (Nye et al., 2021;Wei et al., 2022). This phenomenon can be clearly observed with a multi-step chain of intermediate thinking procedure, i.e., a \"Chain of Thought\" (CoT, Wei et al. 2022).\nConventional CoT usually leverages natural languages as intermediate thinking steps in prompting. Although CoT can enhance LLMs' ability in many cases, redundant natural languages and irrelevant information also can hamper the performance of LLMs (Shi et al., 2023) in some cases. For example, spatial languages and descriptions can be hard for language models to understand Mirzaee et al. 2021; Mirzaee and Kordjamshidi 2022 due to complex spatial relationships. Aligning symbols and representing spatial relationships by symbols in word sequences can be a neater representation and thus can be potentially easier to understand by LLMs. We thus explore the use of symbols for LLM prompting, which is still an understudied topic. This is important to study which implies understanding abilities beyond language models for language understanding per se. Figure 1: An example for comparison between Chain-of-Thought (CoT) and Chain-of-Symbol (COS) that elicits large language models in tackling complex planning tasks with higher performance and fewer input tokens. We let the model generate CoT/COS during inference in a few-shot manner. Results were taken in May 2023 with ChatGPT and can be subject to change.\nTo explore the role of symbolic representations in prompting, we take the complex spatial understanding and planning as the evaluation scenarios, which require LLMs to understand the virtual spatial environments described through natural language as well as planning and achieving certain goals in such environments. Inspired by existing classic planning competitions and spatial reasoning datasets, we present three domains: (i) Brick World (ii) NLVR-based Manipulation and (iii) Natural Language Navigation. Figure 1 illustrates an example for Brick World 1D, and all these three tasks are described in detail in Section 2.1. These three tasks are all described in natural language. And we also evaluate one existing spatial question answering dataset SPARTUN (Mirzaee and Kordjamshidi, 2022) which uses human-generated questions thus closer to realistic situations. For these tasks, LLMs need to understand a virtual environment in natural language, with the spatial relationship between the objects to be operated on and the restrictions on the operation, which is easy for real humans. However, we found that there are still places for improvement in the performance of LLMs on the tasks.\nAs a major contribution to this study, we investigate the symbolic representations for spatial relationships, and propose a novel method called Chain-of-Symbol (COS) prompting to elicit spatial understanding and planning abilities on LLMs. As in Figure 1, instead of using intermediate thinking steps described in natural language in CoT prompts shown on the left-hand side, the CoS prompts remove the redundant text description but only using a set of symbols to represent spatial relationships between objects in complex environments. COS achieves noticeable improvement in both performance and efficiency (by up to 60.8% improvements in accuracy and 65.8% for the number of input tokens). We speculate that such an improvement is benefited by the more efficient symbolic representation produced by COS. Our main contributions are three-fold:\n• We evaluate LLMs on both existing classic spatial understanding tasks and our proposed synthetic spatial planning tasks. We spot that there is still room for performance improvements on current LLMs even with CoT.\n• We propose a novel method called COS, which prompts LLMs to convert the complex environment described with natural language into condensed symbolic representations. COS drastically improves LLMs on the spatial tasks. The accuracy gain of COS is large, also with a good reduction in the token consumption for LLMs.\n• We conduct an in-depth analysis on COS to explore the effect of using different symbols, on different LLMs, and different languages to show the robustness of our method. " }, { "figure_ref": [], "heading": "NLVR-based Manipulation", "publication_ref": [], "table_ref": [], "text": "There is a set of roads and a set of landmarks. The start point is bank A. There is a road which is 200 meters long from bank A to bank C. There is a road which is 100 meters long from bank C to house H. There is a road which is 100 meters long from house H to cinema F. There is a road which is 200 meters long from cinema F to store B. There is a road which is 100 meters long from store B to store G. There is a road which is 200 meters long from bank C to house D. There is a road which is 200 meters long from house D to garden J. There is a road which is 100 meters long from bank A to cinema I. There is a road which is 100 meters long from cinema I to house E. From the start point, how to reach the nearest store? \n-" }, { "figure_ref": [], "heading": "SPATIAL PLANNING AND UNDERSTANDING TASKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "NATURAL LANGUAGE SPATIAL PLANNING", "publication_ref": [ "b9", "b20" ], "table_ref": [], "text": "Inspired by classic planning domains and tasks described in Liu et al. (2023) and existing spatial reasoning dataset Suhr et al. (2017), we explore the performance of LLMs in three natural language spatial planning tasks. For all three tasks, we can formulate the problem as given a virtual scenario described by natural language, and a planning question. LLMs should take both the scenario and the question as the input and output correspondingly to solve the question. Such a solution usually contains a series of steps to achieve a final goal. The final test tasks consist of 5,500 evaluation instances, with 4,000 from Brick World, 1,000 from NLVR-based Manipulation, and the remaining 500 from Natural Language Navigation. We use code to generate these instances based on definition of each task." }, { "figure_ref": [], "heading": "BRICK WORLD", "publication_ref": [], "table_ref": [], "text": "Figure 2 demonstrates an instance for Brick World (top), which requires the LLMs to acquire certain bricks by grabbing the bricks sequentially. We explore 1D and 2D scenarios for the Brick Worlds task. Specifically, in the 1D scenario, the relationship between bricks is only vertical. In the 2D scenario, in addition to the vertical relationship, there is also a horizontal relationship, which we express as \"in the front of\". To explore the characteristics of language understanding from LLMs, we investigate different levels of difficulty in the way of describing virtual scenarios. We describe them in increasing levels of difficulty as below.\n• Firstly, we explore labelling bricks from A to Z according to the order of spatial stacking from bottom to top, and the corresponding texts are also described in order from bottom to top, we call this setting \"No shuffle\". • Secondly, we shuffle the order of the corresponding natural language description while maintaining the labelling rules in alphabetic order called \"Shuffle description\". • Thirdly, we shuffled the order of labelling so that the spatial relationships do not correspond to the alphabetic order anymore, but are still described in the order from bottom to top in the text description, called \"Shuffle label\". • Finally, we shuffled both the order of labelling and description. We call it \"Shuffle both\".\nWe use colors to represent the bricks, which enriches the information and increases the difficulty of the tasks. For each setting with 1D and 2D, we create 500 evaluation instances. The final evaluation set consists of 4,000 instances." }, { "figure_ref": [], "heading": "NLVR-BASED MANIPULATION", "publication_ref": [ "b20" ], "table_ref": [], "text": "Figure 2 demonstrates an instance for NLVR-based Manipulation (middle). We convert the format of Natural Language Visual Reasoning (NLVR, Suhr et al. (2017)) tasks into a text-based planning task. Based on the creation rules of synthetic images of NLVR, we create 1,000 natural language descriptions for the virtual spatial environments using Python code. Specifically, for each description, we set three boxes just like NLVR, in the left, middle, and right, and in each box, and there are several objects. Each object has three properties: color, shape, and size. Each description has one related question, the question is about how to move all objects that satisfy a certain condition of one property (such as \"all objects in black\" or \"all rounds\") to a specific target box. The ground truth is the set of all objects satisfied with this condition which needs to be moved (not in the target boxes)." }, { "figure_ref": [], "heading": "NATURAL LANGUAGE NAVIGATION", "publication_ref": [ "b4" ], "table_ref": [], "text": "Figure 2 demonstrates an instance for Natural Language Navigation (bottom). Inspired by Visionand-Language navigation (Gu et al., 2022), we create a virtual spatial environment that is similar to a 2D map of navigation tasks but using natural language description only. Specifically, we define a set of landmarks: ′ store ′ , ′ bank ′ , ′ house ′ , ′ cinema ′ , ′ garden ′ , ′ school ′ . For each description, there are 7 to 10 landmarks. We create 500 evaluation instances using Python code: the relationship between landmarks is a binary tree structure, with a root node which indicates the start point in the virtual scenario, and each node other than the leaf nodes has one or two child nodes, with a distance of 100 meters or 200 meters between them. Each description has one related question which is about how to reach the nearest one specific kind of landmark from the starting point." }, { "figure_ref": [], "heading": "SPATIAL QA", "publication_ref": [ "b10", "b20", "b11" ], "table_ref": [], "text": "We also evaluate COS on manually annotated existing spatial question answering task, SPARTUN (Mirzaee and Kordjamshidi, 2022), which contains a larger variety of spatial relation types and spatial expressions compared with previous Spatial QA datasets and our three synthetic spatial planning tasks. And the questions in this dataset are manually annotated, which is closer to real-world scenes. The scenarios in this dataset are described in natural languages based on NLVR (Suhr et al., 2017) and SPARTQA (Mirzaee et al., 2021)." }, { "figure_ref": [], "heading": "CHAIN-OF-SYMBOL PROMPTING", "publication_ref": [ "b23" ], "table_ref": [], "text": "We propose Chain-of-Symbol (COS) prompting for LLMs, which converts the simulated environment with natural language into a condensed symbolic representation that considers spatial relationship. In order to make our constructing method of COS generalizable and reliable, we adopt a three-step procedure in creating the demonstrations of our COS which can be used in any related tasks:\n• (i) Automatically prompt the LLMs to generate a CoT demonstration in a zero-shot manner • (ii) Correct the generated CoT demonstration if there existing errors.\n• (iii) Replace the spatial relationships described in natural languages in CoT with random symbols, and only keep objects and symbols, remove other descriptions.\nWe then use the COS demonstrations to guide the language model in a few-shot manner for prompting LLMs just like CoT (Wei et al., 2022). Figure 1 depicts an example of a demonstration of CoS produced by models. In this example, we see that both CoT and COS receive the same shared simulated spatial environment in natural language texts. COS depicts a different intermediate thinking process than CoT. The latter represents the environments in a natural language only, while the former use a condensed symbolic representation that considers spatial relationship. Specifically, we use the symbol \"/\" to represent the spatial relationship \"from the top of\" here. By doing such a conversion, and removing redundant descriptions, COS effectively improves the model performance as well as reduces the inference costs with LLMs.\nFigure 2 depicts examples of CoS demonstration for all three planning tasks we proposed. For NLVR-based Manipulation, we convert natural language descriptions for objects to the format of a triplet such as \"(large, round, black)\". For Natural Language Navigation, we represent the order of landmarks by using symbol \"/\" to connect them. For Spatial QA task, we use a set of symbols such as \"=\", \"∼\" to represent different spatial relationships, and use triplet with \"( , , )\" to represent objects and their attributes.\nCoS prompting has multiple properties that are attractive as a prompting approach for LLMs:\n• First, COS effectively allows a neater, shorter, and condensed intermediate procedure than CoT. It is more structured than natural languages, hence easier for human annotators to analyze, check and correct the intermediate thinking process for LLMs.\n• Second, COS improves important planning tasks that current LLMs do not tackle well. It provides a better representing method for spatial environments which is easier for LLMs to learn compared with natural language.\n• Finally, COS clearly reduces the amount of text input into the LLMs and output from LLMs. This makes it much cheaper to access LLMs with API/GPU." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce our experimental setup in Section 4.1 about the settings of different methods we use, the language models, and the evaluation metrics. Then, in Section 4.2, we report the results of the three spatial planning tasks we proposed. In Section 4.3, we report the results on the SPARTUN dataset." }, { "figure_ref": [], "heading": "EXPERIMENTAL SETUP", "publication_ref": [ "b7", "b23", "b23" ], "table_ref": [], "text": "We evaluate CoS and CoT on our proposed three spatial planning tasks and the existing SQA dataset, based on various LLMs like ChatGPT(gpt-3.5-turbo) and text-davinci-003. There are three prompts: zero-shot CoT, few-shot CoT, and few-shot CoS (Ours).\nZero-shot Chain-of-Thought Prompting We consider zero-shot CoT as our baseline. The reason is that we have found that our choices of LLMs naturally give their intermediate steps (CoT) in their answers, even without specifically asking them to do so. We also found that asking them to remove the thinking steps obviously degrades the results. Therefore, we allow the LLMs to generate CoT, while we do not put any demonstration to the prompt but give prompts like \"Let's think step by step\" just as Kojima et al. (2023). For an easier evaluation, we ask the LLMs to output the final results by separating the landmarks with commas.\nChain-of-Thought Prompting This baseline uses a few-shot CoT, in which we encourage LLMs to think step by step, and we use five demonstrations to guide the LLMs in the thinking procedure. Note that the intermediate thinking procedure is represented as natural language text, just like the Standard Prompting. Like in (Wei et al., 2022), we manually crafted five demonstrations for each task to guarantee their correctness. To guarantee the consistency and reliability of the prompts, we follow the format of CoT generated by zeroshot-CoT prompting. We use these fixed five demonstrations for evaluations on each task.\nChain-of-Symbol Prompting As described in Section 3, COS augments the standard CoT prompting with condensed symbolic representation. While CoT has been shown to give large improvements to LLMs on various tasks (Wei et al., 2022), we argue that using condensed symbolic representations can be an alternative to describing using natural language texts. We manually converted from CoT demonstrations to CoS using the procedure described in Section 3. Five CoS demonstrations of the same examples with CoT are created for each task of Natural Language Planning. Language Models We use Text-Davinci-003 and ChatGPT(Gpt-3.5-turbo) for the evaluation of all tasks. We set the temperature to 0 for all the experiments throughout this paper.\nEvaluation Metrics For planning tasks, we use three evaluation metrics, namely accuracy, precision, and recall. We define accuracy as the success rate in achieving the final goal. We then compute the Longest Common Sequence (LCS) between the ground truth and LLM output sequence to measure their similarity. We compute precision as the ratio of LCS against the length of the LLM output, and we compute recall as the ratio of LCS against the length of the ground truth. For spatial QA task, we only compute accuracy." }, { "figure_ref": [], "heading": "RESULTS OF SPATIAL PLANNING TASKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "BRICK WORLD", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 1 reports the results of COS against the zs-CoT and CoT on the task of Brick World. First of all, we can see that the complexity increases both from the 1D scenario to the 2D scenario and from the setting of No Shuffle to the setting of Shuffle Both, together with a drop in the performance.\nChatGPT with zs-CoT does not perform well, with only 9.8% accuracy on the most difficult setting Shuffle Both under the 2D scenario. Although CoT brings some improvements, the performance for CoT is still not satisfying, with an accuracy of 43.0% which is just below the 50% bar for setting Shuffle Both under the 1D scenario. In contrast, we see that COS gives very good improvements on this setting (from 28.2% to 69.7%). We found that COS gives consistent improvements to all the settings on Brick World, clearly surpassing CoT. The largest gain is on the setting of Shuffle Label under the 1D scenario, with 60.8% improvements in accuracy (from 31.8% to 92.6%). We postulate that such improvements come from symbolic spatial representations that are more condensed and easier to be understood by LLMs. Another underlying reason could be the elimination of redundant information, as LLMs can be easily distracted by irrelevant context (Shi et al., 2023)." }, { "figure_ref": [ "fig_1" ], "heading": "FURTHER ANALYSIS OF BRICK WORLD", "publication_ref": [], "table_ref": [ "tab_1", "tab_2", "tab_1" ], "text": "Randomness in the Experiments To investigate the randomness in our experiments, we run multiple trials with three different sets of demonstrations for CoT and COS. Table 1 reports their means and standard deviations. We see a general trend here that COS usually reports a lower standard deviation than CoT (for example, a standard deviation of 1.9 for Acc. for No Shuffle under the 1D scenario for COS, against 11.0 for CoT). This represents that COS is more stable than CoT on Brick World. CoS on the Different Language In addition to the tasks described in English, we also tested COS on Brick World in Chinese. CoT reports 22.9% accuracy, and COS reports 39.1% in 1D scenario of Brick World, which demonstrates the robustness of COS to a language other than English. Robustness to Different Symbols Figure 3 demonstrates the robustness of using different symbols for COS. As we can see, using different symbols brings consistent performance, while not having any symbol drastically impacts the results. Among all the symbols, the comma gives the best results. We conclude that COS is robust to the selection of symbol. Results on Different Language Models Table 2 reports the results on InstructGPT under the 1D scenario. The experimental results align with our previous conclusion that COS outperforms CoT obviously on all of the metrics. Saving Tokens for Prompting One advantage featured by COS is that it reduces the number of input tokens to be fed into LLMs. For Brick World (1D scenario), COS reduces the number of tokens for the intermediate steps from 407 to 139 (Table 1, the numbers are reported from OpenAI Playground1 ). This subsequently saves the costs of accessing LLMs via API/GPUs." }, { "figure_ref": [], "heading": "NLVR-BASED MANIPULATION", "publication_ref": [], "table_ref": [ "tab_3", "tab_3" ], "text": "For the task of NLVR-based Manipulation, we adopt almost the same experimental settings as the ones for Brick World, including both baseline settings as well as our choice of language models. The only difference is the evaluation metrics we report. In contrast to Brick World, we compute precision and recall based on the set rather than the Longest Common Sequence.\nMain Results Table 3 reports the results of NLVR-based Manipulation with both text-davinci-003 and gpt-3.5-turbo. For this task, we see that both two models give an unsatisfying performance. Gpt-3.5-turbo only has an accuracy of 18.6% on its own, with 61.5 % accuracy when CoT is applied. In comparison, InstructGPT with COS gives the highest accuracy (71.7%) among all the models. COS reports a higher performance than COT and the standard prompting on all of the metrics.\nSaving Tokens for Prompting One advantage featured by COS features is that COS reduces the number of input tokens to be fed into LLMs. Table 3 reported that for NLVR-based Manipulation, COS reduces the number of tokens for the intermediate steps from 653 to 534, nearly by half of the original intermediate steps (we separate the tokens by space). This subsequently saves the costs of accessing LLMs via API/GPU, which enables easier access to the models. " }, { "figure_ref": [], "heading": "NATURAL LANGUAGE NAVIGATION", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "For Natural Language Navigation, we adopt almost the same experimental settings as for Brick World. We adopt N s = 3, where N s represents the number of demonstrations for COS and CoT. And we adopt the same evaluation metrics, baseline settings as well as our choice of language models.\nMain Results Table 4 reports the results of Natural Language Navigation with both text-davinci-003 and gpt-3.5-turbo. For this task, we see that both two models give an unsatisfying performance.\nText-davinci-003 only has an accuracy of 32.5% on its own, with 68.7 % accuracy when CoT is applied. In comparison, COS gives much better performance. text-davinci-003 with COS gives the highest score among all the models, which is about 71.7%. COS reports the best performance on all of the metrics.\nSaving Tokens for Prompting One advantage that COS features is that it reduces the number of input tokens to be fed into LLMs. Table 4 reports that for Natural Language Navigation, COS reduces the number of tokens for the intermediate steps from 390 to 239. This subsequently saves the costs of accessing LLMs via API/GPUs, which enables easier access to the models." }, { "figure_ref": [], "heading": "SPATIAL QUESTION ANSWERING", "publication_ref": [ "b10" ], "table_ref": [], "text": "We also explore the effectiveness of COS in a more real-world scenario, by using existing human annotated spatial QA dataset SPARTUN (Mirzaee and Kordjamshidi, 2022). Specifically, we applied both COS and CoT on GPT-3.5-Turbo and GPT-4. COS gains better performance and uses fewer tokens compared with CoT. It indicates in real-world scenarios, where both background descriptions and questions are described with more various expressions, CoS is still a better method than conventional CoT described in natural languages. In table 5, we report the results of performance, and both CoT and CoS have 5 shots. And the results show the superior of the CoS compared with using CoT with better performance and fewer tokens. It should be noticed that there are far more types of spatial relationships in SPARTUN dataset than our proposed planning tasks, so the results indicate CoS can gain promising performance even when there are a lot of symbols to represent different spatial relationships. " }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b0", "b5", "b22", "b24", "b13", "b23", "b15", "b6", "b11", "b6", "b16", "b11", "b21", "b4", "b12", "b3", "b14", "b8" ], "table_ref": [], "text": "Large Language Models Large language models (LLMs) have demonstrated remarkable few-shot learning abilities across various domains (Brown et al., 2020;Srivastava et al., 2022), leading to a paradigm shift in AI to use LLMs as foundational models for language-related tasks, either directly or through fine-tuning (Bommasani et al., 2021;Hu et al., 2022). Srivastava et al. (2022) proposed a benchmark that covers many areas from education to software development. However, the planning task with text environments is overlooked. While less relevant to COS, a concurrent work converts natural language into executable actions for robots with ChatGPT (Wake et al., 2023). Another very recent concurrent work uses Symbol Tuning that replaces natural language labels with arbitrary symbols to improve in-context learning (Wei et al., 2023).\nChain-of-Thought Reasoning The ability of LLMs (Brown et al., 2020;Srivastava et al., 2022) to perform complex reasoning tasks can be significantly enhanced by using a show known as Chain-of-Thought (CoT) prompting, which involves providing them with intermediate reasoning steps (Nye et al., 2021;Wei et al., 2022). Such a phenomenon also generalizes to the multilingual settings (Shi et al., 2023). Despite the fact that CoT is powerful, there are reports that demonstrate that CoT is not always useful and that integrating CoT degrades the performance on the task of Machine Translation in their experiment. And this is possibly due to the word-by-word translation (Peng et al., 2023).\nSpatial Reasoning Spatial reasoning over natural language texts has been an important research direction in the community (Janner et al., 2018;Mirzaee et al., 2021). Janner et al. (2018) proposes to leverage representation learning on a navigation task that requires the agent to move a specific location. Rojowiec et al. (2020) proposes a new task on spatial reasoning that requires the language model to generate natural language instructions for 'before' and 'after' image pairs. Mirzaee et al. (2021) proposes a new benchmark for spatial question-answering with 'which' and 'what' questions regarding the environment. In a concurrent work, Tsai et al. (2023) demonstrates that LLMs perform poorly on text-based games with question-answering tasks that require several steps of reasoning.\nNavigation and Path Planning Language grounding navigation (Gu et al., 2022) refers to the interdisciplinary task that requires the intelligent agent to perceive the visual environment and guide the user to the goal location through natural language instructions (Nguyen et al., 2019;Chen et al., 2019). Path planning (Panov et al., 2018;Krishna Lakshmanan et al., 2020) refers to the tasks that require the agent to plan its own path to achieve certain goals such as the shortest path or maximizing the cleaning area, typically through the use of reinforcement learning. These areas are highly relevant to the spatial planning tasks we explored and COS, as the spatial environments can be potentially represented by symbolic representations. We leave the investigations of these application areas to future studies." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We found that current popular LLMs still lack abilities in complex spatial planning and understanding tasks. To this end, we propose a novel method called COS (Chain-of-Symbol Prompting) that converts spatial relationships described in natural languages to condensed symbolic representations in the chained intermediate thinking steps. COS is easy to use and does not need additional training on LLMs. Extensive experiments indicate that using few-shot COS demonstration clearly surpasses the performance of using CoT described in natural languages on all three spatial planning tasks we proposed and the representative spatial QA benchmark with even fewer tokens (down to about 1/3 tokens of the thinking steps with CoT) used in the inputs compared with CoT prompting. The performance gain is strong, by up to 60.8% accuracy (from 31.8% to 92.6%) on Brick World for ChatGPT.\nLimitations Refer to the Appendix for the section on Broader Impact. In addition, we only use two models to verify the effectiveness of our method due to the limited time and resources. It would be interesting to apply our method to more models with different sizes to see whether there is an emergent ability of CoS for LLMs. Nevertheless, our choices of foundation models are representative and they are popular LLMs." }, { "figure_ref": [], "heading": "A BROADER IMPACT", "publication_ref": [], "table_ref": [], "text": "COS is a prompting technique that is easy to use, which can effectively improve the performance of complex planning with LLMS. It also indicates that future training with LLMs can also be well benefited by employing COS in the training procedure to further improve LLM's planning abilities." }, { "figure_ref": [], "heading": "B EXTENDED SETTINGS B.1 NUMBER OF TOKENS", "publication_ref": [], "table_ref": [], "text": "We have mentioned that we used white spacing for calculating the number of tokens in the intermediate thinking steps. This was a typo and in fact, we accurately measures the number of tokens using the OpenAI Playground. 2 The numbers we reported are correct and there is no need for modification." }, { "figure_ref": [], "heading": "B.2 DESIGNING THE INTERMEDIATE STEPS", "publication_ref": [], "table_ref": [], "text": "The intermediate steps we use in the demonstrations for CoT are created and modified from the zero-shot CoT from the LLMs by simply adding \"Let's think step by step\" before the answer. We then manually correct the intermediate steps from the outputs of using zero-shot CoT for further improvements. We attempted our best efforts in tuning the baselines, and we report the best results we achieved." }, { "figure_ref": [], "heading": "C FEW-SHOT EXEMPLARS", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In the remaining of this section, we demonstrate the few-shot exemplars used in the experiments in our study. We demonstrate the exemplars for both COS and CoT. Answer: Let's think step by step: 1. To get brick B, we find C is on top of B 2. We find D is on top of C 3. We find D is on the top 4. We need to remove brick D, as it is on top of brick C. 5. We need to remove brick C, as it is on top of brick B. 6. Brick B is now accessible and can be grabbed. So we get the result as D, C, B.\nQuestion: There are a set of bricks. The brick P is on top of the brick R . The brick J is on top of the brick B . The brick D is on top of the brick P . The brick R is on top of the brick H . The brick K is in front of the brick M. The brick B is on top of the brick D . For the brick M, the color is blue. The brick C is on top of the brick J . The brick H is in front of the brick K. Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick M?\nAnswer: Let's think step by step: 1. To get brick M, we find there is no brick on top of brick M, So we get the result as M directly.\nQuestion: There are a set of bricks. The brick K is on top of the brick F . The brick M is in front of the brick F. The brick N is on top of the brick K . For the brick O, the color is blue. The brick G is on top of the brick A . The brick F is in front of the brick I. The brick I is in front of the brick O. The brick A is on top of the brick N . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick O?\nAnswer: Let's think step by step: 1. To get brick O, we find there is no brick on top of brick O, So we get the result as O directly.\nQuestion: There are a set of bricks. For the brick A, the color is blue. The yellow brick B is in front of the brick A. The yellow brick C is in front of the brick B. The white brick D is on top of the brick C .\nThe white brick E is on top of the brick D . The yellow brick F is on top of the brick E . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick C?\nAnswer: Let's think step by step: 1. To get brick C, we find C is in front of B. 2. We find D is on top of C. 3. We find E is on top of D. 4. We find F is on top of E. 5. We need to remove brick F, as it is on top of brick E. 6. We need to remove brick E, as it is on top of brick D. 7. We need to remove brick D, as it is on top of brick C. 8. Brick C is now accessible and can be grabbed. So we get the result as F, E, D, C.\nTable 10: Few-shot exemplars for full COS prompt for Natural Language Navigation\nQuestion: There is a set of roads and a set of landmarks. The start point is bank A. There is a road which is 200 meters long from bank A to bank C. There is a road which is 100 meters long from bank C to house H. There is a road which is 100 meters long from house H to cinema F. There is a road which is 200 meters long from cinema F to store B. There is a road which is 100 meters long from store B to store G. There is a road which is 200 meters long from bank C to house D. There is a road which is 200 meters long from house D to garden J. There is a road which is 100 meters long from bank A to cinema I.\nThere is a road which is 100 meters long from cinema I to house E. From the start point, how to reach the nearest store?\nAnswer: There are two stores on the map, store B and store G.\n-bank A / bank C / house H / cinema F / store B (200 + 100 + 100 + 200 = 600) -bank A / bank C / house H / cinema F / store B / store G (200 + 100 + 100 + 200 + 100 = 700) So we get the result as bank A / bank C / house H / cinema F / store B." }, { "figure_ref": [], "heading": "So the answer is ACHFB", "publication_ref": [], "table_ref": [], "text": "Question: There is a set of roads and a set of landmarks. The start point is bank H. There is a road which is 100 meters long from bank H to store E. There is a road which is 200 meters long from store E to bank C. There is a road which is 100 meters long from bank C to house A. There is a road which is 100 meters long from house A to house F. There is a road which is 200 meters long from bank C to garden I. There is a road which is 200 meters long from store E to cinema G. There is a road which is 200 meters long from cinema G to school J. There is a road which is 200 meters long from bank H to school D. There is a road which is 100 meters long from school D to store B. From the start point, how to reach the nearest school?\nAnswer: There are two schools on the map, school J and school D. " }, { "figure_ref": [], "heading": "So the answer is HD", "publication_ref": [], "table_ref": [], "text": "Question: There is a set of roads and a set of landmarks. The start point is garden B. There is a road which is 200 meters long from garden B to cinema D. There is a road which is 200 meters long from cinema D to school J. There is a road which is 100 meters long from garden B to school I. There is a road which is 100 meters long from school I to house C. There is a road which is 100 meters long from house C to bank F. There is a road which is 100 meters long from house C to cinema G. There is a road which is 200 meters long from cinema G to store H. There is a road which is 100 meters long from store H to cinema E. There is a road which is 200 meters long from cinema E to bank A. From the start point, how to reach the nearest school?\nAnswer: There are two schools on the map, school J and school I.\n-garden B / cinema D / school J (200+200 = 400) -garden B / school I (100) We get the result as garden B / school I (100). So the answer is BI (200 meters). So the answer is HD Question: There is a set of roads and a set of landmarks. The start point is garden B. There is a road which is 200 meters long from garden B to cinema D. There is a road which is 200 meters long from cinema D to school J. There is a road which is 100 meters long from garden B to school I. There is a road which is 100 meters long from school I to house C. There is a road which is 100 meters long from house C to bank F. There is a road which is 100 meters long from house C to cinema G. There is a road which is 200 meters long from cinema G to store H. There is a road which is 100 meters long from store H to cinema E. There is a road which is 200 meters long from cinema E to bank A. From the start point, how to reach the nearest school?\nAnswer: Let's think step by step 1. Start at garden B. 2. There are two schools on the map, school J and school I. 3. The road from garden B to school J is from garden B to cinema D to school J (200 meters + 200 meters = 400 meters). 4. The road from garden B to school I is from garden B to school I (100 meters). 5. Therefore, the nearest school from the start point (garden B) is school I. 6. Follow the road from garden B to school I (100 meters). So the answer is BI Story: There are three boxes. In the left box, there are one middle square in yellow, one middle square in black, one small square in blue, one middle square in blue. In the middle box, there are one large square in blue, one middle square in blue, one middle square in black, one large triangle in black, one middle round in blue, one small square in yellow. In the right box, there are one large round in blue, one small triangle in yellow, one large triangle in blue. Question:How to move all rounds to the middle box?\nAnswer: To move all rounds to the middle box, we need to identify all the round objects in the three boxes. They are:\n-middle box: (middle, round, blue) -right box: (large, round, blue)\nThen we can move the objects not in the middle box above to the middle box one by one:\n-(large, round, blue) middle -right Story: There are three boxes. In the left box, there are one small round in blue, one small round in blue, one large round in black, one large square in blue, one small round in yellow, one small round in yellow. In the middle box, there are one large square in blue, one middle triangle in yellow. In the right box, there are one large round in black, one large round in blue, one middle triangle in black, one middle triangle in black. Question:How to move all squares to the right box?\nAnswer: To move all sqaure objects to the right box, we need to identify all the square objects in the three boxes. They are:\n-left box: (large, square, blue) -middle box: (large, square, blue)\nThen we can move the objects not in the right box above to the right box:\n-(large, square, blue) left -right box -(large, square, blue) middle -right box Story: There are three boxes. In the left box, there are one large square in blue, one middle square in blue, one small round in blue, one middle triangle in blue, one middle round in yellow, one large square in yellow. In the middle box, there are one small round in yellow, one middle square in blue, one small triangle in black, one small square in black, one small triangle in yellow, one large round in black.\nIn the right box, there are one small square in yellow, one small triangle in yellow, one middle triangle in black, one large round in yellow, one middle square in blue, one large square in yellow. Question:How to move all black objects to the right box?\nAnswer: To move all black objects to the right box, we need to identify all the black objects in the three boxes. They are:\n-middle box: (small, triangle, black), (small, square, black), (large, round, black) Then we can move the objects not in the right box above to the right box:\n-(small, triangle, black) middle -right -(small, square, black) middle -right -(large, round, black) middle -right Story: There are three boxes. In the left box, there are one middle square in yellow, one middle square in black, one small square in blue, one middle square in blue. In the middle box, there are one large square in blue, one middle square in blue, one middle square in black, one large triangle in black, one middle round in blue, one small square in yellow. In the right box, there are one large round in blue, one small triangle in yellow, one large triangle in blue. Question:How to move all rounds to the middle box?\nAnswer: To move all rounds to the middle box, we need to identify all the round objects in the three boxes. They are:\n-middle box: (middle, round, blue) -right box: (large, round, blue)\nThen we can move the objects not in the middle box above to the middle box:\n-(large, round, blue) right -middle Story: There are three boxes. In the left box, there are one large round in black, one small square in blue, one small square in blue, one small triangle in yellow, one middle square in yellow. In the middle box, there are one large square in yellow, one middle triangle in blue, one large round in black. In the right box, there are one large square in blue, one large triangle in black, one middle triangle in black. Question:How to move all black objects to the left box?\nAnswer: To move all black objects to the left box, we need to identify all the black objects in the three boxes. Background: There are three blocks called A, B, and C. In block A, there is a small blue square far above a small blue circle. There is a small black triangle far to the right of both small blue objects. The black triange is near and above a medium blue square. The medium blue square is touching the right edge of A and is near and above a medium black circle. Near and to the right of the medium black circle is a medium yellow triangle. The yellow triangle is to the left of the small blue circle. In B, which is to the right of A, there is a large blue square near and to the left of a large yellow triangle. There is also a small yellow triangle far below the square. In C, which is to the right of B. there is a small yellow circle near and above a small yellow triangle. The small yellow triangle is touching the top of a medium black triangle. The black triangle is touching the right edge of C. There is a large blue circle far below the black triangle.\nQuestion: Are all yellow objects in C above a blue thing?\nCandidate Answers:[\"Yes\", \"No\", \"Do not Know\"]\nAnswer:\nLet's analyze the information provided step by step: In block C: 1. There a small yellow circle near and above a small yellow triangle. 2. The small yellow triangle is touching the top of a medium black triangle. 3. There is a large blue circle far below the black triangle. Based on the provided information, the small yellow circle and the small yellow triangle in Block C are indeed above a blue thing, which is the large blue circle. So, the correct answer is: \"Yes\"." }, { "figure_ref": [], "heading": "Example 2:", "publication_ref": [], "table_ref": [], "text": "Background: There are three blocks called A, B, and C. A is to the left of B and B is to the left of C. In A, there is a small black triangle near and ot the left of a large blue circle. The large blue circle is near and to the left of a small blue circle. The small blue square is far below a medium yellow square. In B, there is a small yellow circle. In C, there is a medium black triangle that is near and to the left of a small yellow circle. The small yellow circle is near and to the left of a blue triangle. There is a large yellow square touching the bottom of the small yellow circle. Near and below the large yellow square is a large black square.\nQuestion: What is the relation between the large black thing and the small yellow thing?\nCandidate Answers:[\"left\", \"right\", \"above\", \"below\", \"near to\", \"far from\", \"touching\", \"Do Not Know\"]\nAnswer:\nLet's analyze the information provided step by step: In block C: 1. There is a medium black triangle that is near and to the left of a small yellow circle. 2. The small yellow circle is near and to the left of a blue triangle. 3. There is a large yellow square touching the bottom of the small yellow circle. 4. Near and below the large yellow square is a large black square. Now, let's determine the relation between the \"large black thing\" (large black square) and the \"small yellow thing\" (small yellow circle): 1. The large black square is in block C, and it is described as being \"near and below\" the large yellow square. 2. The small yellow circle is also in block C, and it is described as being \"near and to the left of\" the medium black triangle. Based on the provided information, the large black square is below the large yellow square and the small yellow circle is near to the medium black triangle. So, the correct answer: \"below\" and \"near to\"." }, { "figure_ref": [], "heading": "Example 3:", "publication_ref": [], "table_ref": [], "text": "Background: There are three blocks called A, B, and C. A is to the right of B and B is to the right of C. In A, there is a medium black circe that is far to the right of a medium blue square. The medium blue square is near and to the right of a small yellow circle. Far above the blue square is a medium yellow triangle. Near above the medium triangle is a small blue circle. In B, there is a large blue square that is touching the right edge of B. There is a medium black triangle that is touching the left side of the blue square. Near and above the medium black triangle is a large blue circle. Near and above the large blue circle is a large yellow triangle. In C, there is a small yellow triangle near and to the right of a medium black circle. The yellow triangle is touching the right edge of C. Far to the left of the medium black circle is a small blue square.\nQuestion: What block does not have any objects touching the edge of it?\nCandidate Answers: [\"A\", \"B\", \"C\"]\nAnswer:\nLet's analyze the information provided step by step: In block A: 1. There is a medium black circle, which is far to the right of a medium blue square. 2. The medium blue square is near and to the right of a small yellow circle. 3. Far above the blue square is a medium yellow triangle. 4. Near above the medium triangle is a small blue circle. In block B: 1. There is a large blue square that is touching the right edge of B. 2. There is a medium black triangle that is touching the left side of the blue square. 3. Near and above the medium black triangle is a large blue circle. 4. Near and above the large blue circle is a large yellow triangle. In block C: 1. There is a small yellow triangle near and to the right of a medium black circle. 2. The yellow triangle is touching the right edge of C. 3. Far to the left of the medium black circle is a small blue square. Based on the provided information, it's Block A, which does not have any objects touching its edge. So, the correct answer is: \"A\"." }, { "figure_ref": [], "heading": "Example 4:", "publication_ref": [], "table_ref": [], "text": "Background: There are three blocks called A, B, and C. A is to the left of B and B is to the left of C. In A, there is a large black triangle far above a medium black triangle. The medium black triangle is near and above a large yellow circle. Near and to the right of the large circle is a small blue circle. In B, there is a medium blue circle near and to the left of a small black square. Far below the blue circle is a small yellow circle. In C, there is a small blue square.\nQuestion: What object is near the large yellow thing, the medium black triangle or the small blue circle the medium black triangle or the smal blue circle?\nCandidate Answers:[\"the medium black triangle\", \"the smal blue circle\", \"both of them\", \"none of them\"]\nAnswer:\nLet's analyze the information provided step by step: In lock A: 1.There is a large black triangle far above a medium black triangle. 2. The medium black triangle is near and above a large yellow circle. 3. Near and to the right of the large circle is a small blue circle. In block B: 1. There is a medium blue circle near and to the left of a small black square. 2. Far below the blue circle is a small yellow circle. In block C: 1. There is a small blue square. Now, let's analyze the position of the large yellow circle: 1. The large yellow circle is in Block A and is near and above the medium black triangle. 2. The small blue circle is also in Block A and is near and to the right of the large yellow circle. Based on the provided information, both the medium black triangle and the small blue circle are near the large yellow circle in Block A. So, the correct answer is: \"both of them\"." }, { "figure_ref": [], "heading": "Example 5:", "publication_ref": [], "table_ref": [], "text": "Background: There are three blue shapes in a block that we called A. Two medium blue squares are below a big blue square. First medium blue square is to the left of the other medium one which is touching the bottom edge of block A. There is another block below the block A we call it B. A big blue triangle is touching the top edge of this block which is above a small black triangle, a big blue circle and a medium blue triangle. The medium triangle is touching the bottom edge of the block. The circle is below and to the left of the small triangle. It is above the medium triangle.\nQuestion: Is the small triangle below the medium triangle?\nCandidate Answers:[\"Yes\", \"No\", \"Do not Know\"]\nAnswer:\nLet's analyze the information provided step by step: In block B: 1. A big blue triangle is touching the top edge of this block which is above a small black triangle, a big blue circle and a medium blue triangle. 2. The medium triangle is touching the bottom edge of the block. 3. The circle is below and to the left of the small triangle. It is above the medium triangle. Based on the provided information, the small black triangle is above the medium blue triangle. So, the correct answer is: \"No\"." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Table 13: Few-shot exemplars for full COT prompt for NLVR-based Manipulation.\nStory: There are three boxes. In the left box, there are one middle square in yellow, one middle square in black, one small square in blue, one middle square in blue. In the middle box, there are one large square in blue, one middle square in blue, one middle square in black, one large triangle in black, one middle round in blue, one small square in yellow. In the right box, there are one large round in blue, one small triangle in yellow, one large triangle in blue. Question:How to move all rounds to the middle box?\nAnswer: To move all rounds to the middle box, we need to identify all the round objects in the three boxes. They are: -One middle round in blue in the middle box -One large round in blue in the right box Then we can move the boxes above not in the middle box to the middle box one by one: 1. Move the large round in blue from the right box to the middle box. Now all the rounds are in the middle box.\nStory: There are three boxes. In the left box, there are one small round in blue, one small round in blue, one large round in black, one large square in blue, one small round in yellow, one small round in yellow. In the middle box, there are one large square in blue, one middle triangle in yellow. In the right box, there are one large round in black, one large round in blue, one middle triangle in black, one middle triangle in black. Question:How to move all squares to the right box?\nAnswer: To move all sqaure objects to the right box, we need to identify all the square objects in the three boxes. They are: -One large square in blue in the left box -One large square in blue in the middle box Then we can move the box above with the square object to the right box: 1. Move the large square in blue from the left box to the right box. 2. Move the large square in blue from the middle box to the right box. Now all the square objects are in the right box.\nStory: There are three boxes. In the left box, there are one large square in blue, one middle square in blue, one small round in blue, one middle triangle in blue, one middle round in yellow, one large square in yellow. In the middle box, there are one small round in yellow, one middle square in blue, one small triangle in black, one small square in black, one small triangle in yellow, one large round in black. In the right box, there are one small square in yellow, one small triangle in yellow, one middle triangle in black, one large round in yellow, one middle square in blue, one large square in yellow. Question:How to move all black objects to the right box?\nAnswer: To move all black objects to the right box, we need to identify all the black objects in the three boxes. Story: There are three boxes. In the left box, there are one middle square in yellow, one middle square in black, one small square in blue, one middle square in blue. In the middle box, there are one large square in blue, one middle square in blue, one middle square in black, one large triangle in black, one middle round in blue, one small square in yellow. In the right box, there are one large round in blue, one small triangle in yellow, one large triangle in blue. Question:How to move all rounds to the middle box?\nAnswer: To move all rounds to the middle box, we need to identify all the round objects in the three boxes. They are: -One middle round in blue in the middle box -One large round in blue in the right box Then we can move the boxes above not in the middle box to the middle box one by one: 1. Move the large round in blue from the right box to the middle box. Now all the rounds are in the middle box.\nStory: There are three boxes. In the left box, there are one large round in black, one small square in blue, one small square in blue, one small triangle in yellow, one middle square in yellow. In the middle box, there are one large square in yellow, one middle triangle in blue, one large round in black. In the right box, there are one large square in blue, one large triangle in black, one middle triangle in black. Question:How to move all black objects to the left box?\nAnswer: To move all black objects to the left box, we need to identify all the black objects in the three boxes. " } ]
While conventional Chain-of-Thought prompting shows promising performance on various language tasks for LLMs, the spatial scenarios are nearly unexplored. In this paper, we first investigate the performance of LLMs on complex spatial understanding and planning tasks that require LLMs to understand a virtual spatial environment simulated via natural language and act or reason correspondingly in text. By evaluating on classic spatial planning scenarios, we found that current popular LLMs such as ChatGPT still lack abilities to handle spatial relationships in texts. This raises a question: Is the natural language the best way to represent complex spatial environments for LLMs, or are other alternatives such as symbolic representations more efficient and effective for LLMs? To this end, we propose a novel method called COS (Chain-of-Symbol Prompting) that represents the spatial relationships with condensed symbols during the chained intermediate thinking steps. COS is easy to use and does not need additional training on LLMs. Extensive experiments indicate that COS clearly surpasses the performance of the Chainof-Thought (CoT) Prompting described in natural language in all three spatial planning tasks and existing spatial QA benchmark, with even fewer tokens used in the inputs compared with CoT. The performance gain is strong, by up to 60.8% accuracy (from 31.8% to 92.6%) on Brick World for ChatGPT. COS also reduces the number of tokens in the prompt obviously, by up to 65.8% of the tokens (from 407 to 139) for the intermediate steps from demonstrations on Brick World.
CHAIN-OF-SYMBOL PROMPTING FOR SPATIAL RELA-TIONSHIPS IN LARGE LANGUAGE MODELS
[ { "figure_caption": ", round, black): middleleft 2. (middle, triangle, black): rightleft 3. (large, triangle, black): rightleft", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Performance of using different symbols for COS on Brick World 1D (Shuffle Both) in accuracy.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "-bank H / store E / bank C / garden I / cinema G / school J (200 + 200 + 200 = 600) -bank H / school D (200) We get the result as bank H / school D.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "The results of ChatGPT(gpt-3.5-turbo) on Brick World. We report the results with four settings as described in Section 2.2, under both 1D and 2D scenarios. We adopt N s = 5, where N s represents the number of demonstrations for COS and CoT. The best results are bolded. For COS and CoT, we report the average and the standard deviation from three runs with different sets of demonstrations. Acc. represents accuracy, Pre. represents precision, and Rec. represents recall. zs-CoT represents zero-shot CoT. We report the average number of tokens in the intermediate steps.", "figure_data": "ModelNo ShuffleShuffle DescriptionShuffle LabelShuffle BothAcc.Pre.Rec.Acc.Pre.Rec.Acc.Pre.Rec.Acc.Pre.Rec. Tok.1D Scenariozs-CoT61.077.271.960.477.577.531.863.459.828.258.655.3-CoT81.0±11.0 87.7±4.5 90.1±2.6 71.5±9.2 90.7± 3.6 81.8±7.1 75.1±10.1 88.0± 3.6 90.1±0.9 43.0±4.4 71.4±3.3 75.7±1.6 407COS96.6±1.9 98.0±0.8 97.7±0.8 95.9±1.2 97.9± 0.6 97.5±0.3 92.6±2.0 97.0± 1.3 95.9±1.1 69.7±5.1 86.7±4.2 83.6±1.6 1392D Scenariozs-CoT32.753.860.614.831.946.913.032.042.39.830.438.4-CoT25.0±15.6 49.8±9.8 45.0±10.5 21.5±8.2 45.6±5.4 41.2±6.3 21.8±2.3 44.7±5.9 43.2±4.0 14.9±3.4 38.1±2.9 36.4±3.5 546COS60.7±1.9 67.2±1.1 71.3±1.3 33.7±3.2 46.7±0.8 50.0±1.5 23.5±5.0 45.9±0.8 63.0±12.1 28.9±2.3 46.3±1.0 44.4±2.8 341", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The automatic evaluation results with Text-Davinci-003 on Brick World. We report the results with four settings as described in Section 2.2, under the 1D scenario. We adopt N s = 5, where N s represents the number of demonstrations for COS and CoT. The best results are bolded. Acc. represents accuracy, Pre. represents precision, and Rec. represents recall.", "figure_data": "ModelNo ShuffleShuffle DescriptionShuffle LabelShuffle BothAcc.Pre.Rec.Acc.Pre.Rec.Acc.Pre.Rec.Acc.Pre.Rec.1D Scenariozs-CoT51.077.675.251.683.283.143.467.473.021.851.454.6CoT89.089.389.556.272.057.079.483.280.059.674.460.6COS90.090.090.075.890.478.685.086.886.673.487.074.8", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The automatic evaluation results with Text-Davinci-003 and gpt-3.5-turbo on NLVR-based Manipulation. We set N s = 5, where N s represents the number of demonstrations for prompting with COS and CoT. The best results are bolded. We report the average and the standard deviation from three runs with different demonstrations. Acc. represents accuracy, Pre. represents precision, and Rec. represents recall (precision and recall are computed with sets in this case).", "figure_data": "Modeltext-davinci-003gpt-3.5-turboTokensAcc.Pre.Rec.Acc.Pre.Rec.Tok.zs-CoT42.056.979.918.626.919.7-CoT74.2±5.171.2±1.785.7±5.363.9±1.762.6±3.080.4±1.7653COS74.9±3.487.9±1.886.7±3.068.4±2.371.2±1.982.9±2.1534", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The results with Text-Davinci-003 and gpt-3.5-turbo on Natural Language Navigation. Acc. represents accuracy, Pre. represents precision, and Rec. represents recall. We report the average and the standard deviation from three runs with different demonstrations. We report the number of tokens in the intermediate steps from demonstrations as the last column.", "figure_data": "Modeltext-davinci-003gpt-3.5-turboTokensAcc.Pre.Rec.Acc.Pre.Rec.Tok.zs-CoT32.550.064.452.874.079.6-CoT65.6±2.383.8±1.784.1±1.953.6±2.876.3±1.181.7±0.8390COS69.4±4.485.3±2.685.4±3.564.1±3.881.7±1.384.5±0.7239", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The automatic evaluation results with GPT-3.5-Turbo and GPT-4 on SPARTUN dataset. We apply CoT with 5 shots, and CoS with 5 shots (Ours) respectively. We report the number of tokens in the intermediate steps from demonstrations as the last column.", "figure_data": "Model GPT-3.5-Turbo GPT-4 TokensCoT-547.169.8198CoS-549.472.6167", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full Chain-of-Symbol prompt for brick 1D. There is a set of bricks. For brick B, the color is yellow. The yellow brick C is on top of the brick B . The yellow brick A is on top of the brick C . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick B There is a set of bricks. The yellow brick A is on top of the brick C . The yellow brick B is on top of the brick A . For the brick C, the color is white. Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick A? There is a set of bricks. The white brick F is on top of the brick D . The yellow brick B is on top of the brick A . The blue brick D is on top of the brick C . The white brick E is on top of the brick G . For the brick A, the color is blue. The blue brick C is on top of the brick E . The white brick G is on top of the brick B . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick G?", "figure_data": "Answer:A//CC//BIn sum: A//C//BSo we get the result as A, C, B.Answer:B//AA//CIn sum: B//A//CSo we get the result as B, A.Answer:A//EE//DD//BB//CIn sum: A//E//D//B//CSo we get the result as A, E, D.Answer:E//AA//DD//FF//CC//BIn sum: E//A//D//F//C//BSo we get the result as E, A, D, F.Answer:A//B//G//E//C//D//FD//FC//DE//CG//ESo we get the result as F, D, C, E, G.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COT prompt for brick 1D. There is a set of bricks. The yellow brick B is on top of the brick D . For the brick D, the color is white. The yellow brick A is on top of the brick C . The yellow brick E is on top of the brick A .The blue brick C is on top of the brick B . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick D?Answer: The bricks from bottom to top is B, D, C, A, E 1. Remove brick E from the top of brick A. 2. Remove brick A from the top of brick C. 3. Remove brick C from the top of brick B. 4. Now brick B is the topmost yellow brick and can be grabbed, but we need to remove it to get to brick D. 5. Remove brick B from the top of brick D. 6. Now brick D is the topmost white brick and can be grabbed. So we get the result as E, A, C, B, D. There is a set of bricks. For the brick A, the color is blue. The white brick B is on top of the brick C . The blue brick C is on top of the brick A . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick B? There is a set of bricks. The white brick B is on top of the brick C . The yellow brick D is on top of the brick A . For the brick A, the color is yellow. The yellow brick E is on top of the brick B . The white brick C is on top of the brick D . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick D?Answer: The bricks from bottom to top is A, D, C, B, E 1. Remove brick E from the top of brick B. 2. Remove brick B from the top of brick C. 3. Now brick C is the topmost white brick and can be grabbed, but we need to remove it to get to brick D. 4. Remove brick C from the top of brick D. 5. Now brick D is the topmost yellow brick and can be grabbed. So we get the result as E, B, C, D. There is a set of bricks. The yellow brick C is on top of the brick B . The white brick B is on top of the brick A . For the brick A, the color is white. Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick B?", "figure_data": "Question:", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COS prompt for brick 2D. There is a set of bricks. There is a brick A. The brick B is in front of the brick A. The brick C is in front of the brick B. The brick D is on top of the brick B . The brick E is on top of the brick A . The brick F is on top of the brick C . The brick G is on top of the brick D . The brick H is on top of the brick G . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick A? There is a set of bricks. For the brick A. The yellow brick B is on top of the brick A . The blue brick C is on top of the brick B . The blue brick D is on top of the brick C . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick B? There is a set of bricks. The brick P is on top of the brick R . The brick J is on top of the brick B . The brick D is on top of the brick P . The brick R is on top of the brick H . The brick K is in front of the brick M. The brick B is on top of the brick D . For the brick M, the color is blue. The brick C is on top of the brick J . The brick H is in front of the brick K. Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick M? There is a set of bricks. The brick K is on top of the brick F . The brick M is in front of the brick F. The brick N is on top of the brick K . For the brick O, the color is blue. The brick G is on top of the brick A . The brick F is in front of the brick I. The brick I is in front of the brick O. The brick A is on top of the brick N . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick O?", "figure_data": "Answer:We can getA//E,C//F,B//D//G//H,So we get the result as E, A.Answer:We can getA//B//C//D.Answer:We can getC->B->A,C//D//E//F.So we get the result as F, E, D, C.Answer:We can getH//R//P//D//B//J//C,M-> K -> H.So we get the result as M directly.Answer:We can getF//K//N//A//G,F->I->O,So we get the result as O directly.", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COT prompt for brick 2D. There are a set of bricks. There is a brick A. The brick B is in front of the brick A. The brick C is in front of the brick B. The brick D is on top of the brick B . The brick E is on top of the brick A . The brick F is on top of the brick C . The brick G is on top of the brick D . The brick H is on top of the brick G . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick A? There are a set of bricks. For the brick A. The yellow brick B is on top of the brick A . The blue brick C is on top of the brick B . The blue brick D is on top of the brick C . Now we have to get a specific brick. The bricks must now be grabbed from top to bottom, and if the lower brick is to be grabbed, the upper brick must be removed first. How to get brick B?", "figure_data": "Answer: Let's think step by step:1. To get brick A, we find E is on top of A.2. We find E is on the top.3 We need to remove brick E first, as it is on top of brick A.4. Brick A is now accessible and can be grabbed.So we get the result as E, A.", "figure_id": "tab_9", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COT prompt for Natural Language NavigationQuestion: There is a set of roads and a set of landmarks. The start point is bank A. There is a road which is 200 meters long from bank A to bank C. There is a road which is 100 meters long from bank C to house H. There is a road which is 100 meters long from house H to cinema F. There is a road which is 200 meters long from cinema F to store B. There is a road which is 100 meters long from store B to store G. There is a road which is 200 meters long from bank C to house D. There is a road which is 200 meters long from house D to garden J. There is a road which is 100 meters long from bank A to cinema I. There is a road which is 100 meters long from cinema I to house E. From the start point, how to reach the nearest store?Answer: Let's think step by step 1. Start at bank A. 2. There are two stores on the map, store B and store G. 3. The road from bank A to store B is from bank A to bank C to house H to cinema F to store B (200 meters + 100 meters + 100 meters + 200 meters = 600 meters) 4. The road from bank A to store B is shorter than the road from bank A to store G. 5. Therefore, the nearest store from the start point (bank A) is store B. So the answer is ACHFB.Question: There is a set of roads and a set of landmarks. The start point is bank H. There is a road which is 100 meters long from bank H to store E. There is a road which is 200 meters long from store E to bank C. There is a road which is 100 meters long from bank C to house A. There is a road which is 100 meters long from house A to house F. There is a road which is 200 meters long from bank C to garden I. There is a road which is 200 meters long from store E to cinema G. There is a road which is 200 meters long from cinema G to school J. There is a road which is 200 meters long from bank H to school D. There is a road which is 100 meters long from school D to store B. From the start point, how to reach the nearest school?", "figure_data": "Answer: Let's think step by step 1. Start at bank H. 2. There are two schools on the map, school J andschool D. 3. The road from bank H to school J is from bank H to store E to cinema G to school J (200meters + 200 meters + 200 meters = 600 meters). 4. The road from bank H to school D is from bank Hto school D (200 meters). 5. Therefore, the nearest school from the start point (bank H) is school D. 6.Follow the road from bank H to school D", "figure_id": "tab_10", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COS prompt for NLVR-based Manipulation.", "figure_data": "", "figure_id": "tab_11", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Few-shot exemplars for full COT prompt for SPARTQA.", "figure_data": "Example 1:", "figure_id": "tab_13", "figure_label": "14", "figure_type": "table" } ]
Hanxu Hu; Hongyuan Lu; Huajian Zhang; Yun-Ze Song; Wai Lam; Yue Zhang
[ { "authors": "Rishi Bommasani; Drew A Hudson; Ehsan Adeli; Russ Altman; Simran Arora; Michael S Sydney Von Arx; Jeannette Bernstein; Antoine Bohg; Emma Bosselut; Erik Brunskill; Shyamal Brynjolfsson; Dallas Buch; Rodrigo Card; Niladri Castellon; Annie Chatterji; Kathleen Chen; Jared Quincy Creel; Dora Davis; Chris Demszky; Moussa Donahue; Esin Doumbouya; Stefano Durmus; John Ermon; Kawin Etchemendy; Li Ethayarajh; Chelsea Fei-Fei; Trevor Finn; Lauren Gale; Karan Gillespie; Noah Goel; Shelby Goodman; Neel Grossman; Tatsunori Guha; Peter Hashimoto; John Henderson; Daniel E Hewitt; Jenny Ho; Kyle Hong; Jing Hsu; Thomas Huang; Saahil Icard; Dan Jain; Pratyusha Jurafsky; Siddharth Kalluri; Geoff Karamcheti; Fereshte Keeling; Omar Khani; Pang Khattab; Mark Wei Koh; Ranjay Krass; Rohith Krishna; Ananya Kuditipudi; Faisal Kumar; Mina Ladhak; Tony Lee; Jure Lee; Isabelle Leskovec; Levent; Lisa Xiang; Xuechen Li; Tengyu Li; Ali Ma; Christopher D Malik; Suvir Manning; Eric Mirchandani; Zanele Mitchell; Suraj Munyikwa; Avanika Nair; Deepak Narayan; Ben Narayanan; Allen Newman; Juan Carlos Nie; Hamed Niebles; Julian Nilforoshan; Giray Nyarko; Laurel Ogut; Isabel Orr; Papadimitriou; Sung Joon; Chris Park; Eva Piech; Christopher Portelance; Aditi Potts; Rob Raghunathan; Hongyu Reich; Frieda Ren; Yusuf Rong; Camilo Roohani; Jack Ruiz; Christopher Ryan; Dorsa Ré; Shiori Sadigh; Keshav Sagawa; Andy Santhanam; Krishnan Shih; Alex Srinivasan; Rohan Tamkin; Armin W Taori; Florian Thomas; Rose E Tramèr; William Wang; Bohan Wang; Jiajun Wu; Yuhuai Wu; Sang Wu; Michihiro Michael Xie; Jiaxuan Yasunaga; Matei You; Michael Zaharia; Tianyi Zhang; Xikun Zhang; Yuhui Zhang; Lucia Zhang; Kaitlyn Zheng; Percy Zhou; Liang", "journal": "", "ref_id": "b0", "title": "On the Opportunities and Risks of Foundation Models", "year": "2021-08" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b2", "title": "", "year": "2020" }, { "authors": "Howard Chen; Alane Suhr; Dipendra Misra; Noah Snavely; Yoav Artzi", "journal": "", "ref_id": "b3", "title": "Touchdown: Natural language navigation and spatial reasoning in visual street environments", "year": "2019" }, { "authors": "Jing Gu; Eliana Stefani; Qi Wu; Jesse Thomason; Xin Wang", "journal": "", "ref_id": "b4", "title": "Vision-and-language navigation: A survey of tasks, methods, and future directions", "year": "2022-05" }, { "authors": "J Edward; Phillip Hu; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b5", "title": "LoRA: Low-rank adaptation of large language models", "year": "2022" }, { "authors": "Michael Janner; Karthik Narasimhan; Regina Barzilay", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b6", "title": "Representation learning for grounded spatial reasoning", "year": "2018" }, { "authors": "Takeshi Kojima; Shane Shixiang; Machel Gu; Yutaka Reid; Yusuke Matsuo; Iwasawa", "journal": "", "ref_id": "b7", "title": "Large language models are zero-shot reasoners", "year": "2023" }, { "authors": "Anirudh Krishna Lakshmanan; Rajesh Elara Mohan; Balakrishnan Ramalingam; Anh Vu Le; Prabahar Veerajagadeshwar; Kamlesh Tiwari; Muhammad Ilyas", "journal": "Automation in Construction", "ref_id": "b8", "title": "Complete coverage path planning using reinforcement learning for tetromino based cleaning and maintenance robot", "year": "2020" }, { "authors": "Bo Liu; Yuqian Jiang; Xiaohan Zhang; Qiang Liu; Shiqi Zhang; Joydeep Biswas; Peter Stone", "journal": "", "ref_id": "b9", "title": "Llm+p: Empowering large language models with optimal planning proficiency", "year": "2023" }, { "authors": "Roshanak Mirzaee; Parisa Kordjamshidi", "journal": "", "ref_id": "b10", "title": "Transfer learning with synthetic corpora for spatial role labeling and reasoning", "year": "2022-12" }, { "authors": "Roshanak Mirzaee; Rajaby Hossein; Qiang Faghihi; Parisa Ning; Kordjamshidi", "journal": "", "ref_id": "b11", "title": "SPARTQA: A textual question answering benchmark for spatial reasoning", "year": "2021-06" }, { "authors": "Khanh Nguyen; Debadeepta Dey; Chris Brockett; Bill Dolan", "journal": "", "ref_id": "b12", "title": "Vision-based navigation with language-based assistance via imitation learning with indirect intervention", "year": "2019" }, { "authors": "Maxwell Nye; Anders Johan Andreassen; Guy Gur-Ari; Henryk Michalewski; Jacob Austin; David Bieber; David Dohan; Aitor Lewkowycz; Maarten Bosma; David Luan; Charles Sutton; Augustus Odena", "journal": "", "ref_id": "b13", "title": "Show Your Work: Scratchpads for Intermediate Computation with Language Models", "year": "2021-11" }, { "authors": "I Aleksandr; Konstantin S Panov; Roman Yakovlev; Suvorov", "journal": "", "ref_id": "b14", "title": "Grid path planning with deep reinforcement learning: Preliminary results", "year": "2017" }, { "authors": "Keqin Peng; Liang Ding; Qihuang Zhong; Li Shen; Xuebo Liu; Min Zhang; Yuanxin Ouyang; Dacheng Tao", "journal": "", "ref_id": "b15", "title": "Towards Making the Most of ChatGPT for Machine Translation", "year": "2023-03" }, { "authors": "Robin Rojowiec; Jana Götze; Philipp Sadler; Henrik Voigt; Sina Zarrieß; David Schlangen", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "From \"before\" to \"after\": Generating natural language instructions from image pairs in a simple visual domain", "year": "2020-12" }, { "authors": "Freda Shi; Xinyun Chen; Kanishka Misra; Nathan Scales; David Dohan; Ed Chi; Nathanael Schärli; Denny Zhou", "journal": "", "ref_id": "b17", "title": "Large Language Models Can Be Easily Distracted by Irrelevant Context", "year": "2023-01" }, { "authors": "Freda Shi; Mirac Suzgun; Markus Freitag; Xuezhi Wang; Suraj Srivats; Soroush Vosoughi; Hyung Won Chung; Yi Tay; Sebastian Ruder; Denny Zhou; Dipanjan Das; Jason Wei", "journal": "", "ref_id": "b18", "title": "Language models are multilingual chain-of-thought reasoners", "year": "2023" }, { "authors": "Aarohi Srivastava; Abhinav Rastogi; Abhishek Rao; Abu Awal; Md Shoeb; Abubakar Abid; Adam Fisch; Adam R Brown; Adam Santoro; Aditya Gupta; Adrià Garriga-Alonso; Agnieszka Kluska; Aitor Lewkowycz; Akshat Agarwal; Alethea Power; Alex Ray; Alex Warstadt; Alexander W Kocurek; Ali Safaya; Ali Tazarv; Alice Xiang; Alicia Parrish; Allen Nie; Aman Hussain; Amanda Askell; Amanda Dsouza; Ambrose Slone; Ameet Rahane; Anantharaman S Iyer; Anders Andreassen; Andrea Madotto; Andrea Santilli; Andreas Stuhlmüller; Andrew Dai; Andrew La; Andrew Lampinen; Andy Zou; Angela Jiang; Angelica Chen; Anh Vuong; Animesh Gupta; Anna Gottardi; Antonio Norelli; Anu Venkatesh; Arash Gholamidavoodi; Arfa Tabassum; Arul Menezes; Arun Kirubarajan; Asher Mullokandov; Ashish Sabharwal; Austin Herrick; Avia Efrat; Aykut Erdem; Ayla Karakaş; B Ryan Roberts; Bao Sheng Loe; Barret Zoph; Bartłomiej Bojanowski; Batuhan Özyurt; Behnam Hedayatnia; Behnam Neyshabur; Benjamin Inden; Benno Stein; Berk Ekmekci; Bill Yuchen Lin; Blake Howald; Cameron Diao; Cameron Dour; Catherine Stinson; Cedrick Argueta; César Ferri Ramírez; Chandan Singh; Charles Rathkopf; Chenlin Meng; Chitta Baral; Chiyu Wu; Chris Callison-Burch; Chris Waites; Christian Voigt; Christopher D Manning; Christopher Potts; Cindy Ramirez; Clara E Rivera; Clemencia Siro; Colin Raffel; Courtney Ashcraft; Cristina Garbacea; Damien Sileo; Dan Garrette; Dan Hendrycks; Dan Kilman; Dan Roth; Daniel Freeman; Daniel Khashabi; Daniel Levy; Daniel Moseguí González; Danielle Perszyk; Danny Hernandez; Danqi Chen; Daphne Ippolito; Dar Gilboa; David Dohan; David Drakard; David Jurgens; Debajyoti Datta; Deep Ganguli; Denis Emelin; Denis Kleyko; Deniz Yuret; Derek Chen; Derek Tam; Dieuwke Hupkes; Diganta Misra; Dilyar Buzan; Dimitri Coelho Mollo; Diyi Yang; Dong-Ho Lee; Ekaterina Shutova; Ekin Dogus Cubuk; Elad Segal; Eleanor Hagerman; Elizabeth Barnes; Elizabeth Donoway; Ellie Pavlick; Emanuele Rodola; Emma Lam; Eric Chu; Eric Tang; Erkut Erdem; Ernie Chang; Ethan A Chi; Ethan Dyer; Ethan Jerzak; Ethan Kim; Eunice Engefu Manyasi; Evgenii Zheltonozhskii; Fanyue Xia; Fatemeh Siar; Fernando Martínez-Plumed; Francesca Happé; Francois Chollet; Frieda Rong; Gaurav Mishra; Genta Indra Winata; Gerard De Melo; Germán Kruszewski; Giambattista Parascandolo; Giorgio Mariani; Gloria Wang; Gonzalo Jaimovitch-López; Gregor Betz; Guy Gur-Ari; Hana Galijasevic; Hannah ", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Alane Suhr; Mike Lewis; James Yeh; Yoav Artzi", "journal": "", "ref_id": "b20", "title": "A corpus of natural language for visual reasoning", "year": "2017-07" }, { "authors": "Chen Feng Tsai; Xiaochen Zhou; Sierra S Liu; Jing Li; Mo Yu; Hongyuan Mei", "journal": "", "ref_id": "b21", "title": "Can Large Language Models Play Text Games Well? Current State-of-the-Art and Open Questions", "year": "2023-04" }, { "authors": "Naoki Wake; Atsushi Kanehira; Kazuhiro Sasabuchi; Jun Takamatsu; Katsushi Ikeuchi", "journal": "", "ref_id": "b22", "title": "Chatgpt empowered long-step robot control in various environments: A case application", "year": "2023" }, { "authors": "Jason Wei; Xuezhi Wang; Dale Schuurmans; Maarten Bosma; Fei Xia; Ed H Chi; Denny Quoc V Le; Zhou", "journal": "", "ref_id": "b23", "title": "Chain of thought prompting elicits reasoning in large language models", "year": "2022" }, { "authors": "Jerry Wei; Le Hou; Andrew Lampinen; Xiangning Chen; Da Huang; Yi Tay; Xinyun Chen; Yifeng Lu; Denny Zhou; Tengyu Ma; V Quoc; Le", "journal": "", "ref_id": "b24", "title": "(small, yellow, circle) ↑ (small, yellow, triangle) 2. (small, yellow, triangle) = top of (medium, black, triangle) 3. (large, blue, circle) ∞↓ (", "year": "2023-05" }, { "authors": "", "journal": "", "ref_id": "b25", "title": "small, blue, square) ∞↓ (medium, yellow, square) B: 1. (small, yellow, circle) C: 1. (medium, black, triangle) < (small, yellow, circle) 2. (small, yellow, circle) < (blue, triangle) 3. (large, yellow, square) =↓ (small, yellow, circle) 4. (large, black, square) ↓ (large, yellow, square) Now, let's determine the relation between the (large, black, square) and the (small, yellow, circle): 1. C: (large, black, square) ↓ (large, yellow, square). Therefore, (large, black, square)", "year": "" }, { "authors": "", "journal": "", "ref_id": "b26", "title": "square)", "year": "" }, { "authors": " Medium", "journal": "", "ref_id": "b27", "title": "yellow, triangle) ∞↑ (blue, square)", "year": "" }, { "authors": "", "journal": "", "ref_id": "b28", "title": "small, blue, circle) ↑ (medium, triangle). B: 1. (large, blue, square) => edge of B. 2. (", "year": "" }, { "authors": " Large", "journal": "", "ref_id": "b29", "title": "circle) ↑", "year": "" }, { "authors": "Yellow Large", "journal": "", "ref_id": "b30", "title": "triangle) ↑ (large, blue, circle). C: 1. (small, yellow, triangle)", "year": "" }, { "authors": "Yellow Small", "journal": "", "ref_id": "b31", "title": "circle) ∞↓ (blue, circle). C: 1. there is a small blue square. Now, let's analyze the position of the (large, yellow, circle)", "year": "" }, { "authors": "A ", "journal": "", "ref_id": "b32", "title": "(big, blue, triangle) = top edge of B 2. (big, blue, triangle) ↑ (small, black, triangle), (big, blue, circle) and (medium", "year": "" } ]
[ { "formula_coordinates": [ 3, 145.39, 276.49, 1.76, 4.14 ], "formula_id": "formula_0", "formula_text": "-" } ]
10.18653/v1/P19-1264
2023-10-25
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b48" ], "table_ref": [], "text": "The concept of entropy from Information Theory is broadly applied in Natural Language Processing (NLP) technology and computational linguistic studies. The most notable example is the use of cross-entropy in training and evaluating language models, where the exponentiation of cross-entropy, perplexity, is adopted to measure models' performance in next-word (or masked-word) prediction task. However, low perplexity alone does not guarantee good performance in language generation tasks, which not only depend on model sizes but are also closely related to the sampling techniques used in decoding stage. The complexity of the generation task makes it especially important to have different metrics that can reflect the generation quality from multiple angles. One particular perspective is that the language generated from a good model should have a similar distribution of words/tokens as in the \"natural\" human language.\nRecent advances in psycholinguistics put forward new directions for developing more sophisticated metrics other than Zipf's coefficient. In particular, studies on temporal and spectral patterns in dialogue [7,47] reveal that cross-entropy (or referred to as surprisal, information density in the psycholinguistics literature) changes periodically in natural language, which points out the potentials of using fine-grained transformation of cross-entropy to quantify the differences in language data (see Section 3 for a detailed review). It motivates the basic idea of this study: Can we effectively quantify the periodical pattern of the cross-entropy, and use it as an indicator to distinguish human and model-generated languages?\nWe summarize our contributions as follows: 1. We propose a set of metrics based on the frequency spectra obtained from the Fast Fourier Transform (FFT) of the cross-entropy sequences of language data, named FACE (Fourier Analysis of Cross-Entropy). 2. We empirically show FACE's performance on identifying human-model gap and how it scales with model sizes in Section 4.1. 3. We explore FACE's correlations with sampling methods and human evaluation in Section 4.2 and Section 4.3, respectively. 4. We validate the statistical soundness of FACE in Section 4.4. 5. We discuss an intuitive interpretation of the metrics and how it reflects the characteristics of language use in Section 4.5. 6. Implementation and experiments code are available in this public repository: https://github.com/CLCS-SUSTech/FACE." }, { "figure_ref": [ "fig_0" ], "heading": "FACE", "publication_ref": [], "table_ref": [], "text": "The basic idea of FACE is to obtain the spectra of cross-entropy from different data sources (human or models) and compute their similarities. The overall workflow is shown in Figure 1, which we describe in five steps:\n1. Collect the datasets for human-written and model-generated texts, D h and D m . 2. Use a third pre-trained language model m est to estimate the cross-entropy of text in D h and D m , resulting in two sequences of cross-entropy output, E h and E m . 3. Obtain the frequency spectra for each cross-entropy sequences, E h ⇒ F h and E m ⇒ F m . 4. Develop FACE metrics that quantify the spectral similarity between F h and F m . 5. Evaluate FACE on different model types/sizes, sampling methods, and the correlations with other metrics for Natural Language Generation (NLG).\nWe describe the steps in detail from Section 2.1 to Section 2.3." }, { "figure_ref": [], "heading": "Estimate cross-entropy", "publication_ref": [ "b15", "b16", "b25", "b19", "b49", "b12", "b13", "b47", "b48", "b38" ], "table_ref": [], "text": "We use a pre-trained language model m est as the estimator for cross-entropy, which runs in the evaluation model (no gradients produced). It takes as input a sequence of T tokens, [t 1 , t 2 , . . . , t T ]; for each position i = 1, . . . , T , it predicts the probability of the next token P (t i+1 |t 1 , . . . , t i ); the cross-entropy between this probability and the ground truth token t i+1 is then computed, resulting in the cross-entropy sequence that consists of T -\n1 real values E = [c 1 , c 2 , . . . , c T -1 ],\nas the first token is not predicted:\nE = [c1, c2, . . . , cT -1] ≜ [-log P (t2|t1), -log P (t3|t1, t2), . . . , -log P (tT |t1, t2, . . . , tT -1)] (1)\nNote that\nc i = - T i=2 log P (t i |t 1 . . . t i-1\n) is exactly the definition of negative log-likelihood loss, i.e., cross-entropy loss, for training a language model, where c i is the negative logarithm of the predicted probability for each token t i+1 . In psycholinguistic studies, this c i quantity is usually referred to several different terms, including surprisal [16,17], information density [25,20,48], and entropy [13,14,46,47], each of which has a specific theoretical flavor. There have been debates over the justifiability of using \"entropy\" to denote the negative log-likelihood, because it is not a weighted summation as originally defined in [37]. Albeit, we decide to use cross-entropy as it is the most broadly communicated term and we believe it will not cause confusion as its mathematical form is clearly defined. Apparently, the choice for m est will influence the next steps, because better \nℱ ! ∈ ℝ \" ! , ℱ # ∈ ℝ \" \" Interpolated & absolute: ℱ′ ! , ℱ′ # ∈ ℝ \" # 𝑆𝑂 = 𝐴𝑈𝐶(|ℱ $ ! | ∩ |ℱ $ ′ # |) 𝐴𝑈𝐶(|ℱ $ ! | ∪ |ℱ $ # |) 𝐶𝑂𝑅𝑅 = 𝑐𝑜𝑣(ℱ $ ! , ℱ $ # ) 𝜎 ℱ $ ! 𝜎(ℱ $ # ) 𝑆𝐴𝑀 = 𝑎𝑟𝑐𝑐𝑜𝑠 ℱ $ ! ⋅ ℱ $ # 𝜎 ℱ $ ! 𝜎 ℱ $ # 𝑆𝑃𝐸𝐴𝑅 = 1 - 6 ∑ |𝑋 ! (𝜔 % )| -|𝑋 # (𝜔 % )| & 𝑁 ' (𝑁 ' & -1)\nFour FACE metrics language models produce lower perplexity scores, that is, lower cross entropy. Therefore, we discuss how different choices for m est affect our metrics in Section 4.4." }, { "figure_ref": [], "heading": "Fast Fourier transform", "publication_ref": [ "b40", "b4", "b44" ], "table_ref": [], "text": "We treat the estimated cross-entropy sequence [c 1 , . . . , c T -1 ] as a finite discrete signal in the time domain, where the sampling interval is approximated with the average duration of one token. With this simplified assumption, we find that the discrete Fourier transform (DFT) is the most suitable spectral analysis tool [39] 3 . The formula for DFT is as follows:\nX(ω k ) ≜ N -1 n=0 x(t n )e -jω k tn , k = 0, 1, . . . , N -1(2)\nin which x(t n ) is the signal at time t n , corresponding to the n-th cross-entropy value c n (n = 1 . . . , T -1 and N ≜ T -1). X(ω k ) is a complex number that reflects the magnitude (strength) of the k-th frequency component ω k = 2πk/N . In practice, DFT is implemented with an efficient algorithm known as Fast Fourier Transform [5] that runs in O(n log n) time.\nWe compared two methods, periodogram and vanilla FFT. The periodogram approach computes the Fourier transform after applying auto-correlation and time-averaging windows to the signal for de-noising purposes [43]. However, we think de-noising is inappropriate because our \"signal\" is a time series of cross-entropy, whose value reflects the sampling result at each time step from a large. Auto-correlation or time averaging will remove the distinctiveness of rare tokens. Therefore, we use vanilla FFT and take the real part of X(ω k ) to represent the magnitude spectrum for the frequency component ω k , which is written as X(ω k ) for brevity.\nFor an input cross-entropy sequence E = [c 1 , . . . , c T -1 ] obtained from Section 2.1, the resulting frequency spectrum can be represented as a list of tuples of the same length,\nF = [⟨ω 1 , X(ω 1 )⟩, . . . , ⟨ω T -1 , X(ω T -1 )⟩],\nwhere [ω 1 , . . . , ω T -1 ] are the T -1 sample frequencies, and [X(ω 1 ), . . . , X(ω T -1 )] are the corresponding magnitudes." }, { "figure_ref": [ "fig_2" ], "heading": "Spectral similarity metrics", "publication_ref": [ "b5", "b28", "b48", "b21", "b21", "b1", "b21", "b42" ], "table_ref": [], "text": "We develop four metrics to measure the similarity between spectra F h and F m : Spectral Overlap (SO), Spectrum Angle Mapper (SAM) [6], Pearson's correlation (CORR), and Spearman's correlation (SPEAR), as summarized in Figure 2. Before computing the metrics, two spectra F h and F m which are of different lengths N 1 and N 2 , are first interpolated to the same length:\nF h ∈ R N1 ⇒ F ′ h ∈ R N C , F m ∈ R N2 ⇒ F ′ m ∈ R N C .\nHere, N C is the maximum length of the spectrum in our data. Thereafter, the computation of the subsequent metrics can commence.\nSpectral Overlap (SO) is inspired by the power spectrum overlap proposed in [28], which is used in [47] for measuring the spectral similarity between dialogue participants. The frequency magnitudes in F ′ h and F ′ m are converted to absolute values, i.e., X(ω k ) ⇒ |X(ω k )|, and then compute the Area-Under-Curve (AUC) for the interaction F ′ h ∩ F ′ m and the union F ′ h ∪ F ′ m , respectively. SO is defined as the ratio of the two:\nSO = AUC(F ′ h ∩ F ′ m )/AUC(F ′ h ∪ F ′ m )\n. The procedure of converting to absolute values is indispensable, since negative values in X(ω k ) will result in negative AUCs. SO has the range [0, 1], and a higher value indicates a stronger resemblance between the two spectra.\nSpectrum Angle Mapper (SAM) calculates the angles between F ′ h and F ′ m , treating them as two vectors in a space [22]. The angle is measured in radians, which is calculated by the inverse function\narccos(F ′ h • F ′ m /||F ′ h || • ||F ′ m ||)\n, producing a value within [0, π/4]. We understand SAM is equivalent to the cosine similarity score, which is more commonly-used in NLP, but here we just follow the conventions in [22,2]. A smaller SAM value indicates a greater similarity between F ′ h and F ′ m . Pearson's correlation (CORR) can also be leveraged to measure spectral similarities as discussed in [22].\nCORR = cov(F ′ h , F ′ m )/σ(F ′ h )σ(F ′ m )\n, with a [-1, 1] range. A positive CORR value indicates high similarity (negative for dissimilarity), and 0 indicates weak correlation between F ′ h and F ′ m . Spearman's correlation (SPEAR) [41] is commonly used to assess the monotonic relationship between the comparison and reference groups and to capture the presence of non-linear associations between the two. It has not been used for spectral similarity to the best of our knowledge, but we test it in our experiments. SPEAR also has the range [-1, 1] with meanings similar to CORR." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b12", "b13", "b47", "b33", "b41", "b15", "b24", "b16", "b25", "b19", "b49", "b14", "b6", "b49", "b11", "b30", "b18", "b53", "b29", "b45", "b43", "b10", "b3", "b37", "b39", "b51", "b52", "b31", "b8", "b22", "b35", "b11", "b31", "b18", "b26", "b17", "b2" ], "table_ref": [ "tab_1" ], "text": "Entropy as a metric in psycholinguistics. The entropy of human language has long been a research interest in computational linguistics and psycholinguistics. The entropy of written text is estimated with the average per-word negative log-probability in sentences, and then used to validate the principle of entropy rate constancy (ERC) in human language [13,14]. Similar studies were conducted in dialogue [46,32]. Entropy is also defined in probabilistic grammars to describe the capacity of a language [40], and is used to develop complexity metrics to measure the cognitive load of processing syntactic expressions [16,24,17]. In the line of work on language production, a different term information density with the same mathematical formulation is used instead of entropy. It is found that speakers reduce syntactic complexity when the information density (or entropy) is high [25,20]. In parallel with the concept of ERC, this line of work summarizes the tendency of distributing information evenly in human language with the term uniform information density (UID), which is commonly used as a equivalent term as ERC, for example, in [48,15]. In conclusion, entropy is commonly used as a metric for essential properties of human language.\nPeriodical change of cross-entropy in language. We draw inspiration from the following studies about the distribution of information in dialogue. Humans are sensitive to the peaks and troughs of entropy in speech, with evidence from human-system dialogues and crowd-sourced ratings from human judges [7]. The entropy of utterances from two speakers converge towards each other within the scope of topical segments in spontaneous dialogues [48]. They measure the entropy of utterances from two participants of a task-oriented dialogue, and have found that the frequency domain features -power spectrum overlap and phase delay -are useful predictors of task outcomes. Both works reviewed above suggest that the periodical up-and-downs of entropy are commonly observable in the human language. It naturally leads to the question of whether and to what extent model-generated language aligns with this empirical finding.\nAutomatic measures for text generation. Entropy and its related variants have already used as a metric for evaluating generated text, for instance, entropy provides good visualization for the difference between GPT2-generated text and human written ones [12]. Other than entropy, there is a rich body of existing metrics targeted on discriminating human-written text and modelgenerated text, which we summarize in three branches: (1) statistics-based; (2) language modeling;\n(3) reference-based. Table 1 gives a brief summary of these three categories, as well as our proposed frequency-based FACE.\nStatistics-based measures compare the model-generated distribution M with respect to the humanwritten distribution H in terms of some statistic. The Zipf coefficient [30] is used in [19] to describe the distribution of word frequencies in text. Self-BLEU [52] is derived by calculating the BLEU [29] score for each generated text utilizing all other generations as references. Repetition measures the sequence-level degree of repetition on the basis of the percentage of duplicated n-grams in the generated continuations x cont ∼ M [44]. Meanwhile, we aggregate the 2-gram, 3-gram, and 4-gram repetition rates to evaluate the lexical diversity in an inverse manner. Human Judgment Bradley-Terry Score Human preference via the pairwise evaluation Language modeling metrics measure how un(certain) human text x ∼ H follows the model distribution M , using the probability distribution M (x). In our work, the perplexity is calculated upon the set of human texts to quantify how well the distribution M predicts a text continuation. Coherence is approximated by cosine similarity between the sentence embeddings of prompt x pre ∼ H and continuation x cont ∼ M as proposed in [42], where the embedding EMB(•) is produced by the pre-trained SimCSE sentence embedding [11]. Metrics under this category never observe model-generated text samples, and hence, they cannot justify how likely x cont is under the human distribution H.\nP (i beats j) = 1 1+e -(β i -β j )/100\nReference-based measures assess the generated text with respect to a small set of reference text, rather than calculating over the full sequence distributions. Some recent reference-based approaches encompass: (1) [4,36,38,50] aim to capture distributional semantic information in high-dimensional space; (2) [51] concerns Euclidean distance between vector representations of n-grams and their document frequencies; (3) [31] straightforwardly computes the similarity of one learned distribution from a text generation and the other distribution of human-written text using information divergence frontiers [9,23,34]. Reference-based metrics are well-suited for targeted generation tasks (e.g., machine translation). Nevertheless, they become unfavorable in the open-ended generation scenario where multiple reasonable and diverse continuations are preferred.\nNon-automatic metrics. Recent works [12,31,19,26] on evaluation metrics and decoding strategies for natural language generation rely on human judgments, assuming that human annotations are the gold standard. Considering the expense of Human Unified with Statistical Evaluation (HUSE) [18], we adopt a pairwise evaluation protocol based on human preferences, to serve as a non-automatic complement of FACE metrics. We leverage the Bradley-Terry model [3] to predict the outcome of a head-to-head comparison given n players with scores\nβ 1 , • • • , β n ." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "Task formulation. Given an input text passage as prefix, the open-ended generation aims to produce texts that form a fluent and coherent continuation. More formally, given a sequence of m tokens denoted [x 1 . . . x m ], as the prompt, the goal is to generate the next n continuation tokens to form a complete sequence [x 1 . . . x m+n ]. The continuation probability at the decoding time by conditioning on the preceding context is defined as: P (x m+1 . . .\nx m+n | x 1 . . . x m ) = m+n i=m+1 P (x i | x 1 . . . x i-1 ) , where P (x i | x 1 . . . x i-1\n) is the next-token distribution." }, { "figure_ref": [ "fig_3" ], "heading": "Model sizes", "publication_ref": [ "b34", "b50", "b36", "b9", "b27", "b0", "b18", "b26" ], "table_ref": [ "tab_2", "tab_3", "tab_3" ], "text": "We consider such a text completion task in three domains: Wiki text, News, and Stories. Intuitively, the generated texts involving different domain knowledge may have different language usages and writing style, which may reflect on metrics. We generate completions from large-scale language models (LMs). In particular, we adopt three representatives of state-of-the-art pre-trained autoregressive LMs: Generative Pre-trained Transformer 2 (GPT2) [33], Open Pre-trained Transformer LMs (OPT) [49], and BigScience Large Open-science Open-access Multilingual LM (BLOOM) [35]. We explore two sizes for each model to illustrate that our FACE metrics generalize across multiple LM families and sizes. Details regarding our task and input data are summarized in Table 2: Dataset and task summary. In our research, we set the maximum generation length to 1024 for all models on three datasets. Note that the WritingPrompts dataset [10] contains ready-to-use prompts, so the length of prompts varies. For WikiText-103 [27] and RealNews [1] datasets, we cleaned them before extracting the texts corresponding to the first 35 tokens (tokenized by GPT2Tokenizer) to form our prompt sets. We applied the majority voting to determine the winner. OPT and BLOOM are postfixed with their number of parameters.\nDifferent models may generate vastly different numbers of continuations in each length interval (see Supplementary Material). To ensure the fairness of investigating the correlation between FACE and other widely-used metrics with respect to different models (with different sizes), we compute the weighted arithmetic mean for every metric across five length intervals.\nThe evaluation metrics we are interested in are based on various motivations and principles. Specifically, MAUVE and FACE emphasize the parallels between human and machine-produced texts, as stated in Section 3. Therefore, we group MAUVE together with four FACE metrics. To further obtain intuitive results, we utilize the voting approach to explore the correlations between these metrics on large/small models across three task domains. The results are shown in Table 3.\nIn our investigations, the GPT2-xl model consistently outperforms its small counterpart among statistics-based and language modeling metrics as all relevant \"vs.\" columns indicate, apart from the Coherence in the Stories domain. In the GPT2 experimental group, it is astonishing that the small model always performs better when referring to the voting results from MAUVE and FACE rows. Across three task domains, the performances of OPT and BLOOM models in two sizes differ. Large models have better overall performance, and small models only win four out of twelve comparisons by voting. Nonetheless, it is noteworthy that four FACE metrics we proposed maintain a relatively high level of consistency with MAUVE across all models. At least two FACE metrics yield the same results (in eight out of nine sets of human-model language comparisons) with MAUVE. Concretely Figure 4: FACE scores (conditional generation) on original experimental data of [19] and [26]. Nine sampling methods are compared: greedy, beam search, stochastic beam search, pure sampling, temperature, top-k, top-k with temperature, nucleus, and contrastive. Note that logarithmic normalization on parameter values as well as enlarged markers for greedy decoding, pure sampling, and contrastive decoding are adopted for better visualization effect. Best viewed when zoomed in.\nspeaking, SO and SAM show a higher positive correlation to MAUVE than CORR and SPEAR, given that seven out of nine voting results (marked with yellow in Table 3) are identical.\nTo further evaluate model sizes, we apply FACE to the original GPT2 output data (webtext) 4generated from GPT2-sm and GPT2-xl. GPT2-xl has a higher SO score than GPT2-sm, which is confirmed with the t-test, but non-significant effects are found on the other three metrics. Combining our generation task with the original GPT2 data, we illustrate the results for SO in Figure 3.\nTo conclude, we discover three keypoints: (1) FACE is consistent with MAUVE in evaluating three different model types (two sizes for each); (2) the metrics estimating similarity between human-written and model-generated text generations (e.g., FACE, MAUVE) may produce opposite results to the text-centered metrics (e.g., Diversity, Coherence); (3) the four metrics of FACE show relatively homogeneous results, and using these metrics together helps to identify model-generated texts with a more comprehensive evaluation." }, { "figure_ref": [], "heading": "Sampling methods", "publication_ref": [ "b18", "b26", "b26" ], "table_ref": [ "tab_5" ], "text": "Recent work [19,26] has indicated three clear trends in open-ended text generation using autoregressive LMs: (1) maximization-based decoding algorithms (e.g., beam search, greedy decoding, etc.) lead to copious repetition, while sampling with temperature may result in incoherence;\n(2) truncation-based sampling methods like nucleus sampling produce text with higher quality;\n(3) contrastive decoding outperform nucleus sampling in terms of both fluency and coherence. Accordingly, to demonstrate the effectiveness of our approach, FACE should follow the inequality: maximization-based/temperature-based ≺ nucleus ≺ contrastive in terms of the quality relationship.\nFigure 4 visualizes the correlation between FACE scores and various decoding algorithms. The contrastive decoding approach yields the best performance among the four FACE metrics. It can be clearly observed that the maximization-based sampling methods behave worse than other algorithms. Moreover, adding the temperature parameter to top-k sampling results in incoherent text generations, which explains the gap between the red curve (top-k w/o temperature) and the gray curve (top-k w/ temperature). We also plot the correlation graphs of unconditional generation (in the Supplementary Material) with fewer sampling methods involved. The trends and patterns in the visualization of unconditional generation are basically consistent with its conditional counterpart.\nIn Table 4, FACE scores on different decoding algorithms are summarized. FACE metrics correctly match the expected quality relationship of the sampling methods examined by assigning the best SO (.44), CORR (.75), SAM (.23), and SPEAR (.17) scores to contrastive decoding. Other evaluation metrics fail to capture the correct relationship, for example, the perplexity rates nucleus-sampled text as better than contrastive-decoded text, which is irrational suggested by Li et al. [26]." }, { "figure_ref": [], "heading": "Human judgments", "publication_ref": [ "b31", "b31" ], "table_ref": [ "tab_6", "tab_6" ], "text": "We also explore the correlation between FACE and human judgement scores, using the crowd-source dataset collected in [31] when human evaluation is available. The dataset contains model-generated continuations (by GPT2-sm, -md, -lg, and -xl with ancestral and nucleus sampling), human-written continuations using the same prefix, and the crowd-source workers' answers on which completion is more human-like, interesting, and sensible. We follow the same experimental settings and protocol to verify whether the FACE scores of the text completions correlate well with the human quality judgements by computing the Spearman's rank correlation coefficient. The results are presented in Table 5.\nWe observe a high and positive correlation between FACE-SO and human judgments scores, which outperforms five out of the six evaluation metrics reported in [31] and achieves a comparative performance against MAUVE. The remaining three FACE metrics have insignificant correlations. However, we consider human judgments to be subjective and sometimes biased. Including more fine-grained questions to perform human judgments may lead to more accurate correlation statistics. Additionally, we recomputed the correlations with human judgement scores to keep those pairs in which there are exactly one item from human and the other item from model (i.e., a subset of data used for the analysis in Table 5). As shown in SO-S and MAUVE-S columns, FACE-SO has a stronger correlation than MAUVE among two of these three dimensions." }, { "figure_ref": [], "heading": "Sanity tests", "publication_ref": [ "b26", "b31", "b20", "b7" ], "table_ref": [ "tab_8", "tab_8" ], "text": "Sanity test on validity. We evaluate the validity of FACE by examining whether its scores on human-human split is lower than human-model groups -an expected result based on our assumption that the spectral difference between human's \"natural\" language and models' \"artificial\" ones should be amplified by FACE. Therefore, the sanity tests are conducted as follows: first, we evenly and randomly split the human data into two folds (across three domains) to serve as control groups.\nThe FACE scores between these control folds are then computed. As for the human-to-model experimental group, we create other two folds using human data and the best model-generated data (from contrastive decoding [26]) in terms of text quality. Theoretically, if FACE can effectively capture the fundamental difference between human and model languages, then we are expected to Higher scores mean better correlation. All the numbers except the SO, SO-S, and MAUVE-S columns are sourced from [31]. \"BT\" denotes the Bradley-Terry score of the pairwise human evaluation, which is employed to compute the Spearman's rank correlation with the scores of other metrics. Additionally, it is important to note that the original human judgments encompass certain pairs in which both texts are generated by models, albeit different models. Therefore, we refine the original human judgment dataset to only include judgments involving both human and model-generated languages, and the results are shown in SO-S and MAUVE-S.\nobserve higher scores in control groups than in the experimental group. The results are shown in Table 6. It can be seen from Table 6 that the control groups show significantly better FACE scores than the experimental group: FACE-SO and FACE-CORR are higher in human-to-human folds, while FACE-SAM scores are lower. The only exception is FACE-SPEAR, while we will show it is a good metric in later sections. Nonetheless, these tabulated results have proved the validity of FACE in effectively capturing human-to-model spectral differences.\nSO (↑) CORR (↑) SAM (↓) SPEAR (↑) h-h (\nChoice of estimator model. We examine how different choices of the estimator model m est affect the resulting spectra, using GPT2-sm, -md, -lg and -xl as m est , respectively. The spectra of webtext and the original GPT2 output data are computed. It is found that the spectra obtained from m est have different magnitudes, but their aggregated curves have the same shape (see Supplementary Material). Therefore, the choice of m est will not affect FACE scores as long as the same m est is used for all data.\nStationarity tests. One of the assumptions of the Fourier transform is that the signal is stationary [21], that is, the mean and variance do not change over time. We applied the Augmented Dickey-Fuller (ADF) test [8] to examine the stationarity of the cross-entropy sequences for all the human and model-generated data used in this study. The null hypothesis H 0 of the ADF test is non-stationarity, and thus a p < .05 testing result rejects H 0 and accepts the alternative hypothesis of stationarity in the series. We calculate the proportions of cross-entropy sequences that pass the ADF test with p < .05 for all model-generated and human data: 97.4% for GPT2, 92.1% for OPT, 74.5% for BLOOM, and 97.9% for human. Therefore, the vast majority meets the stationarity requirement for the Fourier transform." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4", "fig_4" ], "heading": "Interpretation of spectra", "publication_ref": [ "b46" ], "table_ref": [], "text": "As the frequency spectrum reflects the key characteristics of a signal, we attempt to interpret the spectra to see if they tell how the \"signals\" -entropy of human and machine languages -differ. Without aggregation, the raw spectra of single cross-entropy sequence look indistinguishable between GPT2-sm, GPT2-xl, and human (see the left plot in Figure 5). By aggregating 5,000 spectra from each group and smoothing the curves, it can be seen that GPT2-xl's curve is closer to human than the GPT2-sm curve (readers can find this by zooming in the middle plot in Figure 5). Here, the smoothing is done with generalized additive models (GAMs) [45]. Results from other models are included in the Supplementary Material.\nWhen plotted separately, the aggregated spectra from human and different models have similar shapes: First, the majority of components exist in the low-frequency range (ω < 0.05). In addition, the locations of peaks and troughs are almost the same between groups. For instance, ω 1 = 0.06 is the first trough, and ω 2 = 0.12 is the first peak (see the right plots in Figure 5). Thus, roughly speaking, the main difference between human and model spectra is not in the locations of peak and trough frequencies but in the relative magnitudes of those frequencies.\nWe propose a simple way to interpret the peaks in spectra: the reciprocal of a frequency component T k = 1/ω k denotes the corresponding cycle in the time domain. Because the time interval (i.e., sampling interval) of an entropy sequence is not measured in seconds but fixed as one token, the measurement unit of T k is also in number of tokens. For example, the first frequency peak in Figure 5 (right plot) implies ω 2 = 0.12 ⇒ T 2 = 1/0.12 ≈ 8.3 (tokens), which approximately means that tokens of the same cross-entropy levels tend to recur every 8.3 tokens. This pattern is consistent in both human and model data. However, the degree of this recurrence can mark the difference between the human and model languages. We leave more detailed interpretations of spectra to future work." }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "We propose FACE, a set of metrics based on the Fourier analysis of cross-entropy, which is able to distinguish human and model-generated language with satisfactory performance in the open-ended generation task. The metrics scale with model sizes; reflect the effect of various sampling methods; correlate well with other existing metrics and outperform most of them in alignment with human judgement scores. Among the four implementation methods of FACE experimented, Spectral Overlap (SO) has the best overall performance.\nFACE is computationally efficient with easy-to-interpret output. As a method inspired by psycholinguistic studies on the predictability (entropy/surprisal/information density) of human language, we believe FACE is a good example of incorporating knowledge from different fields for better human-centered AIs. We can generally conclude that better language models can produce spectral representations of information that are more similar to human.\nOur current work has several limitations: Firstly, for open-ended generation experiments (Section 4.1), a broader set of sampling methods other than top-k can be used. Secondly, larger models (with more than 100 billion parameters) need to be included for more comprehensive comparisons. We will improve from these aspects in future work. Five sampling (decoding) methods are compared: pure sampling, temperature, top-k, top-k with temperature, and nucleus. Note that logarithmic normalization on parameter values as well as an enlarged marker for pure sampling are adopted for better visualization. " }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Human judgments", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "Table 8 shows the FACE scores based on the output texts from MAUVE. Each column of FACE scores is used to compute the Spearman's rank correlation coefficient between a specific FACE metric and Bradley-Terry scores (4 model sizes × 2 sampling methods = 8 scores in total) from one criterion (three criteria correspond to three questions in total)." }, { "figure_ref": [], "heading": "Choice of estimator model", "publication_ref": [], "table_ref": [], "text": "We examine how different choices of estimator model m est affect the resulting spectra of cross-entropy. Five input data sources are examined (webtext plus four GPT2 original output datasets), on which four different estimator models are applied: m est ∈ {GPT2-sm, GPT2-md, GPT2-lg, GPT2-xl}, resulting in 5 × 4 = 20 aggregated spectra curves in Figure 8. It can be found that on the same input data, the spectra from four estimators largely overlap. It indirectly suggests that FACE should be stable across different m est s. We leave the full inspection for future work." }, { "figure_ref": [], "heading": "Intuitive interpretation of spectra", "publication_ref": [], "table_ref": [], "text": "As pointed out in Section 4.5, the aggregated spectral shapes from human and different models are nearly identical. A set of higher resolution plots from GPT-xl, OPT, BLOOM and human (webtext) are shown in Figure 9. It can be seen that although the X(ω k ) has different ranges on y-axis, the x coordinates of the peaks and troughs are the same." }, { "figure_ref": [], "heading": "Corner Cases", "publication_ref": [], "table_ref": [], "text": "Two examples highlight the difference between our proposed FACE-SO and MAUVE in their ability to recognize human-generated and model-generated texts. In the filtered datasets for human judgments, the average values for FACE-SO and MAUVE are 0.4738 and 0.9549, respectively. In Case 1, human evaluators noted a high level of similarity between the model-generated text and human text, resulting " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This material is based upon work supported by the National Science Foundation under Grant No. (2105192)." }, { "figure_ref": [], "heading": "Supplementary Material 1 Broader Impacts", "publication_ref": [], "table_ref": [], "text": "FACE measures the distance between human and model-generated languages, therefore it is technically possible to be used for designing or augmenting systems that mimic humans. We acknowledge the risks of FACE (and other metrics) being utilized in applications that deliberately confuse humanauthored and model-produced text. We call for the collective efforts from the community to come up with a systematic framework that unifies different metrics, for developing more reliable and natural language generation systems." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "Preprocessing. We utilize three raw datasets: WritingPrompts, WikiText-103, and RealNews. For WritingPrompts, the prompt set has already been well-curated, so we just extracted the first 5,000 prompts (the length may vary) for our generation task. WikiText-103 and RealNews contain many complete texts. For each complete text, we further truncate it corresponding to the first 35 tokens as a prompt. To fairly evaluate the performance of metrics, we also divide text generations according to five predefined length (from 0 up to 1024) intervals for each dataset. Thereby, the human-written texts and model-produced texts used to evaluate the performance of metrics may be generated by different prompts (i.e., unpaired comparison).\nHyper-parameters. We have several hyper-parameters during the text generation and evaluation phases. For both conditional and unconditional generation, we preset a random seed integer (32 by default). Furthermore, the maximum length of each text (1024 by default) as well as the batch size (which varies according to GPUs capacity) for perplexity computation have to be determined before automatic evaluation. " }, { "figure_ref": [], "heading": "Hardware.", "publication_ref": [], "table_ref": [], "text": "For the text generation task, we use the remote workstation that has two NVIDIA RTX A6000 graphics cards. It should be noted that all models were run in parallel when available.\nComputation time for text generation. We spent 10 and 25 hours or so obtaining 5,000 text continuations by GPT2-sm, -xl, respectively. OPT-125m, -6.7b cost our GPU resources roughly 11 and 44 hours to output the same number of text continuations, respectively. When it comes to BLOOM-560m, -7b, they took approximately 18 and 48 hours, respectively, to generate 5,000 continuations per task domain.\nEvaluation time for FACE. Computation time of four FACE metrics for a single pair of references are: 5.96 × 10 -8 seconds for SO, 5.01 × 10 -8 seconds for CORR, 4.53 × 10 -8 seconds for SAM, and 4.29 × 10 -8 seconds for SPEAR, respectively. The cross-entropy, which should be calculated beforehand, takes 5.65 × 10 -2 seconds. All of the above measurements take place on an AMD Ryzen Threadripper PRO 3995WX 64-Cores CPU (frequency range ∈ [2200.00MHz, 4308.40MHz]). Users can leverage more advanced GPU resources to perform the whole computation process with a faster speed." }, { "figure_ref": [], "heading": "Additional Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model sizes (generation length)", "publication_ref": [], "table_ref": [], "text": "It should be emphasized that LMs have diverse designs and were pre-trained using different strategies on different datasets, giving them distinct preferences on the generation length. The numbers of text generations in each length interval are summarized in Table 7. To ensure the consistency of our experiments, we run six LMs separately (using their own tokenizers) with the same prompt sets and settings as described in Table 2 to generate 5,000 pieces of continuations in each domain. Besides, we utilize the GPT2Tokenizer to calculate the numbers of continuations for each interval, which allows us to compare FACE scores with other metrics more objectively, as we believe it is unfair to explicitly compare texts of varying lengths. Then, we compute weighted arithmetic mean to evaluate a model in each domain, by s ′ = n i=1 mi M s i , where s ′ denotes the weighted mean; n denotes the number of length intervals; m i is the number of generated continuations in the length interval i; M = n i=1 m i , and s i means a certain metric value in the interval i. Figure 6 conveys a more intuitive representation (via bar plots) of Table 3." }, { "figure_ref": [], "heading": "Sampling methods (unconditional generation)", "publication_ref": [], "table_ref": [], "text": "We also carried out experiments on unconditional text generation. Here, the prompt is not required as we generate continuations from a random seed (set to 32 empirically). Four sampling methods, which are greedy decoding, beam search, stochastic beam search, and contrastive decoding, are not involved in this set of experiments.\nThe results are displayed in Figure 7. The overall trends are same as its conditional counterpart, where the previous quality relationship (maximization-based/temperature-based ≺ nucleus ≺ contrastive) is satisfied. Yet, it is crucial to note that the advantages of top-k sampling w/o temperature become more obvious compared to the conditional case. in ties for human-like, interesting, and sensible aspects. However, the MAUVE score in Case 1 is lower than the average value, while the FACE-SO score surpasses its mean. This discrepancy suggests that SO aligns more consistently with human opinions. Conversely, in Case 2, human judgement indicates a significant dissimilarity between the model-generated text and human text, making them easily distinguishable. However, the MAUVE score exceeds its mean, suggesting the two texts are similar to each other. Our FACE-SO score is lower than its mean, indicating better alignment with human opinion." } ]
Measuring the distance between machine-produced and human language is a critical open problem. Inspired by empirical findings from psycholinguistics on the periodicity of entropy in language, we propose FACE, a set of metrics based on Fourier Analysis of the estimated Cross-Entropy of language, for measuring the similarity between model-generated and human-written languages. Based on an open-ended generation task and the experimental data from previous studies, we find that FACE can effectively identify the human-model gap, scales with model size, reflects the outcomes of different sampling methods for decoding, correlates well with other evaluation metrics and with human judgment scores.
FACE: Evaluating Natural Language Generation with Fourier Analysis of Cross-Entropy
[ { "figure_caption": "Figure 1 :1Figure 1: Overall workflow of this study.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Definitions of four FACE metrics.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: FACE-SO scores on OPT, BLOOM and GPT2 original output data. Model sizes compared: small vs. large for OPT and BLOOM; -sm vs. -xl for GPT2. Error bars represent 95% confidence intervals from bootstrap. The significant levels are based on t-test between the two model-size groups.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Intuitive observations on the spectra from GPT2 and human data (webtext). Left: Spectra of three randomly sampled entropy sequences from GPT2-sm, GPT2-xl, and webtext. Middle: Smoothed plot of 5,000 aggregated spectra with absolute values, |X ω k | ∼ ω k . Right: Typical smoothed plot of raw spectra X ω k ∼ ω k , with peaks and troughs annotated.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: FACE scores (unconditional generation) on original experimental data 5 of nucleus sampling.Five sampling (decoding) methods are compared: pure sampling, temperature, top-k, top-k with temperature, and nucleus. Note that logarithmic normalization on parameter values as well as an enlarged marker for pure sampling are adopted for better visualization.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Example of two corner cases. For each case, the prompt text, model-generated text, human text, MAUVE and FACE-SO scores, as well as the results from human judgments are tabulated.", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Summary of metrics (automatic & non-automatic) we employed for evaluating open-ended text generation. FACE provides a way to approximate the human-model gap in the frequency domain.", "figure_data": "", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "DomainModelDatasetPrompt Length Maximum Generation Length Number of GenerationsWiki text GPT2/OPT/BLOOM WikiText-10335 tokens1024 tokens5000NewsGPT2/OPT/BLOOM RealNews35 tokens1024 tokens5000StoriesGPT2/OPT/BLOOM WritingPromptsvarying1024 tokens5000", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "DomainMetricGPT2-sm GPT2-xl vs. Voting OPT-125m OPT-6.7b vs. Voting BLOOM-560m BLOOM-7.1b vs. VotingDiversity (↑)0.7330.753L0.6450.789L0.5330.732LCoherence (↑) Zipf Coefficient (↓)0.595 0.9900.624 0.975L LL0.614 0.9890.634 1.016L SL0.926 1.0920.819 0.980S LESelf-BLEU (↓)0.4590.424L0.4230.379L0.2800.422SWiki textMAUVE (↑)0.6770.186S0.1690.265L0.5170.184SSO (↑) CORR (↑) SAM (↓)0.414 0.806 0.1990.406 0.781 0.213S S SS0.424 0.771 0.2160.436 0.769 0.217L S SL0.426 0.675 0.2580.432 0.789 0.208L L LLSPEAR (↑)0.0220.023L0.0260.029L0.0590.023SDiversity (↑)0.8900.897L0.8530.876L0.7400.870LCoherence (↑) Zipf Coefficient (↓)0.613 0.9610.640 0.958L LL0.663 0.9650.663 0.968S LL0.897 0.9640.785 0.966S SSSelf-BLEU (↓)0.6190.573L0.6110.543L0.3840.501SNewsMAUVE (↑)0.3930.281S0.1620.130S0.0140.095LSO (↑) CORR (↑) SAM (↓)0.424 0.757 0.2240.412 0.723 0.240S S SS0.438 0.746 0.2290.440 0.732 0.236L S SS0.436 0.615 0.2810.437 0.733 0.234L S LLSPEAR (↑)0.0210.019S0.0170.021L0.0480.019SDiversity (↑)0.7430.785L0.7690.875L0.5270.830LCoherence (↑) Zipf Coefficient (↓)0.421 1.0970.420 1.085S LL0.440 1.0210.388 1.003S LL0.880 0.9990.660 1.058S SSSelf-BLEU (↓)0.6170.565L0.5870.511L0.1800.455SStoriesMAUVE (↑)0.5040.121S0.0250.013S0.0060.008LSO (↑) CORR (↑) SAM (↓)0.411 0.813 0.1950.402 0.787 0.209S S SS0.406 0.737 0.2310.405 0.705 0.245S S SS0.350 0.573 0.3000.418 0.772 0.214L L LLSPEAR (↑)0.0230.022S0.0360.041L0.0500.027S", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results for comparing all sampling methods with selected parameters regarding the conditional generation. The values closest to human scores are bolded, except for our proposed FACE scores, where the highest (for SO, CORR, and SPEAR) or the lowest (for SAM) values are in bold.", "figure_data": "", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Spearman's rank correlation coefficients of SO and five other metrics with human judgments.", "figure_data": "MetricGeneration Perplexity Zipf Coefficient Repetition Distinct-4 Self-BLEUSOMAUVESO-S MAUVE-SHuman-like/BT0.8100.833-0.1670.7380.5950.8810.9520.3570.214Interesting/BT0.6430.524-0.1430.5240.4050.7620.8100.5240.667Sensible/BT0.7380.690-0.0710.5950.5240.7860.8570.9950.706", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results of sanity test on FACE's validity. The top three rows are the control groups, and \"h-h\" stands for human-to-human folds. The last row is the experimental group, where \"h-m\" is for human-to-model fold. Better FACE scores are in bold. The scores in the bottom row are retrieved from", "figure_data": "", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_9", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "FACE results based on MAUVE's original experimental data6 .", "figure_data": "Sampling Method (parameter)SOCORR SAM SPEARGPT2-xlNucleus Sampling (p=0.95) Ancestral Sampling0.481 0.821 0.191 0.472 0.807 0.1990.359 0.331GPT2-lgNucleus Sampling (p=0.95) Ancestral Sampling0.480 0.819 0.193 0.472 0.814 0.1960.356 0.338GPT2-mdNucleus Sampling (p=0.9) Ancestral Sampling0.478 0.815 0.194 0.462 0.813 0.1970.358 0.310GPT2-smNucleus Sampling (p=0.9) Ancestral Sampling0.476 0.817 0.194 0.468 0.816 0.1950.359 0.319", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" } ]
Zuhao Yang; Yingfang Yuan; Yang Xu; Shuo Zhan; Huajun Bai; Kefan Chen
[ { "authors": "H Ahmed; I Traore; S Saad", "journal": "Springer", "ref_id": "b0", "title": "Detection of online fake news using n-gram analysis and machine learning techniques", "year": "2017" }, { "authors": "J Boardman", "journal": "", "ref_id": "b1", "title": "Sips user's guide spectral image processing system", "year": "1992" }, { "authors": "R A Bradley; M E Terry", "journal": "Biometrika", "ref_id": "b2", "title": "Rank analysis of incomplete block designs: I. the method of paired comparisons", "year": "1952" }, { "authors": "E Clark; A Celikyilmaz; N A Smith", "journal": "", "ref_id": "b3", "title": "Sentence mover's similarity: Automatic evaluation for multi-sentence texts", "year": "2019-07" }, { "authors": "J W Cooley; J W Tukey", "journal": "Mathematics of computation", "ref_id": "b4", "title": "An algorithm for the machine calculation of complex fourier series", "year": "1965" }, { "authors": "O A De Carvalho; P R Meneses", "journal": "JPL publication", "ref_id": "b5", "title": "Spectral correlation mapper (scm): an improvement on the spectral angle mapper (sam)", "year": "2000" }, { "authors": "N Dethlefs; H Hastie; H Cuayáhuitl; Y Yu; V Rieser; O Lemon", "journal": "Computer speech & language", "ref_id": "b6", "title": "Information density and overlap in spoken dialogue", "year": "2016" }, { "authors": "D A Dickey; W A Fuller", "journal": "Journal of the American statistical association", "ref_id": "b7", "title": "Distribution of the estimators for autoregressive time series with a unit root", "year": "1979" }, { "authors": "J Djolonga; M Lucic; M Cuturi; O Bachem; O Bousquet; S Gelly", "journal": "", "ref_id": "b8", "title": "Precision-recall curves using information divergence frontiers", "year": "2019" }, { "authors": "A Fan; M Lewis; Y Dauphin", "journal": "", "ref_id": "b9", "title": "Hierarchical neural story generation", "year": "2018-07" }, { "authors": "T Gao; X Yao; D Chen", "journal": "", "ref_id": "b10", "title": "SimCSE: Simple contrastive learning of sentence embeddings", "year": "2021-11" }, { "authors": "S Gehrmann; H Strobelt; A Rush", "journal": "", "ref_id": "b11", "title": "GLTR: Statistical detection and visualization of generated text", "year": "2019-07" }, { "authors": "D Genzel; E Charniak", "journal": "", "ref_id": "b12", "title": "Entropy rate constancy in text", "year": "2002" }, { "authors": "D Genzel; E Charniak", "journal": "", "ref_id": "b13", "title": "Variation of entropy and parse trees of sentences as a function of the sentence number", "year": "2003" }, { "authors": "M Giulianelli; A Sinclair; R Fernández", "journal": "", "ref_id": "b14", "title": "Is information density uniform in task-oriented dialogues", "year": "2021" }, { "authors": "J Hale", "journal": "", "ref_id": "b15", "title": "A probabilistic earley parser as a psycholinguistic model", "year": "2001" }, { "authors": "J Hale", "journal": "Language and Linguistics Compass", "ref_id": "b16", "title": "Information-theoretical complexity metrics", "year": "2016" }, { "authors": "T B Hashimoto; H Zhang; P Liang", "journal": "", "ref_id": "b17", "title": "Unifying human and statistical evaluation for natural language generation", "year": "2019-06" }, { "authors": "A Holtzman; J Buys; L Du; M Forbes; Y Choi", "journal": "", "ref_id": "b18", "title": "The curious case of neural text degeneration", "year": "2020" }, { "authors": "T F Jaeger", "journal": "Cognitive psychology", "ref_id": "b19", "title": "Redundancy and reduction: Speakers manage syntactic information density", "year": "2010" }, { "authors": "I Kaplan", "journal": "", "ref_id": "b20", "title": "Dft of a non-stationary time series", "year": "2001-09" }, { "authors": "F A Kruse; A Lefkoff; J Boardman; K Heidebrecht; A Shapiro; P Barloon; A Goetz", "journal": "Remote sensing of environment", "ref_id": "b21", "title": "The spectral image processing system (sips)-interactive visualization and analysis of imaging spectrometer data", "year": "1993" }, { "authors": "T Kynkäänniemi; T Karras; S Laine; J Lehtinen; T Aila", "journal": "", "ref_id": "b22", "title": "Improved precision and recall metric for assessing generative models", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b23", "title": "", "year": "2019" }, { "authors": "R Levy", "journal": "Cognition", "ref_id": "b24", "title": "Expectation-based syntactic comprehension", "year": "2008" }, { "authors": "R Levy; T Jaeger", "journal": "", "ref_id": "b25", "title": "Speakers optimize information density through syntactic reduction", "year": "2007" }, { "authors": "X L Li; A Holtzman; D Fried; P Liang; J Eisner; T Hashimoto; L Zettlemoyer; M Lewis", "journal": "", "ref_id": "b26", "title": "Contrastive decoding: Open-ended text generation as optimization", "year": "2022" }, { "authors": "S Merity; C Xiong; J Bradbury; R Socher", "journal": "", "ref_id": "b27", "title": "Pointer sentinel mixture models", "year": "2017" }, { "authors": "O Oullier; G C De Guzman; K J Jantzen; J Lagarde; J Scott Kelso", "journal": "Social neuroscience", "ref_id": "b28", "title": "Social coordination dynamics: Measuring human bonding", "year": "2008" }, { "authors": "K Papineni; S Roukos; T Ward; W.-J Zhu", "journal": "", "ref_id": "b29", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002-07" }, { "authors": "S Piantadosi", "journal": "Psychonomic bulletin & review", "ref_id": "b30", "title": "Zipf's word frequency law in natural language: A critical review and future directions", "year": "2014" }, { "authors": "K Pillutla; S Swayamdipta; R Zellers; J Thickstun; S Welleck; Y Choi; Z Harchaoui", "journal": "", "ref_id": "b31", "title": "Mauve: Measuring the gap between neural text and human text using divergence frontiers", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b32", "title": "", "year": "2021" }, { "authors": "T Qian; T F Jaeger", "journal": "", "ref_id": "b33", "title": "Topic shift in efficient discourse production", "year": "2011" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "", "ref_id": "b34", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "M S M Sajjadi; O Bachem; M Lucic; O Bousquet; S Gelly", "journal": "", "ref_id": "b35", "title": "Assessing generative models via precision and recall", "year": "2018" }, { "authors": "T L Scao; A Fan; C Akiki; E Pavlick; S Ilić; D Hesslow; R Castagné; A S Luccioni; F Yvon; M Gallé", "journal": "", "ref_id": "b36", "title": "A 176b-parameter open-access multilingual language model", "year": "2022" }, { "authors": "T Sellam; D Das; A Parikh", "journal": "", "ref_id": "b37", "title": "BLEURT: Learning robust metrics for text generation", "year": "2020-07" }, { "authors": "C E Shannon", "journal": "The Bell System Technical Journal", "ref_id": "b38", "title": "Communication theory of secrecy systems", "year": "1949" }, { "authors": "H Shimanaka; T Kajiwara; M Komachi", "journal": "", "ref_id": "b39", "title": "RUSE: Regressor using sentence embeddings for automatic machine translation evaluation", "year": "2018-10" }, { "authors": "J O Smith", "journal": "", "ref_id": "b40", "title": "Spectral Audio Signal Processing", "year": "2011" }, { "authors": "S Soule", "journal": "Information and Control", "ref_id": "b41", "title": "Entropies of probabilistic grammars", "year": "1974" }, { "authors": "C Spearman", "journal": "The American Journal of Psychology", "ref_id": "b42", "title": "The proof and measurement of association between two things", "year": "1961" }, { "authors": "Y Su; T Lan; Y Wang; D Yogatama; L Kong; N Collier", "journal": "", "ref_id": "b43", "title": "A contrastive framework for neural text generation", "year": "2022" }, { "authors": "P Welch", "journal": "IEEE Transactions on audio and electroacoustics", "ref_id": "b44", "title": "The use of fast fourier transform for the estimation of power spectra: a method based on time averaging over short, modified periodograms", "year": "1967" }, { "authors": "S Welleck; I Kulikov; S Roller; E Dinan; K Cho; J Weston", "journal": "", "ref_id": "b45", "title": "Neural text generation with unlikelihood training", "year": "2020" }, { "authors": "S N Wood", "journal": "CRC press", "ref_id": "b46", "title": "Generalized additive models: an introduction with R", "year": "2017" }, { "authors": "Y Xu; D Reitter", "journal": "", "ref_id": "b47", "title": "Entropy converges between dialogue participants: Explanations from an information-theoretic perspective", "year": "2016" }, { "authors": "Y Xu; D Reitter", "journal": "", "ref_id": "b48", "title": "Spectral analysis of information density in dialogue predicts collaborative task performance", "year": "2017" }, { "authors": "Y Xu; D Reitter", "journal": "Cognition", "ref_id": "b49", "title": "Information density converges in dialogue: Towards an informationtheoretic model", "year": "2018" }, { "authors": "S Zhang; S Roller; N Goyal; M Artetxe; M Chen; S Chen; C Dewan; M Diab; X Li; X V Lin; T Mihaylov; M Ott; S Shleifer; K Shuster; D Simig; P S Koura; A Sridhar; T Wang; L Zettlemoyer", "journal": "", "ref_id": "b50", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "T Zhang; * ; V Kishore; * ; F Wu; * ; K Q Weinberger; Y Artzi", "journal": "", "ref_id": "b51", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" }, { "authors": "W Zhao; M Peyrard; F Liu; Y Gao; C M Meyer; S Eger", "journal": "", "ref_id": "b52", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "year": "2019-11" }, { "authors": "Y Zhu; S Lu; L Zheng; J Guo; W Zhang; J Wang; Y Yu", "journal": "", "ref_id": "b53", "title": "Texygen: A benchmarking platform for text generation models", "year": "2018-06" } ]
[ { "formula_coordinates": [ 2, 313.73, 565.47, 146.01, 9.65 ], "formula_id": "formula_0", "formula_text": "1 real values E = [c 1 , c 2 , . . . , c T -1 ]," }, { "formula_coordinates": [ 2, 130.6, 604.27, 374, 8.37 ], "formula_id": "formula_1", "formula_text": "E = [c1, c2, . . . , cT -1] ≜ [-log P (t2|t1), -log P (t3|t1, t2), . . . , -log P (tT |t1, t2, . . . , tT -1)] (1)" }, { "formula_coordinates": [ 2, 160.71, 622.95, 132.41, 14.11 ], "formula_id": "formula_2", "formula_text": "c i = - T i=2 log P (t i |t 1 . . . t i-1" }, { "formula_coordinates": [ 3, 113.1, 78.36, 386, 76.4 ], "formula_id": "formula_3", "formula_text": "ℱ ! ∈ ℝ \" ! , ℱ # ∈ ℝ \" \" Interpolated & absolute: ℱ′ ! , ℱ′ # ∈ ℝ \" # 𝑆𝑂 = 𝐴𝑈𝐶(|ℱ $ ! | ∩ |ℱ $ ′ # |) 𝐴𝑈𝐶(|ℱ $ ! | ∪ |ℱ $ # |) 𝐶𝑂𝑅𝑅 = 𝑐𝑜𝑣(ℱ $ ! , ℱ $ # ) 𝜎 ℱ $ ! 𝜎(ℱ $ # ) 𝑆𝐴𝑀 = 𝑎𝑟𝑐𝑐𝑜𝑠 ℱ $ ! ⋅ ℱ $ # 𝜎 ℱ $ ! 𝜎 ℱ $ # 𝑆𝑃𝐸𝐴𝑅 = 1 - 6 ∑ |𝑋 ! (𝜔 % )| -|𝑋 # (𝜔 % )| & 𝑁 ' (𝑁 ' & -1)" }, { "formula_coordinates": [ 3, 204.32, 321.74, 300.35, 30.2 ], "formula_id": "formula_4", "formula_text": "X(ω k ) ≜ N -1 n=0 x(t n )e -jω k tn , k = 0, 1, . . . , N -1(2)" }, { "formula_coordinates": [ 3, 108, 507.81, 396, 20.56 ], "formula_id": "formula_5", "formula_text": "F = [⟨ω 1 , X(ω 1 )⟩, . . . , ⟨ω T -1 , X(ω T -1 )⟩]," }, { "formula_coordinates": [ 3, 108, 606.81, 397.24, 24.39 ], "formula_id": "formula_6", "formula_text": "F h ∈ R N1 ⇒ F ′ h ∈ R N C , F m ∈ R N2 ⇒ F ′ m ∈ R N C ." }, { "formula_coordinates": [ 3, 228.54, 689.94, 158.96, 12.55 ], "formula_id": "formula_7", "formula_text": "SO = AUC(F ′ h ∩ F ′ m )/AUC(F ′ h ∪ F ′ m )" }, { "formula_coordinates": [ 4, 108, 122.7, 125.3, 12.55 ], "formula_id": "formula_8", "formula_text": "arccos(F ′ h • F ′ m /||F ′ h || • ||F ′ m ||)" }, { "formula_coordinates": [ 4, 130.19, 171.82, 151.79, 12.55 ], "formula_id": "formula_9", "formula_text": "CORR = cov(F ′ h , F ′ m )/σ(F ′ h )σ(F ′ m )" }, { "formula_coordinates": [ 5, 414.04, 168.85, 83.05, 10.33 ], "formula_id": "formula_10", "formula_text": "P (i beats j) = 1 1+e -(β i -β j )/100" }, { "formula_coordinates": [ 5, 323.02, 483.08, 47.45, 9.65 ], "formula_id": "formula_11", "formula_text": "β 1 , • • • , β n ." }, { "formula_coordinates": [ 5, 117.41, 578.95, 386.59, 23.96 ], "formula_id": "formula_12", "formula_text": "x m+n | x 1 . . . x m ) = m+n i=m+1 P (x i | x 1 . . . x i-1 ) , where P (x i | x 1 . . . x i-1" }, { "formula_coordinates": [ 9, 213.39, 275.82, 185.23, 20.86 ], "formula_id": "formula_13", "formula_text": "SO (↑) CORR (↑) SAM (↓) SPEAR (↑) h-h (" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b22", "b24", "b0", "b20", "b5", "b5", "b23", "b15", "b7", "b6", "b30", "b18", "b14", "b18", "b14", "b25", "b13", "b2", "b4", "b19", "b14" ], "table_ref": [], "text": "Learning to learn or meta-learning (Schmidhuber, 1987;Thrun & Pratt, 1998), offers a powerful tool for few-shot learning (Andrychowicz et al., 2016;Ravi & Larochelle, 2017;Finn et al., 2017). The crux for few-shot metalearning is to accrue prior meta-knowledge from a set of meta-training tasks, which enables fast adaptation to a new task with limited data. Despite remarkable achievements of existing meta-learning algorithms for few-shot learning (Finn et al., 2017;Snell et al., 2017;Liu et al., 2022;Hu et al., 2022;He et al., 2022) these works depend on a large number of meta-training tasks during training. However, an extensive collection of meta-training tasks is unlikely to be available for many real-world applications. For example, in medical image diagnosis, a shortage of data samples and tasks arises due to the need for specialist labeling by physicians and patient privacy concerns. Additionally, rare disease types (Wang et al., 2017) present challenges for few-shot learning. In this paper, we focus on few-task metalearning, where the number of available tasks at training time is limited.\nTo tackle the few-task meta-learning problem, a variety of task augmentation (Ni et al., 2021;Yao et al., 2021a) and task interpolation (Lee et al., 2022;Yao et al., 2021b) methods have been proposed. The key idea of task augmentation (Ni et al., 2021;Yao et al., 2021a) is to increase the number of tasks from the support set and query set during meta-training. The weakness of these approaches is that they are only able to capture the global task distribution within the distribution of the provided tasks. Task interpolation (Lee et al., 2022;Yao et al., 2021b) generates a new task by interpolating the support and query sets of different tasks by Mixup (Verma et al., 2019) or a neural set function (Lee et al., 2019). Here, a key question is how to combine tasks and at what feature level. For example, the state-of-the-art MLTI by (Yao et al., 2021b) randomly selects the features of a single layer from two known tasks for a linear mixup but ignores all other feature layers for new task generation. It leads to a sub-optimal interpolated task diversity. To address this limitation, we propose a new task modulation strategy that captures the knowledge from one known task at different levels.\nOne key aspect of task modulation is the ability to leverage the representation of a single task at different levels of abstraction. This allows the model to modulate representa-tions of other tasks at varying levels of detail, depending on the specific needs of the new task. Conditional batch normalization (De Vries et al., 2017;Dumoulin et al., 2016;Perez et al., 2018) has been successfully applied to visual question answering and other multi-modal applications. In conditional batch normalization, the normalization parameters (i.e., the scale and shift parameters) are learned from a set of additional input conditions, which can be represented as a set of auxiliary variables or as a separate input branch to the network. This allows the network to adapt to the specific task at hand and improve its performance. Inspired by these general-purpose conditional batch normalization methods, we make in this paper three contributions.\nIn this paper, we propose a method for few-shot learning with fewer tasks called MetaModulation. It contains three key contributions. First, a meta-training task is randomly selected as a base task, and additional task information is introduced as a condition. We predict the scale and shift of the batch normalization for the base task from the conditional task. This allows the model to modulate the statistics of the conditional task on the base task for a more effective task representation. It is also worth noting that our modulation operates on each layer of the neural network, while previous methods (Yao et al., 2021b;Lee et al., 2022) only select a single layer for modulation. Thus, the model can generate more diverse tasks during meta-training, as it utilizes the statistical information of each level of the conditional task. As a second contribution, we introduce variational task modulation, which treats the conditional scale and shifts as latent variables inferred from the conditional task. The optimization is formulated as a variational inference problem, and new evidence lower bound is derived under the meta-learning framework. In doing so, the model obtains probabilistic conditional scale and shift values that are more informative and better represent the distribution of real tasks. As a third contribution, we propose hierarchical variational task modulation, which obtains the probabilistic conditional scale and shifts at each layer of the network. We cast the optimization as a hierarchical variational inference problem in the Bayesian framework; the inference parameters of the conditional scale and shift are jointly optimized in conjunction with the modulated task training.\nTo verify our method, we conduct experiments on four few-task meta-learning benchmarks: miniImagenet-S, ISIC, DermNet-S, and Tabular Murris. We perform a series of ablation studies to investigate the benefits of using a learnable task modulation method at various levels of complexity. Our goal is to illustrate the advantages of increasing task diversity through such a method, as well as demonstrate the benefits of incorporating probabilistic variations in the fewtask meta-learning framework. Our experiments show that MetaModulation consistently outperforms state-of-the-art few-task meta-learning methods on the four benchmarks." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b23", "b8" ], "table_ref": [], "text": "Problem statement. For the traditional few-shot metalearning problem, we deal with tasks T i , as sampled from a task distribution p(T ). We sample N -way k-shot tasks from the meta-training tasks, where k is the number of labeled examples for each of the N classes. Each t-th task includes a support set S t ={(x i , y i )} N ×k i=1 and query set\nQ t ={(x i , ỹi )} m i=1 (S t , Q t ⊆ X )\n. Given a learning model f φ , where φ denotes the model parameters, few-shot learning algorithms attempt to learn φ to minimize the loss on the query set Q i for each of the sampled tasks using the data-label pairs from the corresponding support set S i . After that, during the testing stage, the trained model f φ and the support set S j for new tasks T j perform inference and evaluate performance on the corresponding query set Q j . In this paper, we focus on few-task meta-learning. In this setting, the main challenge is that the number of meta-training tasks T i is limited, which causes the model to overfit easily.\nPrototype-based meta-learning. We develop our method based on the prototypical network (ProtoNet) by Snell et al. (2017). Specifically, ProtoNet leverages a non-parametric classifier that assigns a query point to the class having the nearest prototype in the learned embedding space. The prototype c k of an object class c is obtained by:\nc k = 1 K k f φ (x c,k ), where f φ (x c,k\n) is the feature embedding of the sample x c,k , which is usually obtained by a convolutional neural network. For each query sample x q , the distribution over classes is calculated based on the softmax over distances to the prototypes of all classes in the embedding space:\np(y q n = k|x q ) = exp(-d(f φ (x q ), c k )) k exp(-d(f φ (x q ), c k )) ,(1)\nwhere y q denotes a random one-hot vector, with y q c indicating its n-th element, and d(•, •) is some (Euclidean) distance function. Due to its non-parametric nature, the ProtoNet enjoys high flexibility and efficiency, achieving considerable success in few-shot learning.\nConditional batch normalization. The aim of Batch Normalization (Ioffe & Szegedy, 2015) is to accelerate the training of deep networks by reducing internal covariate shifts. For a layer with d-dimensional input x=(x (1) ...x (d) ) and activation x (k) , batch normalization normalizes each scalar feature as follows:\ny (k) = γ (k) x (k) -E[x (k) ] Var[x (k) ] + + β (k) ,(2)\nwhere is a constant added to the variance for numerical stability. γ (k) \n∆β = MLP(e q ) ∆γ = MLP(e q ),(3)\nwhere e q is an additional language embedding. So, given a feature map with C channels, these MLPs output a vector of size C. They then add these changes to the β and γ parameters:\nβc = β c + ∆β c γc = γ c + ∆γ c .(4)\nFinally, the updated β and γ are used as transformation parameters for the batch normalization (eq. ( 2)) of vision features. Rather than using a language embedding for the conditioning, we randomly select one additional task as a condition to predict the scale and shift of the batch normalization for another task. " }, { "figure_ref": [], "heading": "Meta", "publication_ref": [], "table_ref": [], "text": "Ĥs,l n = λH s,l i;n + (1 -λ)H s,l j;n ,(5)\nĤq,l n = λH q,l i;n + (1 -λ)H q,l j;n ,(6)\nwhere l indicates the l-th layer (0 ≤ l ≤ L), and λ ∈ [0, 1] is sampled from a Beta distribution Beta(α, β). The interpolated support samples Ĥs,l cn;n and query samples Ĥq,l cn;n can be regarded as the new classes in the interpolated task. However, MLTI (Yao et al., 2021b) randomly selects only the features of a single layer from two known tasks to be mixed and ignores all the other feature layers. It leads to the interpolated task's diversity being limited and therefore does not increase the generalizability of the model." }, { "figure_ref": [], "heading": "MetaModulation", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose MetaModulation for few-task meta-learning. We first introduce meta task modulation in section 3.1. To obtain more diverse meta-training tasks, we then propose variational task modulation in section 3.2, which introduces variational inference into the modulation. We also introduce hierarchical meta variational modulation in section 3.3, which adds variational modulation to each network layer to obtain a richer task distribution." }, { "figure_ref": [ "fig_0" ], "heading": "Meta task modulation", "publication_ref": [], "table_ref": [], "text": "To address the single layer limitation in MLTI (Yao et al., 2021b), we introduce meta task modulation for few-task meta-learning, which modulates the features of two different tasks at different layers. We modulate all layers of samples from a meta-training task T j by predicting the γ and β of the batch normalization from base task T i . Following CBN (De Vries et al., 2017), we only predict the change ∆β c and ∆γ c on the original scalars from the task T i , which benefits training stability.\nSpecifically, to infer the conditional scale and shift ∆β c and ∆γ c , we deploy two functions f l β (•) and f l γ (•) that take the activations H l i;n from task T i as input, and the output are ∆β l i;n;c and ∆γ l i;n;c . The functions f l β (•) and f l γ (•) are parameterized by two feed-forward multi-layer perceptrons:\n∆β s,l i;n;c = MLP(H s,l i;n ) ∆γ s,l i;n;c = MLP(H s,l i;n ) (7)\nwhere ∆γ s,l i;n;c and ∆γ s,l i;n;c are the changes of the support set. We obtain ∆γ q,l i;n;c and ∆γ q,l i;n;c of the query set by the same strategy. Note that the functions f l β (•) and f l γ (•) are shared by different channels in same layer and we learn L pairs of those functions if we have L convolutional layers.\nUsing the above functions, we generate the changes for the batch normalization scale and shift, then following eq. ( 4), we add these changes to the original β l j;n;c and γ l j;n;c from task T j : βs,l j;n;c = β s,l j;n;c +∆β s,l i;n;c γs,l j;n;c = γ s,l j;n;c +∆γ s,l i;n;c (8) Once we obtain the modulated scale γl i;n;c and shift βl i;n;c , we compute the modulated features for the support and query set from task T j based on eq. ( 2):\nĤs,l n = γs,l i;n;c H s,l j;n -E[H s,l j;n ] Var[H s,l j;n ] + + βs,l j;n;c ,(9)\nĤq,l n = γq,l i;n;c\nH q,l j;n -E[H q,l j;n ] Var[H q,l j;n ] + + βq,l j;n;c ,(10)\nwhere E[H l i;n ] and Var[H l i;n ] are the mean and variance of samples features from T j . We illustrate the meta task modulation process in Figure 1.\nHowever, the deterministic conditional scale and shift are not sufficiently representative of modulated tasks. Moreover, uncertainty is inevitable due to the scarcity of data and tasks, which should also be encoded into the conditional scale and shift. In the next section, we derive a probabilistic latent variable model by modeling conditional scale and shift as distributions, which we learn by variational inference." }, { "figure_ref": [ "fig_1" ], "heading": "Variational task modulation", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce variational task modulation using a latent variable model in which we treat the conditional scale ∆β s,l i;n;c and shift ∆γ s,l i;n;c as latent variables z inferred from one known task. We formulate the optimization of variational task modulation as a variational inference problem by deriving a new evidence lower bound (ELBO) under the meta-learning framework.\nFrom a probabilistic perspective, the conditional latent scale and shift maximize the conditional predictive log-likelihood from two known tasks T i , T j .\nmax p log p(ŷ|T i , T j ) = max p log p(ŷ|x q , xs )p(x q , xs |T i , T j )dx q dx s = max p log p(ŷ|x q , xs )p(x q , xs |z, T j )p(z|T i )dzdx q dx s (11)\nwhere xs , xq are the support sample and query sample of the modulated task T . Since p(z, xq , xs |Ti, Tj)=p(x q , xs |z, Tj)p(z|Ti) is generally intractable, we resort to a variational posterior q(z, xq , xs |T j ) for its approximation. We obtain the variational distribution by minimizing the Kullback-Leibler (KL) divergence:\nD KL [q(z, xq , xs |T j )||p(z, xq , xs |T i , T j ). (12\n)\nBy applying the Baye's rule to the posterior q(z, xq , xs |Ti) , we derive the ELBO as: The second term in the ELBO can also be simplified. Since\nlog p(ŷ|Ti, Tj) ≥E q(z,x q ,x s ) [log p(ŷ|x q , xs )] -DKL [q(z, xq , xs |Tj)||p(z, xq , xs |Ti, Tj)](13)\nD KL [q(z, xq , xs )|T i ||p(z, ẑ|T i , T j )] = E q(z,x q ,x s ) log q(z, x |T i ) p(z, x |T i , T j ) ,(14)\nand q(z, xq , xs |T j ) = p(x q , xs |z, T j )q(z),\nwe then combine eq. ( 14), eq. ( 15) and eq. ( 11), to obtain:\nE q(z,x q ,x s ) log q(z, xq , xs |T j ) p(z, xq , xs |T i , T j ) = E q(z,x q ,x s ) log p(x q , xs |z, T i )q(z) p(x q , xs |z, T i )p(z|T i ) = E q(z) log q(z) p(z|T i ) = D KL [q(z)||p(z|T i )] .(16)\nThis provides the final ELBO for the variational task modulation:\nq(z, xq , xs |T i ) ≥ E q(z,x q ,x s ) [log p(ŷ|x q , xs )] -D KL [q(z)||p(z|T i )](17)\nThe overall computation graph of variational task modulation is shown in Figure 2.\nDirectly optimizing the above objective does not take into account the task information of all model layers, since it only focuses on the conditional latent scale and shift at a specific layer. Thus, we introduce hierarchical variational inference into the variational task modulation by conditioning the posterior on both the known tasks and the conditional latent scale and shift from the previous layers." }, { "figure_ref": [ "fig_2" ], "heading": "Hierarchical variational task modulation", "publication_ref": [ "b11" ], "table_ref": [], "text": "We replace variational distribution in eq. ( 12) with a new conditional distribution q(z l , xq , xs |z l-1 , T j ) that makes latent scale and shift of current l-th layer also dependent on the latent scale and shift from the upper l-1-th layers. The hierarchical variational inference gives rise to a new ELBO, as follows:\nq(z, xq , xs |T i ) ≥ E q(z l ,x q ,x s |z l-1 ) [log p(ŷ|x q , xs )] -D KL q(z l |z l-1 )||p(z l |z l-1 , T i )(18)\nThe graphical model of hierarchical variational task modulation is shown in Figure 3.\nIn practice, the prior p(z l |z l-1 , T i ) is implemented by an amortization network (Kingma & Welling, 2013) that takes the concatenation of the average feature representations of samples in the support set from T i and the upper layer latent scale and shift z l-1 and returns the mean and variance of the current layer latent scale and shift z l . To enable back-propagation with the sampling operation during training, we adopt the reparametrization trick (Rezende et al., 2014; Kingma & Welling, 2013) as z=z µ + z σ , where ∼ N (0, I). The hierarchical probabilistic scale and shift provide a more informative task representation than the deterministic meta task modulation and have the ability to capture different representation levels, thus modulating more diverse tasks for few-task meta-learning.\nIn the meta-training stage, we use the known meta-training tasks T i with our meta task modulation and its variational variants to generate the new tasks T for the meta-training. To ensure that the original tasks are also trained together, we train the generated tasks together with the original tasks. Thus the loss function of our meta task modulation L MTM is as follows:\nLMTM = 1 T T i ( Ŝi , Qi )∼ Ti LCE + λ (S i ,Q i )∼T i LCE . (19\n)\nThe loss of variational task modulation L VTM is\nLVTM = 1 T T i,j (x q ,ŷ)∈ Q -E q(z,x q ,x s ) [log p(ŷ|x q , xs )] + βDKL [q(z)||p(z|Ti)] + λ 1 T T i (S i ,Q i )∼T i LCE.(20)\nAnd the loss of hierarchical variational task modulation can be written as\nLHVTM = 1 T T i,j (x q ,ŷ)∈ Q -E q(z l ,x q ,x s |z l-1 ) [log p(ŷ|x q , xs )]\n-βDKL q(z l |z l-1 )||p(z l |z l-1 , Ti)\n+ λ 1 T T i (S i ,Q i )∼T i LCE,(21)\nwhere L CE is the cross-entropy loss,\nLCE = 1 NC NQ d(f φ (x q ), c k ) + log k exp(-d(f φ (x q ), c k )) ,(22)\nN C and N Q are the number of prototypes and query samples in each task, and λ > 0 and β > 0 are the regularization hyper-parameters.\nIn the meta-test stage, we directly input the support set S using the meta-trained feature extractor f φ (•) to obtain the prototype c k from the test task. Then we obtain the prediction of the query set x q for performance evaluation based on eq. ( 1). " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b23", "b25", "b23", "b23", "b16" ], "table_ref": [ "tab_2", "tab_7", "tab_3", "tab_4", "tab_5", "tab_7" ], "text": "Benefit of meta task modulation. To show the benefit of meta task modulation, we first compare our method with a vanilla Prototypical network (Snell et al., 2017) on all tasks, without using task interpolation, in Table 1. Our model performs better under various shot configurations on all few-task meta-learning benchmarks. We then compare our model with the state-of-the-art MLTI (Yao et al., 2021b) in Table 5, which interpolates the task distribution by Mixup (Verma et al., 2019). Our meta task modulation also compares favorably to MLTI under various shot configurations. On ISIC, for example, we surpass MLTI by 2.71% on the 5-way 5-shot setting. This is because our model can learn how to modulate the base task features to better capture the task distribution instead of using linear interpolation as described in the (Yao et al., 2021b).\nBenefit of variational task modulation. We investigate the benefit of variational task modulation by comparing it with deterministic meta task modulation. The results are reported on miniImageNet-S under various shots in Table 2. {1 st , 2 nd , 3 rd , 4 th }, random and, all are the selected determined layer, the randomly chosen one layer and all the layers to be modulated, respectively. The variational task modulation consistently outperforms the deterministic meta task modulation on any selected layers, demonstrating the benefit of probabilistic modeling. By using probabilistic task modulation, the base task can be modulated in a more informative way, allowing it to encompass a larger range of task distributions and ultimately improve performance on the meta-test task.\nHierarchical vs. flat variational task modulation. We compare hierarchical modulation with flat variational modulation, which only selects one layer to modulate. As shown in Table 3, the hierarchical variational modulation improves the overall performance under both the 1-shot and 5-shot settings on all three benchmarks. The hierarchical structure is well-suited for increasing the density of the task distribution across different levels of features, which leads to better performance compared to flat variational modulation. This makes sense because the hierarchical structure allows for more informative transformations of the base task, enabling it to encompass a broader range of task distributions. Note that, we use hierarchical variational task modulation to compare the state-of-the-art methods in the subsequent experiments.\nInfluence of the number of meta-training tasks. In Figure 4, we analyze the effect of the number of available meta- training tasks on the performance of our model under a 5shot setting on miniImageNet-S. Naturally, our model's performance improves, as the number of meta-training classes increases. The number of meta-training tasks is important for making the model more generalizable through metalearning. More interesting, our model's performance is considerably improved by using a learnable modulation that incorporates information from different levels of the task. Compared to the best result of a vanilla prototype network, 63.7% for 64 meta-training classes, we can reduce the number of classes to 40 for the same accuracy.\nCross-domain adaptation ability. To further evaluate the effectiveness of our proposed method, we conducted additional tests to assess the performance of MetaModulation in cross-domain adaptation scenarios. We trained Meta-Modulation on one source domain and then evaluated it on a different target domain. Specifically, we chose the miniImagenet-S and Dermnet-S domains. The results, as shown in Table 4, indicate MetaModulation generalizes better even in this more challenging scenario.\nAnalysis of modulated tasks. To understand how our Meta-Modulation is able to improve performance, we plotted the similarity between the vanilla, interpolated and modulated tasks and the meta-test tasks in Figure 5. Red numbers indicate the accuracy per model on each task. Specifically, we select 4 meta-test tasks and 300 meta-train tasks per model from the 1-shot miniImagenet-S setting to compute the task representation of each model. We then used instance pooling to obtain the representation of each task. Instance pooling involves combining a task's support and query sets and averaging the feature vectors of all instances to obtain a fixed-size prototype representation. This approach allows us to represent each task by a single vector that captures the essence of the task. We calculated the similarity between meta-train and meta-test tasks using Euclidean distance. When using the vanilla prototype model (Snell et al., 2017) directly, the similarity between meta-train and meta-test tasks is extremely low, indicating a significant difference in task distribution between meta-train and meta-test. This results in poor performance as seen in Figure 5 red numbers due to the distribution shift. However, the tasks modulated by our MetaModulation have a higher similarity with the meta-test tasks compared to the vanilla (Snell et al., 2017) and MLTI (Yao et al., 2021b), resulting in high accuracy. But, the similarity between the modulated tasks by our Meta-Modulation and T 4 is also relatively low and performance is also poor. This may be because the task distribution of T 4 is an outlier in the entire task distribution, making it hard to mimic this task during meta-training. Future work could investigate ways to mimic these outlier tasks in the meta-training tasks.\nComparison with state-of-the-art. We evaluate MetaModulation on the four different datasets under 5-way 1-shot and 5-way 5-shot in Table 5. Our model achieves stateof-the-art performance on all four few-task meta-learning benchmarks under each setting. On miniImagenet-S, our model achieves 43.21% under 1-shot, surpassing the secondbest MLTI (Yao et al., 2021b), by a margin of 1.85%. On ISIC (Milton, 2019), our method delivers 76.40% for 5shot, outperforming MLTI (Yao et al., 2021b) with 4.88%. Even on the most challenging DermNet-S, which forms the largest dermatology dataset, our model delivers 50.45% on the 5-way 1-shot setting. The consistent improvements on all benchmarks under various configurations confirm that our approach is effective for few-task meta-learning." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b13" ], "table_ref": [], "text": "Few-task meta-learning. In few-task meta-learning, the goal is to develop meta-learning algorithms that learn quickly and efficiently from a small number of examples with limited tasks in order to adapt to new tasks with minimal additional training. A common strategy for few-task meta-learning is task augmentation ( (Lee et al., 2019) to interpolate a given set of tasks and trains the interpolating function using bilevel optimization so that the meta-learner trained with the augmented tasks generalizes to meta-validation tasks. Both task augmentation and interpolation methods often randomly mix the features of two known tasks in a linear way without considering the features of other layers. This limits the diversity of the interpolated task and its potential benefit for increasing model generalizability. In contrast, we propose a learnable task modulation method that enables the model to learn a more diverse set of tasks by considering the features of each layer and allowing for a non-linear modulation between tasks. " }, { "figure_ref": [], "heading": "Conditional batch normalization.", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we addressed the issue of meta-learning algorithms requiring a large number of meta-training tasks which may not be readily available in real-world situations. We propose MetaModulation, which is to use a neural network to increase the density of the meta-training tasks by modulating batch normalization parameters during metatraining. Our MetaModulation consists of three different implementations. First is the meta task modulation, which modified parameters at various levels of the neural network to increase task diversity. Furthermore, we proposed a variational meta task modulation where the modulation parameters are treated as latent variables. We also introduced learning variational feature hierarchies by the variational meta task modulation. Our ablation studies showed the advantages of utilizing a learnable task modulation at different levels and the benefit of incorporating probabilistic variants in few-task meta-learning. Our MetaModulation and its variational variants consistently outperformed stateof-the-art few-task meta-learning methods on four few-task meta-learning benchmarks." }, { "figure_ref": [], "heading": "Acknowledgment", "publication_ref": [], "table_ref": [], "text": "This work is financially supported by the Inception Institute of Artificial Intelligence, the University of Amsterdam and the allowance Top consortia for Knowledge and Innovation (TKIs) from the Netherlands Ministry of Economic Affairs and Climate Policy, the National Key R&D Program of China (2022YFC2302704), the Special Foundation of President of the Hefei Institutes of Physical Science (YZJJ2023QN06), and the Postdoctoral Researchers' Scientific Research Activities Funding of Anhui Province (2022B653)." }, { "figure_ref": [], "heading": "A. Dataset.", "publication_ref": [], "table_ref": [], "text": "We apply our method to four few-task meta-learning image classification benchmarks. Sample images from each dataset are provided in Figure 6. B. Effect of the β.\nWe test the impact of β in (20) and ( 21). The value of β control how much information in the base task will be modulated during the meta-training stage. The experimental results on the three datasets under both 1-shot and 5-shot setting are shown in Figure 7 and8. We can see that the performance achieves the best when the values of β are 0.01. This means that in each modulate we need to keep the majority of base task. C. Effect of the λ.\nWe would like to emphasize that the hyper-parameters λ (Eq. 19, 20, 21) enable us to introduce constraints on new tasks, beyond just minimizing prediction loss. By adjusting the value of λ, we can control the trade-off between the prediction loss of the new tasks and the constraints imposed by the meta-training tasks. To clarify the impact of λ, we performed an ablation on the HVTM (Eq. 21). The results in Table 6 show that when the original tasks have higher weight, the performance is worse. Additionally, we have conducted experiments to investigate the distribution differences between the meta-training and generated tasks. Specifically, in Table 6, we analyze the task representations of meta-training and generated tasks and show that they are similar, indicating that the generated tasks do indeed have a similar distribution as the meta-training tasks." }, { "figure_ref": [], "heading": "miniImagenet-S ISIC", "publication_ref": [], "table_ref": [], "text": "1-shot " } ]
Meta-learning algorithms are able to learn a new task using previously learned knowledge, but they often require a large number of meta-training tasks which may not be readily available. To address this issue, we propose a method for fewshot learning with fewer tasks, which we call MetaModulation. The key idea is to use a neural network to increase the density of the metatraining tasks by modulating batch normalization parameters during meta-training. Additionally, we modify parameters at various network levels, rather than just a single layer, to increase task diversity. To account for the uncertainty caused by the limited training tasks, we propose a variational MetaModulation where the modulation parameters are treated as latent variables. We also introduce learning variational feature hierarchies by the variational MetaModulation, which modulates features at all layers and can consider task uncertainty and generate more diverse tasks. The ablation studies illustrate the advantages of utilizing a learnable task modulation at different levels and demonstrate the benefit of incorporating probabilistic variants in few-task meta-learning. Our MetaModulation and its variational variants consistently outperform state-of-the-art alternatives on four few-task meta-learning benchmarks.
MetaModulation: Learning Variational Feature Hierarchies for Few-Shot Learning with Fewer Tasks
[ { "figure_caption": "Figure 1 .1Figure 1. Meta task modulation. Various combinations of the transformation parameters γ and β from task Ti can modulate the individual activation of task Tj at different layers, which can make the newly generated task more diverse.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Variational task modulation. x and ŷ denote the sample and label of newly generated task T and z represents the latent modulation parameters.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Hierarchical variational task modulation. z l indicates the latent modulation parameters at the layer l. The latent transformation parameter z l is depend on the task Ti and the upper z l-1 .", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "35", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Batch normalization (Ioffe & Szegedy, 2015) is a crucial milestone in the development of deep neural networks. Conditional batch normalization (CBN) (De Vries et al., 2017) allows a neural network to learn different normalization parameters per class of input data. Note the contrast to traditional batch normalization, which uses the same normalization parameters for all inputs to a network layer. By conditioning the normalization on additional information, such as the class labels of the training examples, CBN allows the network to adapt its normalization parameters to the specific class characteristics. Similarly, Perez et al. (Perez et al., 2018) propose the feature-wise linear modulation layer for deep neural networks. In this paper, we take inspiration from conditional batch normalization and propose meta task modulation for few-task meta-learning, where the condition stems from the samples of a meta-training task. We use the conditional task as the condition, instead of data from another modality as in (De Vries et al., 2017), to predict the scale and shift parameters of the batch normalization for the base task.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Benefit of meta task modulation in (%) on three fewtask meta-learning challenges. Our meta task modulation (MTM) achieves better performance compared to a vanilla ProtoNet.", "figure_data": ",", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Benefit of variational task modulation for varying layers on miniImageNet-S. Variational task modulation (VTM) improves over any of the selected individual layers using MTM.", "figure_data": "", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hierarchical vs. flat variational modulation. Hierarchical variational task modulation (HVTM) is more effective than flat variational task modulation (VTM) for few-task meta-learning. Influence of the number of meta-training tasks for 5-way 5-shot on miniImageNet. All MetaModulation implementations improve over a vanilla prototype network, especially when fewer tasks are available for meta-learning. Where a vanilla network requires 64 tasks to reach 63.7% accuracy, we need 40.", "figure_data": "miniImagenet-SISICDermNet-S1-shot5-shot1-shot 5-shot 1-shot 5-shotVTM42.0555.8264.04 72.59 49.19 64.62HVTM 43.2157.2665.16 76.40 50.45 67.056664Accuracy (%)52 54 56 58 60 62Vanilla MTM VTM HVTM1225 Number of meta-training classes 38 5164Figure 4.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "This paper 37.15±0.75 53.92 ± 1.01 31.56 ± 0.68 44.13 ± 0.92 Cross-domain adaptation ability. MetaModulation achieves better performance even in a challenging cross-domain adaptation setting compared to a vanilla prototype network and MLTI by Yao et al. (2021b).", "figure_data": "mini → DermnetDermnet → mini1-shot5-shot1-shot5-shotVanilla33.1250.1328.1140.35MLTI35.4651.7930.0642.23ATA35.83±0.58 51.65±0.6--", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Figure 5. Analysis of modulated tasks. Similarity of metatraining tasks to meta-test tasks for different methods, and the corresponding accuracy (red numbers) for the meta-test tasks. The tasks modulated by MetaModulatation have high similarity with the meta-test tasks, resulting in high accuracy.", "figure_data": "0.629.735.137.338.239.10.534.239.742.243.543.90.40.333.542.443.742.944.10.2.141.342.142.743.30.1Vanilla MLTI MTM VTM HVTM", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-art. All results, except for the MetaInterpolation (Lee et al., 2022), are sourced from MLTI (Yao et al., 2021b). MetaModulation is a consistent top performer for all settings and datasets. new query set. Another approach is to rely on unsupervised or self-supervised learning to generate additional tasks from the training data (Vu et al., 2021; Wang & Deng, 2021). An alternative few-task meta-learning strategy is task interpolation (Yao et al., 2021b; Lee et al., 2022), which trains a model to learn from a set of interpolated tasks. For example, MLTI (Yao et al., 2021b) performs Manifold Mixup on support and query sets from two tasks for task augmentation.", "figure_data": "Yao et al., 2021a; Vu", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" } ]
Wenfang Sun; Yingjun Du; Xiantong Zhen; Fan Wang; Ling Wang; Cees G M Snoek
[ { "authors": "M Andrychowicz; M Denil; S Gomez; M W Hoffman; D Pfau; T Schaul; B Shillingford; N De Freitas", "journal": "", "ref_id": "b0", "title": "Learning to learn by gradient descent by gradient descent", "year": "2016" }, { "authors": "K Cao; M Brbic; J Leskovec", "journal": "ICLR", "ref_id": "b1", "title": "Concept learners for few-shot learning", "year": "2020" }, { "authors": "H De Vries; F Strub; J Mary; H Larochelle; O Pietquin; A C Courville", "journal": "NeurIPS", "ref_id": "b2", "title": "Modulating early visual processing by language", "year": "2017" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b3", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "V Dumoulin; J Shlens; M Kudlur", "journal": "", "ref_id": "b4", "title": "A learned representation for artistic style", "year": "2016" }, { "authors": "C Finn; P Abbeel; S Levine", "journal": "", "ref_id": "b5", "title": "Model-agnostic metalearning for fast adaptation of deep networks", "year": "2017" }, { "authors": "Y He; W Liang; D Zhao; H.-Y Zhou; W Ge; Y Yu; W Zhang", "journal": "", "ref_id": "b6", "title": "Attribute surrogates learning and spectral tokens pooling in transformers for few-shot learning", "year": "2022" }, { "authors": "S X Hu; D Li; J Stühmer; M Kim; T M Hospedales", "journal": "", "ref_id": "b7", "title": "Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference", "year": "2022-06" }, { "authors": "S Ioffe; C Szegedy", "journal": "PMLR", "ref_id": "b8", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "M A Jamal; G.-J Qi", "journal": "", "ref_id": "b9", "title": "Task agnostic meta-learning for few-shot learning", "year": "2019" }, { "authors": "D P Kingma; J Ba; Adam", "journal": "", "ref_id": "b10", "title": "A method for stochastic optimization", "year": "2014" }, { "authors": "D P Kingma; M Welling", "journal": "", "ref_id": "b11", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "H B Lee; T Nam; E Yang; S J Hwang", "journal": "", "ref_id": "b12", "title": "Meta dropout: Learning to perturb latent features for generalization", "year": "2020" }, { "authors": "J Lee; Y Lee; J Kim; A Kosiorek; S Choi; Y W Teh", "journal": "", "ref_id": "b13", "title": "Set transformer: A framework for attention-based permutation-invariant neural networks", "year": "2019" }, { "authors": "S Lee; B Andreis; K Kawaguchi; J Lee; S J Hwang", "journal": "NeurIPS", "ref_id": "b14", "title": "Set-based meta-interpolation for few-task metalearning", "year": "2022" }, { "authors": "Y Liu; W Zhang; C Xiang; T Zheng; D Cai; X He", "journal": "", "ref_id": "b15", "title": "Learning to affiliate: Mutual centralized learning for few-shot classification", "year": "2022-06" }, { "authors": "M A A Milton", "journal": "", "ref_id": "b16", "title": "Automated skin lesion classification using ensemble of deep neural networks in isic 2018: Skin lesion analysis towards melanoma detection challenge", "year": "2019" }, { "authors": "S Murty; T B Hashimoto; C D Manning; Dreca", "journal": "", "ref_id": "b17", "title": "A general task augmentation strategy for few-shot natural language inference", "year": "2021" }, { "authors": "R Ni; M Goldblum; A Sharaf; K Kong; T Goldstein", "journal": "PMLR", "ref_id": "b18", "title": "Data augmentation for meta-learning", "year": "2021" }, { "authors": "E Perez; F Strub; H De Vries; V Dumoulin; A Courville", "journal": "", "ref_id": "b19", "title": "Film: Visual reasoning with a general conditioning layer", "year": "2018" }, { "authors": "S Ravi; H Larochelle", "journal": "", "ref_id": "b20", "title": "Optimization as a model for few-shot learning", "year": "2017" }, { "authors": "D J Rezende; S Mohamed; D Wierstra", "journal": "PMLR", "ref_id": "b21", "title": "Stochastic backpropagation and approximate inference in deep generative models", "year": "2014" }, { "authors": "J Schmidhuber", "journal": "", "ref_id": "b22", "title": "Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta", "year": "1987" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "NeurIPS", "ref_id": "b23", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "S Thrun; L Pratt", "journal": "Springer Science & Business Media", "ref_id": "b24", "title": "Learning to learn", "year": "1998" }, { "authors": "V Verma; A Lamb; C Beckham; A Najafi; I Mitliagkas; D Lopez-Paz; Y Bengio", "journal": "", "ref_id": "b25", "title": "Manifold mixup: Better representations by interpolating hidden states", "year": "2019" }, { "authors": "O Vinyals; C Blundell; T Lillicrap; K Kavukcuoglu; D Wierstra", "journal": "", "ref_id": "b26", "title": "Matching networks for one shot learning", "year": "2016" }, { "authors": "T Vu; M.-T Luong; Q V Le; G Simon; M Iyyer", "journal": "", "ref_id": "b27", "title": "Strata: Self-training with task augmentation for better few-shot learning", "year": "2021" }, { "authors": "H Wang; Z.-H Deng", "journal": "", "ref_id": "b28", "title": "Cross-domain few-shot classification via adversarial task augmentation", "year": "2021" }, { "authors": "H Wang; H Mai; Y Gong; Z.-H Deng", "journal": "Artificial Intelligence", "ref_id": "b29", "title": "Towards well-generalizing meta-learning via adversarial task augmentation", "year": "2023" }, { "authors": "X Wang; Y Peng; L Lu; Z Lu; M Bagheri; R M Summers", "journal": "", "ref_id": "b30", "title": "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases", "year": "2017" }, { "authors": "Y Wu; L.-K Huang; Y Wei", "journal": "", "ref_id": "b31", "title": "Adversarial task upsampling for meta-learning", "year": "2022" }, { "authors": "H Yao; L.-K Huang; L Zhang; Y Wei; L Tian; J Zou; J Huang", "journal": "PMLR", "ref_id": "b32", "title": "Improving generalization in metalearning via task augmentation", "year": "2021" }, { "authors": "H Yao; L Zhang; C Finn", "journal": "", "ref_id": "b33", "title": "Meta-learning with fewer tasks through task interpolation", "year": "2008" }, { "authors": "J Zhou; Y Zheng; J Tang; J Li; Z Yang", "journal": "", "ref_id": "b34", "title": "Flipda: Effective and robust data augmentation for few-shot learning", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 307.44, 159.75, 131.64, 12.32 ], "formula_id": "formula_0", "formula_text": "Q t ={(x i , ỹi )} m i=1 (S t , Q t ⊆ X )" }, { "formula_coordinates": [ 2, 327.67, 375.29, 148.12, 13.47 ], "formula_id": "formula_1", "formula_text": "c k = 1 K k f φ (x c,k ), where f φ (x c,k" }, { "formula_coordinates": [ 2, 339.13, 455.2, 202.32, 23.68 ], "formula_id": "formula_2", "formula_text": "p(y q n = k|x q ) = exp(-d(f φ (x q ), c k )) k exp(-d(f φ (x q ), c k )) ,(1)" }, { "formula_coordinates": [ 2, 356.74, 638.06, 184.7, 23.11 ], "formula_id": "formula_3", "formula_text": "y (k) = γ (k) x (k) -E[x (k) ] Var[x (k) ] + + β (k) ,(2)" }, { "formula_coordinates": [ 3, 98.24, 187.61, 191.2, 9.65 ], "formula_id": "formula_4", "formula_text": "∆β = MLP(e q ) ∆γ = MLP(e q ),(3)" }, { "formula_coordinates": [ 3, 101.32, 264.34, 188.12, 12.28 ], "formula_id": "formula_5", "formula_text": "βc = β c + ∆β c γc = γ c + ∆γ c .(4)" }, { "formula_coordinates": [ 3, 115.98, 488.62, 173.46, 12.43 ], "formula_id": "formula_6", "formula_text": "Ĥs,l n = λH s,l i;n + (1 -λ)H s,l j;n ,(5)" }, { "formula_coordinates": [ 3, 115.92, 520.33, 173.52, 12.43 ], "formula_id": "formula_7", "formula_text": "Ĥq,l n = λH q,l i;n + (1 -λ)H q,l j;n ,(6)" }, { "formula_coordinates": [ 3, 320.23, 557.04, 221.21, 13.68 ], "formula_id": "formula_8", "formula_text": "∆β s,l i;n;c = MLP(H s,l i;n ) ∆γ s,l i;n;c = MLP(H s,l i;n ) (7)" }, { "formula_coordinates": [ 4, 99.61, 110.07, 189.83, 29.38 ], "formula_id": "formula_9", "formula_text": "Ĥs,l n = γs,l i;n;c H s,l j;n -E[H s,l j;n ] Var[H s,l j;n ] + + βs,l j;n;c ,(9)" }, { "formula_coordinates": [ 4, 149.55, 154.39, 139.89, 29.38 ], "formula_id": "formula_10", "formula_text": "H q,l j;n -E[H q,l j;n ] Var[H q,l j;n ] + + βq,l j;n;c ,(10)" }, { "formula_coordinates": [ 4, 55.44, 479.59, 237.72, 77.29 ], "formula_id": "formula_11", "formula_text": "max p log p(ŷ|T i , T j ) = max p log p(ŷ|x q , xs )p(x q , xs |T i , T j )dx q dx s = max p log p(ŷ|x q , xs )p(x q , xs |z, T j )p(z|T i )dzdx q dx s (11)" }, { "formula_coordinates": [ 4, 90.69, 636.7, 194.6, 9.79 ], "formula_id": "formula_12", "formula_text": "D KL [q(z, xq , xs |T j )||p(z, xq , xs |T i , T j ). (12" }, { "formula_coordinates": [ 4, 285.29, 637.15, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 4, 57.99, 683.66, 231.45, 33.34 ], "formula_id": "formula_14", "formula_text": "log p(ŷ|Ti, Tj) ≥E q(z,x q ,x s ) [log p(ŷ|x q , xs )] -DKL [q(z, xq , xs |Tj)||p(z, xq , xs |Ti, Tj)](13)" }, { "formula_coordinates": [ 4, 352.34, 241.32, 189.1, 43.21 ], "formula_id": "formula_15", "formula_text": "D KL [q(z, xq , xs )|T i ||p(z, ẑ|T i , T j )] = E q(z,x q ,x s ) log q(z, x |T i ) p(z, x |T i , T j ) ,(14)" }, { "formula_coordinates": [ 4, 342.67, 350.98, 198.77, 93.61 ], "formula_id": "formula_17", "formula_text": "E q(z,x q ,x s ) log q(z, xq , xs |T j ) p(z, xq , xs |T i , T j ) = E q(z,x q ,x s ) log p(x q , xs |z, T i )q(z) p(x q , xs |z, T i )p(z|T i ) = E q(z) log q(z) p(z|T i ) = D KL [q(z)||p(z|T i )] .(16)" }, { "formula_coordinates": [ 4, 324.13, 487.63, 217.31, 26.67 ], "formula_id": "formula_18", "formula_text": "q(z, xq , xs |T i ) ≥ E q(z,x q ,x s ) [log p(ŷ|x q , xs )] -D KL [q(z)||p(z|T i )](17)" }, { "formula_coordinates": [ 5, 61.65, 222.96, 227.79, 28.43 ], "formula_id": "formula_19", "formula_text": "q(z, xq , xs |T i ) ≥ E q(z l ,x q ,x s |z l-1 ) [log p(ŷ|x q , xs )] -D KL q(z l |z l-1 )||p(z l |z l-1 , T i )(18)" }, { "formula_coordinates": [ 5, 60.37, 564.14, 225.34, 28.54 ], "formula_id": "formula_20", "formula_text": "LMTM = 1 T T i ( Ŝi , Qi )∼ Ti LCE + λ (S i ,Q i )∼T i LCE . (19" }, { "formula_coordinates": [ 5, 285.71, 574.98, 3.73, 7.77 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 5, 56.27, 632.49, 233.17, 72.55 ], "formula_id": "formula_22", "formula_text": "LVTM = 1 T T i,j (x q ,ŷ)∈ Q -E q(z,x q ,x s ) [log p(ŷ|x q , xs )] + βDKL [q(z)||p(z|Ti)] + λ 1 T T i (S i ,Q i )∼T i LCE.(20)" }, { "formula_coordinates": [ 5, 307.44, 86.37, 236.8, 28.54 ], "formula_id": "formula_23", "formula_text": "LHVTM = 1 T T i,j (x q ,ŷ)∈ Q -E q(z l ,x q ,x s |z l-1 ) [log p(ŷ|x q , xs )]" }, { "formula_coordinates": [ 5, 348.5, 141.35, 192.94, 37.7 ], "formula_id": "formula_24", "formula_text": "+ λ 1 T T i (S i ,Q i )∼T i LCE,(21)" }, { "formula_coordinates": [ 5, 307.44, 199.14, 238.32, 32.84 ], "formula_id": "formula_25", "formula_text": "LCE = 1 NC NQ d(f φ (x q ), c k ) + log k exp(-d(f φ (x q ), c k )) ,(22)" } ]
10.1023/A:1015674004201
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b15", "b19", "b28", "b31", "b7", "b27", "b26", "b29", "b21", "b20", "b23", "b30", "b28", "b2", "b18", "b35", "b32", "b0", "b13", "b3", "b4", "b16", "b12", "b25", "b14", "b9" ], "table_ref": [], "text": "The practice of targeting weeds and weed patches individually is known as site-specific weed control (SSWC), which offers significant opportunities for more efficient and effective weed management (Lopez-Granados, 2011;Mortensen et al., 1995). SSWC transforms the weed control-cost equation from one based on area to one based on weed density. It is an attractive prospect for growers, with substantial savings in herbicide inputs (Timmermann et al., 2003) and/or can reduce the area of tillage required (Walsh et al., 2020) when weed densities are low. Importantly, it is also an opportunity for the viable deployment of alternative weed control options (e.g., lasers, waterjet cutting and abrasive grit) that would be impractical in large-scale production systems (Coleman et al., 2019). Yet, as identified over many years (Thompson et al., 1991), the major bottleneck for SSWC is in implementing a reliable form of weed detection that provides (1) appropriate performance;\n(2) class specificity ('weed' and 'crop', or species level) and (3) detection granularity (classification, object detection or segmentation) at the desired speed, in variable conditions and within a specific weed control scenario (e.g., fallow or in-crop).\nIn a fallow weed control scenario, weed detection enables the site-specific control of weeds for soil water and nutrient conservation in rainfall limited cropping systems (Thomas et al., 2007;Verburg et al., 2012). Current SSWC in fallow is enabled by the principle that all green, actively growing plants in a field are weeds. This allows the use of simple photoelectric systems for weed detection such as the WeedIT and WeedSeeker (Peteinatos et al., 2014). The approach generally relies on differential reflection in red and near-infrared spectra between plants and the background (soil and/or crop residues) (Palmer and Owen, 1971). The practice is common in some Australian cropping regions, where reflectance-based systems have been widely adopted (SPAA, 2016). Such systems have been available for commercial use since the late 90s (Shearer and Jones, 1991;Visser and Timmermans, 1996) and have allowed for relatively high performing weed detection (approximately 90%) at speeds up to 20 km h-1 (Timmermann et al., 2003).\nSuitable environmental conditions for the safe and effective application of herbicides are frequently limited, thus from a farmer perspective, ground speed is a critical factor in the timely delivery of weed control treatments. For example, in Australia's summer cropping region, the combination of high temperatures, humidity and inversion layers frequently limit the time available for herbicide application. This time limitation is a considerable incentive for adoption of higher ground speeds for herbicide application (Butts et al., 2021) and larger spray equipment sizes, however, associated increases in equipment cost, weight and breakdowns are substantial limitations. Additionally, higher ground speed during application limits the efficacy of herbicide delivery by influencing droplet characteristics and spray patterns as well as increasing the potential for spray drift (Meyer et al., 2016;Wolf et al., 1997). In large-scale grain production systems, the limited available time to cover the production area necessitates a careful balance between ground speed and herbicide efficacy. Importantly, in a SSWC system, ground speed plays a role in influencing weed detection performance.\nWhile reflectance-based weed detection systems set a benchmark for performance and encouraged interest in the use of SSWC systems, contemporary SSWC research has moved towards the use of computer vision for weed detection (Coleman et al., 2022a;Wang et al., 2019). Initial attempts at real-time weed detection were largely restricted to speeds under 1 km h-1 (Åstrand and Baerveldt, 2002;Lee et al., 1999). These systems used large, slow computers with low resolution cameras. More recently, small form factor single board computers (SBCs) with relatively high computational power have been employed for realtime weed detection. For example, Calvert et al. (2021) used an NVIDIA Jetson with embedded GPU to run a MobileNetv2 deep learning model for realtime detection and control of Harrisia cactus (Harrisia martinii) at approximately 7 km h -1 . Chechliński et al. (2019) employed the more resource constrained Raspberry Pi to deploy a custom U-Net and Mobilenet architecture at a ground speed of 4.6 km h -1 . Current commercial systems report effective performance up to 15 km h -1 (Martin, 2021), however, are still in their infancy and are only suitable for certain applications. Demonstrating high speed image capture potential, image data has been collected on all-terrain vehicles at up to 50 km h -1 , although in this case the intent was simply for data collection without deployment of computationally intensive weed detection algorithms (Laursen et al., 2017). In our initial research on open-source, colour-based weed recognition for the detection of weeds in fallow, ground speed was limited to a walking pace of approximately 4 km h -1 (Coleman et al., 2022b). The approach lacked an analysis of speed and was limited by the default camera settings of the Raspberry Pi HQ camera. Whilst computational speed is a primary consideration, other computational, hardware and agronomical factors also limit maximum ground speed.\nBringing together the sequence of events that determine SSWC performance (both speed and quality), we propose an event timeline for site-specific herbicide application. It sets a framework for system events that must be completed to successfully target and control a weed (Figure 1), incorporating three stages of (1) detection, (2) actuation and (3) delivery. The time required for completion of this event timeline determines the maximum ground speed for the weed control operation. From the moment a weed enters the field of view of the sensor, the weed detection stage begins (e1. 1 -e1.3 in Figure 1). The timing of this step is largely dependent on the camera system, the efficiency of the software and the processing speed of the hardware. Following detection, actuation speed of the relay or transistor after receiving the activation signal combined with the delay in activation of the solenoid (e2.1 -e2.2 in Figure 1). Finally, the movement of herbicide from behind the solenoid through the nozzle, across the gap above the plant and onto the target is the final source of delay (e3.1 -e3.2 in Figure 1). This framework contextualises opportunities for improved operational speed in SSWC systems.\nDespite the importance of vehicle forward speed in application efficacy and timeliness, there is limited quantitative analysis on the impact of ground speed on the performance of weed detection hardware and software for e0 -e1.3. In one of the few efforts at measuring the effects of speed on weed detection, Steward et al. (2002) reported weed detection accuracy dropped from 96% at 3.2 -3.9 km h -1 to 86% at 11 -14 km h -1 ; though, processor and camera performance has advanced substantially in the 20 years since this research. Liu et al. (2021) found the recall of three different deep learning architectures, AlexNet, GoogleNet and VGG-16, all decreased by 9% when increasing speed from 1 km h -1 to 5km h -1 . Moreover, understanding the influence of weed morphotypes (e.g., grass or broadleaf) and increasing ground speed on detection accuracy is important in ensuring high performance image-based weed recognition in all detection scenarios. Processing speed is generally reported in weed recognition research as a measure of algorithm relevance for real-time use in field settings (Hasan et al., 2021). Yet, many other factors such as image blur, shutter speed, dust and wind associated with high-speed movement over a field for weed control application will influence performance. Unfortunately, these practicalities are rarely incorporated into weed recognition research.\nTo better understand the impacts of ground speed on camera-based weed detection from stages e0 -e1.3, our study employed the OpenWeedLocator (OWL) -Figure 1 Overview of key events that must occur from the point a weed enters the field of view of a camera before the herbicide is applied to the plant. The overall time available is determined by the distance between camera and application point and ground speed. The duration of each event is influenced by software and hardware design limitations. Algorithm speed evaluations are typically constrained to e1.2. The present study incorporates stages e0 -e1.3. a low-cost open source weed detection unit that was developed for research and community use (Coleman et al., 2022b). We modified the original OWL system to run comparisons of four camera systems. The aims of this study were to i) practically assess camera and algorithm performance for fallow weed detection as influenced by incremental increases in speeds from 5 km h -1 to 30 km h -1 and ii) assess the influence of weed type, (grass or broadleaf) and growth stage on camera system performance." }, { "figure_ref": [], "heading": "3", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Materials and Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "3.1", "publication_ref": [], "table_ref": [], "text": "Field preparation Six 25 x 1 m test transects were established at the University of Sydney Lansdowne Farm, Cobbitty, NSW, Australia (-34.022115, 150.664842) in April 2022. Weeds were controlled before seeds of forage oats (Avena sativa) and tillage radish (Raphanus sativus) were sown in a random pattern across each transect as representative broadleaf and grass 'weeds'. Seed was hand planted to a depth of approximately 2 cm on a weekly basis for five weeks to establish plants of variable growth stage and size at a target density of approximately 3 plants m -2 . To maintain a simulated fallow environment any post germination of tillage radish, forage oats, non-target weed species and moss were selectively spot-treated with non-selective herbicides or removed manually." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "3.2", "publication_ref": [ "b34" ], "table_ref": [], "text": "Data collection system A custom vehicle-mounted system was developed for simultaneous video data collection from multiple cameras (Figure 2). All mounts and camera housing components were designed using Tinkercad design software (Autodesk, San Francisco CA, 2012) and produced using a 3D printer (i3 MK3; Prusa Research, Prague, Czech Republic). Four camera systems were used (Table 1); V2, HQ1, HQ2 and ARD. Cameras were mounted in line with each other, within a 3D printed housing to allow for consistency of each field of view (FOV) and synchronisation of data collection (Figure 2). Cameras were positioned at a height of 1 m above the ground, and the FOV was checked such that all weeds within the 1 m wide transects were visible. Each camera was connected to a Raspberry Pi 4 8GB (Raspberry Pi Foundation, Cambridge, UK) embedded computer to run the OWL detection software and record video. Power was supplied by a 12 V battery located in the rear of the vehicle and converted to 5V for each Raspberry Pi using a Pololu (Pololu Corporation, Las Vegas, Nevada) D24V50F5 5V, 5A step-down voltage regulator. Recording for each camera/Raspberry Pi combination was manually turned on and off at the start and end of each transect using a switch connected to the GPIO pins of each Raspberry Pi. A 12 V, 90 W LED work light (Stedi, Melbourne, Victoria, Australia) provided additional lighting within the camera FOV.\nAll software used for data collection is available from the OpenWeedLocator GitHub repository:\nhttps://github.com/geezacoleman/OpenWeedLocator.\nSpecific camera details and settings that were modified from default are included in Table 1. Pi and OWL software using the library 'Picamera', whilst the ARD camera required the installation of a modified Raspberry Pi kernel driver and used the 'libcamera' library. The OWL detection software relies on green detection using the excess green (ExG) index (Woebbecke et al., 1995) " }, { "figure_ref": [], "heading": "combined with", "publication_ref": [], "table_ref": [], "text": "Table 1 Camera and software configurations used to evaluate the effect of increasing speed on camera and weed detection algorithm performance for tillage radish (Raphanus sativus) and forage oats (Avena sativa). thresholding in the hue, saturation and value (HSV) colour space to reduce false detections in areas with bright reflections. The details of the algorithm are detailed in Coleman et al. (2022)." }, { "figure_ref": [], "heading": "Camera", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "3.3", "publication_ref": [], "table_ref": [], "text": "Data collection With the four-camera system, video data were collected across the six transects at five different speeds, 5, 10, 15, 20 and 30 km h -1 in May 2022 (Figure 2). During each run, vehicle speed was maintained with the vehicle speedometer, whilst separately a rear-mounted GPS unit (ELEMNT ROAM; Wahoo Fitness, Atlanta, GA, USA) logged and checked true ground speed. Video footage from a GoPro Hero5 (GoPro Inc., San Mateo, California) was used as a high-resolution standard against which to compare detections. Camera lenses were checked for dust and contamination after each run. Technical issues with the Arducam AR0234 camera and the automatic gain adjustment resulted in only the first three transects being recorded correctly, this limitation is considered further in the Discussion.\nTillage radish and forage oats in each transect were counted manually prior to video data collection. The diameters of plants within three randomly allocated 1 m 2 quadrats in each transect were also measured prior to data collection (Figure 3)." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "3.4", "publication_ref": [], "table_ref": [], "text": "Analysing video data Following image data collection, video footage was analyzed on a frame-by-frame basis. Detections (red boxes in Figure 4) were compared with a highresolution video, using custom Python-based software (accessible at: https://github.com/geezacoleman/OpenWeedLocato r/blob/main/video_analysis.py), following the method of Coleman et al. (2022). If a detection was made but no plant was observed, then it was noted as a false positive (Figure 4). In some cases, where small plants other than tillage radish and forage oats had emerged in the field of view, the detection was classified as 'other' or 'moss'. This was not included in the false positive or true positive counts.\nBased on these detection results, performance for each camera, at each speed was determined using the metrics of precision (Eq. 1) and recall (Eq. 2). The total number of weeds present was established from the manual plant counts of each transect prior to video collection. Additionally, recall was calculated on a per species basis. Given the detection algorithm does not include classes, species-wise calculation of precision was not possible." }, { "figure_ref": [], "heading": "Precision =", "publication_ref": [], "table_ref": [], "text": "𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇 𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑇𝑇𝑃𝑃 𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇 𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑇𝑇𝑃𝑃 + 𝐹𝐹𝐹𝐹𝐹𝐹𝑃𝑃𝑇𝑇 𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑇𝑇𝑃𝑃\n(1)\nRecall = 𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇 𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑇𝑇𝑃𝑃 𝑇𝑇𝑃𝑃𝑃𝑃𝐹𝐹𝐹𝐹 𝑤𝑤𝑇𝑇𝑇𝑇𝑤𝑤𝑃𝑃\n(2)" }, { "figure_ref": [], "heading": "3.5", "publication_ref": [ "b8", "b1" ], "table_ref": [], "text": "Blur assessment An estimate of non-referenced, image blur at e1.1 was calculated by analysing changes in the high-frequency components of Fast Fourier Transformed (FFT) images. The high frequency components of an image typically represent fine details such as edges that contribute to the overall clarity of an image. Wholeimage motion blur from a moving vehicle often reduces these high frequency components and results in a lower value overall. The approach is based on a Python implementation using 'numpy' (Harris et al., 2020) and 'OpenCV' (Bradski, 2000) accessible at: https://github.com/geezacoleman/OpenWeedLocato r/blob/0e17c7891f864573c9637bee60197cb41e886a69/ utils/blur_algorithms.py." }, { "figure_ref": [], "heading": "3.6", "publication_ref": [ "b11", "b33" ], "table_ref": [], "text": "Statistical analysis Performance metrics (recall and precision) and blur data were analyzed in R Studio (RStudio Team, 2015). Fisher's protected Least Significance Difference (LSD; α = 0.05) was used for pairwise comparisons of recall for the speed × camera interaction using the 'agricolae' package (de Mendiburu, 2020). Normality and homoscedasticity of the data were confirmed with Shapiro-Wilkes and Bartlett's tests, respectively. Differences in precision were compared using nonparametric Kruskal-Wallis test, followed by the Wilcoxon Rank Sum Test for pair-wise comparisons (α = 0.05), using a Benjamini-Hochberg adjustment. Due to technical camera issues, ARD 20 km h -1 had only two replicates and was removed from pairwise comparisons; however, it was included in regression analyses.\nTwo-factor (camera × class) regression analyses for speed and recall were performed in base R with the lm() function. Comparisons of performance between individual classes were conducted using Analysis of Covariance (ANCOVA) method, controlling for vehicle speed to observe underlying differences with species. Pairwise comparisons were made with the 'emmeans' package using the Bonferroni adjustment for cameras at each class level and classes at each camera level. Prior to analysis interactions between groups were checked with a Type II ANOVA. The normality of residuals was confirmed with the Shapiro test (P > 0.05) from the 'rstatix' package (Kassambara, 2023). Homogeneity of residuals was assessed with the Levene test (P > 0.05). Data were visualized and all figures were created using 'ggplot2' (Wickham, 2016)." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "4.1", "publication_ref": [], "table_ref": [], "text": "Speed and performance ARD had the highest (P < 0.05) recall of all camera systems tested (Table 2), with up to 95.7% of weeds recalled at 5 km h -1 and 85.7% at 30 km h -1 . Although Table 2 Summary of camera performance at each of the five speeds tested. Letters indicating significant differences for recall using Fisher's protected least significance difference (LSD; α = 0.05) are presented. The non-parametric pairwise Wilcoxon Rank Sum Test was performed to compare means for precision using a Benjamini-Hochberg adjustment (α = 0.05). The top performing recall for each speed is indicated with red text.\nꝉ ARD at 20 km h -1 was excluded from pairwise means comparisons due to its low sample size from technical issues with the camera. there was an apparent decline in ARD performance with increasing speed, there were no differences (P < 0.05) in recall between the lowest and highest speeds (Figure 5). The lowest (P < 0.05) recall results were recorded for the HQ1 and V2 cameras at 30 km h -1 and 15 -30 km h -1 speeds, respectively. Decreases in recall (P < 0.05) were observed when speed increased from 5 -30 km h -1 for HQ1, HQ2 and V2. The largest decrease of 25.7% was observed for HQ1. The same camera with updated software (HQ2) declined by a similar 24.3%, though performance was between 18.0 -19.5% higher (P < 0.05) at all speeds." }, { "figure_ref": [], "heading": "Camera", "publication_ref": [], "table_ref": [], "text": "HQ1, HQ2 and V2 had 100% precision for all speeds tested, dipping to 99.7% for HQ2 at 20 km h -1 and HQ1 at 5 km h -1 . ARD recorded the lowest precision (P < 0.05), of 95.9% at 5 km h -1 and 98.3% at 10 km h -1 .\nThe effect of increasing speed on recall for the ARD camera system was not linear for oats (P = 0.12) or tillage radish (P = 0.18), though a negative trend was apparent. There were negative linear relationships (P < 0.05) for speed and recall with the HQ1, HQ2 and V2 systems for both forage oats and tillage radish (Figure 5). The steepest slope was observed with the HQ1 camera system with 1.12% reduction in recall per km h -1 for tillage radish and 0.90% reduction per km h -1 for forage oats (Table 3).\nA qualitative inspection of individual camera frames, (Figure 6) illustrates the change in image detection quality for forage oats and tillage radish over different speeds. The same plants for forage oats and tillage radish are observed in each frame shown. During e1.1, whole image motion blur appears to increase along with a greater wind effect from vehicle movement. This increases plant movement and dust around the camera, resulting in smaller detection sizes. With entirely default settings, the exposure of HQ1 is long and the images are overexposed." }, { "figure_ref": [], "heading": "4.2", "publication_ref": [], "table_ref": [], "text": "Influence of plant species on performance The broadleaved tillage radish was consistently detected more successfully (P < 0.05) than forage oats by all cameras (Figure 7). ARD consistently outperformed the other camera systems tested, with a mean recall of 95.8% for tillage radish and 83.7% for forage oats. Additionally, the 12.1% decrease in recall between tillage radish and forage oats by the ARD camera system was much less than the reductions in recall of 26.8%, 38.7% and 30.9% for HQ1, HQ2 and V2, respectively." }, { "figure_ref": [], "heading": "4.3", "publication_ref": [], "table_ref": [], "text": "Image blur The reported FFT blur value is a measure of the proportion of high frequencies components (e.g., edges and high contrast regions) within a Fourier-transformed image, whereby blur results in fewer high frequencies present. It separates algorithm performance at e1.2 from hardware effects at e1.1 (Figure 1). Speed had a comparatively reduced impact on FFT-measured blur for ARD camera system, with a non-significant (P = 0.055) relationship. The negative relationship between speed and FFT-based blur for HQ1, HQ2 and V2 camera systems confirms that speed resulted in increasingly blurry images (Figure 8). For the ARD camera system, there was a strong positive (R = 0.85; P < 0.01) linear relationship between image blur and recall of forage oats; however, there was not a similar relationship (P=0.083) for tillage radish (Figure 9), where recall of the broadleaf plant remained high. A significant positive correlation (R = 0.54; P < 0.01) was also found for HQ1 and tillage radish recall." }, { "figure_ref": [], "heading": "5", "publication_ref": [], "table_ref": [], "text": "Discussion The ARD camera system outperformed the other three camera systems tested in maintaining high weed detection performance across all working speeds used in this evaluation. The recall of ARD of 95.7% of weeds at 5 km h -1 and 85.7% at 30 km h -1 was at least 20% higher than those of the other camera systems (Table 2). Importantly, the high recall was achieved whilst maintaining precision above 95.9%, up to 99.6% at 30 km h -1 . The increase in precision with speed, suggests the ARD is more sensitive to false positives at low speeds. The lower native resolution Figure 7 Comparison of camera recall of forage oats (Avena sativa) and tillage radish (Raphanus sativus);. A two-way analysis of covariance (ANCOVA) with emmeans was used to control for speed to determine effects of plant class effects at a camera level. Controlling for speed all cameras were capable of higher recall on tillage radish than forage oats. Tests were conducted with Bonferroni adjustment and significance at P < 0.0125." }, { "figure_ref": [], "heading": "Figure 8", "publication_ref": [ "b25", "b14", "b10", "b2", "b9" ], "table_ref": [], "text": "Using fast-Fourier transform (FFT)-based analysis, blur reduces the proportion of high frequency components (e.g., edges) within an image, thus a higher FFT blur value indicates a less blurry image. There was no significant (P = 0.055) correlation between speed and based blur analysis for ARD. Negative correlations (P < 0.05) were observed for HQ2, HQ1 and V2 camera systems, indicating speed resulted in increased blur. Pearson's correlation coefficient was used for analysis.\nFigure 9 Investigating the relationship between image blur, as measured using Fast-Fourier Transform (FFT)-based frequency analysis, and recall for forage oats and tillage radish. Significant positive correlations were observed for ARD and forage oat recall, and HQ1 and tillage radish recall. Higher FTT blur values indicate less blurry images, thus a positive relationship suggests blur is impacting detection performance, however, existing low performance (in the case of HQ1 and V2) would obscure correlations, where performance is unlikely to decrease further.\nbut global shutter ARD has the clearest images (Figure 6), with a substantially larger pixel area of 9 µm 2 compared to the other camera hardware tested. The larger pixel area acts as a greater 'catchment area' for photons, increasing the signal to the sensor. Thus, it can often improve the dynamic range of the camera, with each pixel capable of storing more charge, reducing clipping across brightness levels. Colour accuracy is also improved with larger pixel sizes resulting in a higher signal-to-noise ratio. These factors are likely a contributing factor to the high performance of ARD over the other systems tested.\nWhilst similar research targeting the combined e1.1 -1.3 stages is limited, the 1.11 % decrease in recall reported in Steward et al. (2002) per km h -1 increase in speed is substantially greater than a suggested 0.4% decrease in recall per km h -1 for ARD between 5 -30 km h -1 found in this study. The decline here, is also considerably lower than that of Liu et al. (2021), who reported a decrease by 2.25% decrease per km h -1 when increasing speed up to 5km h -1 , even with the use of more advanced deep learning, image classification algorithms. In that study, the authors used low cost 'webcams' (Aluratek AWC01F) and found that image blur was substantial above 3 km h - 1 . Whilst image augmentation was included in the training pipeline, the training dataset was collected while stationary with high quality cameras, limiting model exposure to blur and likely reducing generalizability (Hu et al., 2021).\nThe performance of HQ1 and HQ2 are consistent with the results reported in our previous work (Coleman et al., 2022b), whilst the maximum speed tested was increased from 4km h -1 up to 30 km h -1 , a more realistic range of speeds for large-scale weed control systems (Butts et al., 2021). Comparing the default settings on the Raspberry Pi HQ camera (HQ1) with the modifications made to the camera settings and image processing code in the OWL repository (HQ2), the HQ2 camera consistently outperformed HQ1, even with the lower resolution. Software changes between HQ1 and HQ2 were made to improve the efficiency of the system after identifying unnecessarily high exposure levels in default settings and inefficient methods of post-detection management. The higher resolution of the HQ1 camera resulted in fewer frames collected per transect and the missing of key weeds not just by the detection algorithm but by complete lack of capture in the recorded videos, particularly at faster travel speeds. The higher default exposure/brightness levels of the HQ1 camera are also more adversely impacted by blur, as the camera prioritises higher brightness images by lengthening exposure times. The HQ2 is now the current standard camera and software with the OWL system. Whilst the V2 camera has the same software-based settings as the HQ2, the image is of a lower quality, likely linked to the smaller sensor size (1.25 µm 2 vs 2.40 µm 2 pixel size respectively). With many machine vision applications using down-sampled images from the native resolution of the camera to reduce processing time (Hasan et al., 2021), pixel size, rather than native sensor resolution is likely the most important factor to improving weed detection performance, given the high performance of ARD.\nImportantly, the results here underscore the strong and differential impact of speed on the image-based detection of two species with different growth habits, namely broadleaved tillage radish and the 'grassy' forage oats. Controlling for speed, there was a 38.7% difference in performance for HQ2 between the broadleaf tillage radish and the graminaceous forage oats. Observing individual frames from videos in Figure 6, the thinner leaves of forage oats are impacted more by blur and plant movement than the broad, lobed leaves of the tillage radish. This is confirmed by the strong correlation between forage oats recall and image blur for the ARD camera. With differential performance based on plant species, there is a risk that such vision-based systems may unintentionally alter the in-field distributions of weed species, by preferentially removing one over others. Increasing speed of the weed detection device may result in uneven and unexpected declines in performance. Combined with detection size analysis, it appears that small detections will be most likely missed at higher speeds, with larger weeds continuing to be found based on larger central green masses." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Our study has shown that it is possible to reliably detect weeds in fallow at speeds of up to 30 km h -1 using colour based algorithms and relatively low-cost hardware. The negative relationship between speed and weed detection performance for three of the four camera systems tested in this study highlights that camera system selection should be a top priority in the development of new SSWC systems. The impact of camera system and speed on detection performance is amplified when we consider the differences in detection performance for broadleaf (more easily detected, less affected by speed) vs. grass weeds (less easily detected, more affected by speed). It is likely that larger pixel area will provide advantages over cameras with smaller pixel areas, even if the overall sensor has a lower native resolution, due to benefits in dynamic range and signal-to-noise ratio for colour fidelity. Whilst the results presented indicate that digital camera systems can now perform at the speeds required by industry, it is important to remember that weed detection by the camera system is the first step in a sequence of events required to achieve appropriate weed control with camera based SSWC systems (outlined in Figure 1). The next step in understanding how to reliably achieve appropriate real time weed control performance is investigation of complete system performance, from the weed entering the camera field of view to completion of weed control, under a variety of conditions (e.g., day, night, sunny, overcast). For operation at the speeds tested here, this will require fast signaling from computer systems, fast actuation of solenoids and fast delivery of herbicide from spray nozzle to the weed." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported by the Grains Research and Development Corporation grant Innovative crop weed control for northern region cropping systems (US00084)." }, { "figure_ref": [], "heading": "Declaration of Competing Interest", "publication_ref": [], "table_ref": [], "text": "The authors declare that they do not have any commercial or associative interest that represents a conflict of interest in connection with the work submitted." } ]
Site-specific weed control (SSWC) can provide considerable reductions in weed control costs and herbicide usage. Despite the promise of machine vision for SSWC systems and the importance of ground speed in weed control efficacy, there has been little investigation of the role of ground speed and camera characteristics on weed detection performance. Here, we compare the performance of four camera/software combinations using the open-source OpenWeedLocator platform -(1) default settings on a Raspberry Pi HQ camera; (2) optimised software settings on a HQ camera, (3) optimised software settings on the Raspberry Pi v2 camera; and (4) a global shutter Arducam AR0234 camera -at speeds ranging from 5 km h-1 to 30 km h-1. A combined excess green (ExG) and hue, saturation, value (HSV) thresholding algorithm was used for testing under fallow conditions using tillage radish (Raphanus sativus) and forage oats (Avena sativa) as representative broadleaf and grass weeds, respectively. ARD demonstrated the highest recall among camera systems, with up to 95.7% of weeds detected at 5 km h - 1 and 85.7% at 30 km h -1 . HQ1 and V2 cameras had the lowest recall of 31.1% and 26.0% at 30 km h -1 , respectively. All cameras experienced a decrease in recall as speed increased. The highest rate of decrease was observed for HQ1 with 1.12% and 0.90% reductions in recall for every km h -1 increase in speed for tillage radish and forage oats, respectively. Detection of the 'grassy' forage oats was worse (P<0.05) than the broadleaved tillage radish for all cameras. Despite the variations in recall, HQ1, HQ2, and V2 maintained near-perfect precision at all tested speeds. The variable effect of ground speed and camera system on detection performance of grass and broadleaf weeds, indicates that careful hardware and software considerations must be made when developing SSWC systems.
Investigating image-based fallow weed detection performance on R aphanus sativus and A vena sativa at speeds up to 30 km h -1
[ { "figure_caption": "Figure 22Figure2Aerial view of the field site showing the six, 25 × 1 m transects (left) and the four cameras mounted to the data collection system on the back of the test vehicle (right). Data for each camera were collected simultaneously at each speed and transect to ensure consistency in environmental conditions such as lighting and speed. The vehicle was run at set speeds using a speedometer and GPS-based speed checker over each transect whilst cameras were recording.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 Box and whisker plots of measured diameters of forage oat (n = 43) and tillage radish (n = 47) plants sampled within three randomly assigned 1 m 2 quadrats from each of the six transects.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4The custom, frame-by-frame analysis tool developed in Python used to determine true and false positive weed detection rates for each video collected. Red squares indicate a detection. If needed, these were compared with a high-resolution video to determine if a weed is present in the field of view of the camera.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 Table 3 Figure 6536Figure 5 Regression analysis for the influence of speed on recall by camera (ARD, HQ2, HQ1, and V2), and class; forage oats (Avena sativa; left) and tillage radish (Raphanus sativus; right). Differences were found for the slope (P < 0.05) and intercept (P < 0.05) for different camera-software combinations. Error bars with standard errors of the mean (n[HQ1, HQ2, V2]=6; n[ARD:5-15, 30km h -1 ]=3; n[ARD:20km h -1 ]=2) are included. Equation: y = recall ~ speed * camera * class.", "figure_data": "", "figure_id": "fig_4", "figure_label": "536", "figure_type": "figure" } ]
Guy R Y Coleman; Angus Macintyre; Michael J Walsh; William T Salter
[ { "authors": "B Åstrand; A J Baerveldt", "journal": "Auton Robots", "ref_id": "b0", "title": "An agricultural mobile robot with vision-based perception for mechanical weed control", "year": "2002" }, { "authors": "G Bradski", "journal": "Dr Dobbs J Softw Tools", "ref_id": "b1", "title": "The OpenCV Library", "year": "2000" }, { "authors": "T R Butts; L T Barber; J K Norsworthy; J Davis", "journal": "Weed Technol", "ref_id": "b2", "title": "Survey of ground and aerial herbicide application practices in Arkansas agronomic crops", "year": "2021" }, { "authors": "B Calvert; A Olsen; J Whinney; M R Azghadi", "journal": "Plants", "ref_id": "b3", "title": "Robotic spot spraying of Harrisia cactus (Harrisia martinii) in grazing pastures of the Australian rangelands", "year": "2021" }, { "authors": "Ł Chechliński; B Siemiątkowska; M Majewski", "journal": "Sensors", "ref_id": "b4", "title": "A system for weeds and crops identification-reaching over 10 fps on Raspberry Pi with the usage of MobileNets, DenseNet and custom modifications", "year": "2019" }, { "authors": "Gry Coleman; A Bender; K Hu; S M Sharpe; A W Schumann; Z Wang", "journal": "Weed Technol", "ref_id": "b5", "title": "Weed detection to weed recognition: reviewing 50 years of research to identify constraints and opportunities for large-scale cropping systems", "year": "2022" }, { "authors": "Gry Coleman; W Salter; M Walsh", "journal": "Sci Rep", "ref_id": "b6", "title": "OpenWeedLocator (OWL): an open-source, low-cost device for fallow weed detection", "year": "2022" }, { "authors": "Gry Coleman; A Stead; M P Rigter; Z Xu; D Johnson; G M Brooker", "journal": "Weed Technol", "ref_id": "b7", "title": "Using energy requirements to compare the suitability of alternative methods for broadcast and site-specific weed control", "year": "2019" }, { "authors": "C R Harris; K J Millman; Sjvd Walt; R Gommers; P Virtanen; D Cournapeau", "journal": "Nature", "ref_id": "b8", "title": "Array programming with NumPy", "year": "2020" }, { "authors": "Asmm Hasan; F Sohel; D Diepeveen; H Laga; Mgk Jones", "journal": "Comput Electron Agric", "ref_id": "b9", "title": "A survey of deep learning techniques for weed detection from images", "year": "2021" }, { "authors": "C Hu; B B Sapkota; J A Thomasson; M V Bagavathiannan", "journal": "Remote Sens", "ref_id": "b10", "title": "Influence of image quality and light consistency on the performance of convolutional neural networks for weed mapping", "year": "2021" }, { "authors": "A Kassambara", "journal": "", "ref_id": "b11", "title": "rstatix: Pipe-Friendly Framework for Basic Statistical Tests", "year": "2023" }, { "authors": "M S Laursen; R N Jørgensen; M Dyrmann; R Poulsen", "journal": "Int J Agric Biosyst Eng", "ref_id": "b12", "title": "RoboWeedSupport -Sub Millimeter Weed Image Acquisition in Cereal Crops with Speeds up till 50 Km/h", "year": "2017" }, { "authors": "W S Lee; D C Slaughter; D K Giles", "journal": "Precis Agric", "ref_id": "b13", "title": "Robotic weed control system for tomatoes", "year": "1999" }, { "authors": "J Liu; I Abbas; R S Noor", "journal": "Agronomy", "ref_id": "b14", "title": "Development of Deep Learning-Based Variable Rate Agrochemical Spraying System for Targeted Weeds Control in Strawberry Crop", "year": "2021" }, { "authors": "F Lopez-Granados", "journal": "Weed Res", "ref_id": "b15", "title": "Weed detection for site-specific weed management: mapping and real-time approaches", "year": "2011" }, { "authors": "S Martin", "journal": "NVIDIA Blog", "ref_id": "b16", "title": "Harvesting AI: Startup's weed recognition for herbicides grows yield for farmers", "year": "2021-04-15" }, { "authors": "F De Mendiburu", "journal": "", "ref_id": "b17", "title": "agricolae: Statistical Procedures for Agricultural Research", "year": "2020" }, { "authors": "C J Meyer; J K Norsworthy; G R Kruger; T Barber", "journal": "Weed Technol", "ref_id": "b18", "title": "Effects of Nozzle Selection and Ground Speed on Efficacy of Liberty and Engenia Applications and Their Implication on Commercial Field Applications", "year": "2016" }, { "authors": "D A Mortensen; G A Johnson; D Y Wyse; A R Martin", "journal": "Site-Specif. Manag. Agric. Syst", "ref_id": "b19", "title": "Managing Spatially Variable Weed Populations", "year": "1995" }, { "authors": "J Palmer; G M Owen", "journal": "J Agric Eng Res", "ref_id": "b20", "title": "Automatic control of sugar beet singling and thinning by means of an on-line digital computer", "year": "1971" }, { "authors": "G G Peteinatos; M Weis; D Andújar; Rueda Ayala; V Gerhards; R ", "journal": "Pest Manag Sci", "ref_id": "b21", "title": "Potential use of ground-based sensor technologies for weed detection", "year": "2014" }, { "authors": " ", "journal": "", "ref_id": "b22", "title": "RStudio: Integrated Development Environment for R", "year": "2015" }, { "authors": "S A Shearer; P T Jones", "journal": "Trans ASAE", "ref_id": "b23", "title": "Selective application of postemergence herbicides using photoelectrics", "year": "1991" }, { "authors": "", "journal": "Society of Precision Agriculture Australia", "ref_id": "b24", "title": "SPAA. SPAA Precision Ag Fact Sheet: Weed Sensing", "year": "2016" }, { "authors": "B L Steward; L F Tian; L Tang", "journal": "Trans ASAE", "ref_id": "b25", "title": "Distance-based control system for machine vision-based selective spraying", "year": "2002" }, { "authors": "G A Thomas; G W Titmarsh; D M Freebairn; B J Radford", "journal": "", "ref_id": "b26", "title": "No-tillage and conservation farming practices in grain growing areas of Queensland -a review of 40 years of development", "year": "2007" }, { "authors": "J F Thompson; J V Stafford; Pchh Miller", "journal": "Crop Prot", "ref_id": "b27", "title": "Potential for automatic weed detection and selective herbicide application", "year": "1991" }, { "authors": "C Timmermann; R Gerhards; W Kühbauch; W Kuhbauch; W Kühbauch", "journal": "Precis Agric", "ref_id": "b28", "title": "The economic impact of sitespecific weed control", "year": "2003" }, { "authors": "K Verburg; W J Bond; J R Hunt", "journal": "Field Crops Res", "ref_id": "b29", "title": "Fallow management in dryland agriculture : Explaining soil water accumulation using a pulse paradigm", "year": "2012" }, { "authors": "R Visser; A Timmermans", "journal": "Opt. Agric. For. Biol. Process", "ref_id": "b30", "title": "Weed-It: a new selective weed control system", "year": "1996" }, { "authors": "M J Walsh; C C Squires; Gry Coleman; M J Widderick; A B Mckiernan; B S Chauhan", "journal": "Weed Technol", "ref_id": "b31", "title": "Tillage based, sitespecific weed control for conservation cropping systems", "year": "2020" }, { "authors": "A Wang; W Zhang; X Wei", "journal": "Comput Electron Agric", "ref_id": "b32", "title": "A review on weed detection using ground-based machine vision and image processing techniques", "year": "2019" }, { "authors": "H Wickham", "journal": "Springer-Verlag", "ref_id": "b33", "title": "ggplot2: Elegant Graphics for Data Analysis", "year": "2016" }, { "authors": "D Woebbecke; G Meyer; Von Bargen; K Mortensen; D ", "journal": "Trans Am Soc Agric Eng", "ref_id": "b34", "title": "Color indices for weed identification under various soil, residue, and lighting conditions", "year": "1995" }, { "authors": "T M Wolf; S H Liu; B C Caldwell; A I Hsiao", "journal": "Weed Technol", "ref_id": "b35", "title": "Calibration of Greenhouse Spray Chambers: The Importance of Dynamic Nozzle Patternation", "year": "1997" } ]
[ { "formula_coordinates": [ 6, 330.42, 369.54, 90.57, 18.58 ], "formula_id": "formula_0", "formula_text": "Recall = 𝑇𝑇𝑇𝑇𝑇𝑇𝑇𝑇 𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑃𝑇𝑇𝑃𝑃 𝑇𝑇𝑃𝑃𝑃𝑃𝐹𝐹𝐹𝐹 𝑤𝑤𝑇𝑇𝑇𝑇𝑤𝑤𝑃𝑃" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b72", "b21", "b64", "b67", "b64", "b72", "b72", "b23", "b46", "b5", "b72", "b21", "b58", "b13", "b52", "b0", "b27", "b50", "b76" ], "table_ref": [], "text": "Given a series of calibrated images from different views in one scene, Multi-view Stereo (MVS) aims to recover the 3D information of the observed scene. It is a fundamental problem in computer vision and widely applied to robot navigation, autonomous driving, augmented reality, and etc. Recent learning-based MVS networks [Yao et al., 2018;Gu et al., 2020;Wang et al., 2021b] have achieved inspiring success both in the quality and the efficiency of 3D reconstruction. Generally, deep MVS approaches consist of the following five steps: feature extraction from multi-view images via CNN network with shared weights, differentiable warping to align all source features to the reference view, matching cost computation from reference features and aligned source features, matching cost aggregation or regularization, depth or disparity regression. Current progresses in learning-based MVS primarily concentrate on the limitation of reconstruction quality [Wei et al., 2021;Yang et al., 2020a], memory consumption [Yan et al., 2020;Wei et al., 2021], and efficiency [Wang et al., 2021b;Wang et al., 2021a]. The basic network architecture of these works is based on the pioneering backbone network called MVSNet [Yao et al., 2018], which provides an elegant and stable baseline. However, instead of taking the inheritance of network design principle in MVSNet [Yao et al., 2018] for granted, we can rethink the task of MVS problem as a dense correspondence problem [Hosni et al., 2012] alternatively. The core of MVS is a dense pixelwise correspondence estimation problem that searches the corresponding pixel of a specific pixel in the reference image along the epipolar line in all warped source images. No matter which task this correspondence estimation problem is applied to, the matching task can be boiled down to a classical matching pipeline [Scharstein and Szeliski, 2002]: (1) feature extraction, and (2) cost aggregation. In learning-based MVS methods, the transition from traditional hand-crafted features to CNN-based features inherently solves the former step of the classical matching pipeline via providing powerful feature representation learned from large-scale data. However, handling the cost aggregation step by matching similarities between features without any prior usually suffers from the challenges due to ambiguities generated by repetitive patterns or background clutters [Cho et al., 2021]. Consequently, a typical solution in MVSNet and its variants [Yao et al., 2018;Gu et al., 2020;Wang et al., 2021b] is to apply a 3D CNN or an RNN to reg-ularize the cost volume among reference and source views, rather than directly rely on the quality of the initial correlation clues in cost volume. Although formulated variously in previous methods, these methods either use hand-crafted techniques that are agnostic to severe deformations or inherit the limitation of CNNs, e.g. limited receptive fields, unable to discriminate incorrect matches that are locally consistent.\nIn this work, we focus on the cost aggregation step of cost volume and propose a novel cost aggregation Transformer (CostFormer) to tackle the issues above. Our CostFormer is based on Transformer [Vaswani et al., 2017], which is renowned for its global receptive field and long-range dependent representation. By aggregating the matching cost in the cost volume, our aggregation network can explore global correspondences and refine the ambiguous matching points effectively with the help of the self-attention (SA) mechanism in Transformer. Though the promising performances of Vision Transformers have been proven in many applications [Dosovitskiy et al., 2020;Sun et al., 2021], the time and memory complexity of the key-query dot product interaction in conventional SA grow quadratically with the spatial resolution of inputs. Hence, replacing 3D CNN with Transformer may result in unexpected extra occupancy in memory and latency in inference. Inspired by [Wang et al., 2021b], we further introduce the Transformer architecture into an iterative multi-scale learnable PatchMatch pipeline. It inherits the advantages of the long-range receptive field in Transformers, improving the reconstruction performance substantially. Meantime, it also maintains a balanced trade-off between efficiency and performance, which is competitive in the inference speed and parameters magnitude compared with other methods.\nOur main contributions are as follows:\n(1) In this paper, we propose a novel Transformer-based cost aggregation network called CostFormer, which can be plugged into learning-based MVS methods to improve cost volume effectively. (2) CostFormer applies an efficient Residual Depth-Aware Cost Transformer to cost volume, extending 2D spatial attention to 3D depth and spatial attention. (3) CostFormer applies an efficient Residual Regression Transformer between cost aggregation and depth regression, keeping spatial attention. (4) The proposed CostFormer brings benefits to learning-based MVS methods when evaluating DTU [Aanaes et al., 2016], Tanks & Temples [Knapitsch et al., 2017] ETH3D [Schöps et al., 2017] and BlendedMVS [Yao et al., 2020] datasets." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Learning-based MVS Methods", "publication_ref": [ "b72", "b64", "b21", "b3" ], "table_ref": [], "text": "Powered by the great success of deep learning-based techniques, many learning-based methods have been proposed to boost the performance of Multi-view Stereo. MVSNet [Yao et al., 2018] is a landmark for the end-to-end network that infers the depth map on each reference view for the MVS task. Feature maps extracted by a 2D CNN on each view are reprojected to the same reference view to build a variance-based cost volume. A 3D CNN is further used to regress the depth map. Following this pioneering work, lots of efforts have been devoted to boosting speed and reducing memory occupation. To relieve the burden of huge memory cost, recurrent neural networks are utilized to regularize the cost volume in AA-RMVSNet [Wei et al., 2021]. Following a coarse-to-fine manner to develop a computationally efficient network, a recent strand of works divide the single cost volume into several cost volumes at multiple stages, like CasMVSNet [Gu et al., 2020], CVP-MVSNet [Yang et al., 2020a], UCSNet [Cheng et al., 2020], and etc. Inspired by the traditional PatchMatch stereo algorithm, PatchMatchNet [Wang et al., 2021b] inherits the pipeline in PatchMatch stereo in an iterative manner and extend it into a learning-based end-to-end network." }, { "figure_ref": [], "heading": "Vision Transformer", "publication_ref": [ "b58", "b13", "b35", "b13", "b31", "b35", "b5", "b52", "b31", "b9" ], "table_ref": [], "text": "The success of Transformer [Vaswani et al., 2017] and its variants [Dosovitskiy et al., 2020;Liu et al., 2021] have motivated the development of Neural Language Processing in recent years. Borrowing inspiration from these works, Transformer has been successfully extended to vision tasks and proven to boost the performance of image classification [Dosovitskiy et al., 2020]. Following the pioneering work, many efforts are devoted to boosting the development of various vision tasks with the powerful representation ability of Transformer.\nIn [Li et al., 2021], the application of Transformer in the classic stereo disparity estimation task is investigated thoughtfully. Swin Transformer [Liu et al., 2021] involves the hierarchical structure into Vision Transformers and computes the representation with shifted windows. Considering Transformer's superiority in extracting global content information via attention mechanism, many works attempt to utilize it in the task of feature matching. Given a pair of images, CATs [Cho et al., 2021] explore global consensus among correlation maps extracted from a Transformer, which can fully leverage the self-attention mechanism and model long-range dependencies among pixels. LoFTR [Sun et al., 2021] also leverages Transformers with a coarse-to-fine manner to model dense correspondence. STTR [Li et al., 2021] extends the feature matching Transformer architecture to the task of stereo depth estimation task in a sequence-to-sequence matching perspective. TransMVSNet [Ding et al., 2021] is the most relevant concurrent work compared with ours, which utilizes a Feature Matching Transformer (FMT) to leverage self-attention and cross-attention to aggregate long-range context information within and across images. Specifically, the focus of TransMVSNet is on the enhancement of feature extraction before cost aggregation, while our proposed Cost-Former aims to improve the cost aggregation process on cost volume." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the detailed architecture of the proposed CostFormer which focuses on the cost aggregation step of cost volume. CostFormer contains two specially designed modules called Residual-Depth Aware Cost Transformer (RDACT) and Residual Regression Transformer (RRT), which are utilized to explore the relation between pixels within a long range and the relation between different depth hypotheses during the evaluation process. In Section Preliminary, we give a brief preliminary on the pipeline of our method. Then we show the construction of RDACT and RRT respectively. Finally, we show experiments." }, { "figure_ref": [ "fig_1" ], "heading": "Preliminary", "publication_ref": [], "table_ref": [], "text": "In general, the proposed RDACT and RRT can be integrated with arbitrary cost volume of learning-based MVS networks.\nBased on the patch match architecture [Wang et al., 2021b], we further explore the issue of cost aggregation on cost volume. As shown in Figure 2, CostFormer based on Patch-MatchNet [Wang et al., 2021b] extracts feature maps from multi-view images and performs initialization and propagation to warp the features maps in source views to reference view. Given a pixel p at the reference view and its corresponding pixel p i,j at the i-th source view under the j-th depth hypothesis d j is defined as:\np i,j = K i • (R 0,i • (K -1 0 • p • d j ) + t 0,i )(1)\nwhere R 0,i and t 0,i denote the rotation and translation between the reference view and i-th source view. K 0 and K i are the intrinsic matrices of the reference and i-th source view.\nThe warped feature maps at the i-th source view F i (p i,j ) are bilinearly interpolated to remain the original resolution. Then, a cost volume is constructed from the similarity of feature maps, and 3D CNNs are applied to regularize the cost volume. Warped features from all source views are integrated into a single cost for each pixel p and depth hypothesis d j by computing the cost per hypothesis S i (p, j) g via group-wise correction as follows:\nS i (p, j) g = G C < F 0 (p) g , F i (p i,j ) g >∈ R G (2)\nwhere G is the group number, C is the channel number, < •, • > is the inner product, F 0 (p) g and F i (p i,j ) g are grouped reference feature map and grouped source feature map at the i-th view respectively. Then they aggregate over the views with a pixel-wise view weight w i (p) to get S(p, j).\nTaking no account of Transformer at the cost aggregation (CA) step, a CA module firstly utilizes a small network with 3D convolution with 1×1×1 kernels to obtain a single cost, C ∈ R H×W ×D . For a spatial window of K e pixels {p k } Ke k=1 can be organized as a grid, per pixel additional offsets {∆p k } Ke k=1 can be learned for spatial adaptation. The aggregated spatial cost C(p, j) is defined as:\nC(p, j) = 1 Ke k=1 w k d k Ke k=1 w k d k C(p + p k + ∆p k , j) (3)\nwhere w k and d k weight the cost C based on feature and depth similarity. Given the sampling positions (p + p k + ∆p k ) Ke k=1 , corresponding features from F 0 are extracted via bilinear interpolation. Then group-wise correlation is applied between the features at each sampling location and p. The results are concatenated into a volume on which 3D convolution layers with 1×1×1 kernels and sigmoid non-linearities are applied to output normalized weights {w k } Ke k=1 . The absolute difference in inverse depth between each sampling point and pixel p with their j-th hypotheses are collected. Then a sigmoid function on the inverted differences is applied to obtain {d k } Ke k=1 . The remarkable thing is that such cost aggregation inevitably suffers from challenges due to ambiguities generated by repetitive patterns or background clutters. The local mechanisms in ambiguities exist in many operations, such as local propagation and spatial adaptation by small learnable slight offset. CostFormer significantly alleviates these problems through RDACT and RRT. The original CA module is also repositioned between RDACT and RRT.\nAfter RRT, soft argmin is applied to get the regressed depth. Finally, a depth refinement module is designed to refine the depth regression.\nFor CascadeMVS and other cascade architectures, Cost-Former can be plugged into similarly." }, { "figure_ref": [], "heading": "Residual Depth-Aware Cost Transformer", "publication_ref": [], "table_ref": [], "text": "In this section, we explore the details of the Residual Depth-Aware Cost Transformer (RDACT). Each RDACT consists of two parts. The first part is a stack of Depth-Aware Transformer layer (DATL) and Depth-Aware Shifted Transformer layer (DASTL), which deal with the cost volumes to explore the relations sufficiently. The second part is the Re- \nGround Truth Ours PatchmatchNet UCSNet\nC k = DASTL k (DATL k (C k-1 )), k = 1, 2, ..., L(4)\nwhere DATL k is the k-th Depth-Aware Transformer layer with regular windows, DASTL k is the k-th Depth-Aware Transformer layer with shifted windows, E is the embedding dimension number of DATL k and DASTL k . Then a Re-Embedding Cost layer is applied to the last C k , namely C L , to recover G from E. The output of RDACT is formulated as:\nC out = REC(C L ) + C 0 (5\n)\nwhere REC is the Re-Embedding Cost layer, and it can be a 3D convolution with G output channels. If E = G, C out can be simply formulated as: Depth-Aware Self-Attention Mechanism: For a cost window token X ∈ R hs×ws×ds×G , the query, key, and value matrices Q, K and V ∈ R hs×ws×ds×G are computed as:\nC out = C L + C 0(\nQ = XP Q , K = XP K , V = XP V(7)\nwhere P Q , P K , and P V ∈ R G×G are projection matrices shared across different windows. By introducing depth and spatial aware relative position bias B1 ∈ R (hs×hs)×(ws×ws)×(ds×ds) for each head, the depth-aware self-attention(DA-SA1) matrix within a 3D local window is thus computed as:\nDA-SA1 = Attention1(Q1, K1, V 1) = Sof tM ax( Q1K1 T √ G + B1)V 1 (8)\nWhere Q1, K1 and V 1 ∈ R hswsds×G are reshaped from Q, K and V ∈ R hs×ws×ds×G . The process of DATL with LayerNorm(LN) and multi-head DA-SA1 at the current level is formulated as:\nX l = DA-MSA1((LN(X l-1 )) + X l-1(9)\nBy introducing depth-aware relative position bias B2 ∈ R ds×ds for each head, the depth-aware self-attention(DA-SA2) matrix along the depth dimension is an alternative module to DATL and thus computed as:\nDA-SA2 = Attention2(Q2, K2, V 2) = Sof tM ax( Q2K2 T √ G + B2)V 2 (10)\nWhere Q2, K2 and V 2 ∈ R hsws×ds×G are reshaped from Q, K and V ∈ R hs×ws×ds×G . B1 and B2 will be along the depth dimension and lie in the range of\n[-d s + 1, d s -1].\nAlong the height and width dimension, B1 lies in the range of [-h s + 1, h s -1] and [-w s + 1, w s -1]. In practice, we parameterize a smaller-sized bias matrix B1 ∈ R (2hs-1)×(2ws-1)×(2ds-1) from B1 and perform the attention functionfor f times in parallel, and then concatenate the depth-aware multi-head self-attention (DA-MSA) outputs. The process of DATL with LayerNorm(LN), multi-head DA-SA1, and DA-SA2 at the current level is formulated as:\nX l = DA-MSA1(LN(DA-MSA2(LN(X l-1 )))) + X l-1 (11) Then, an MLP module that has two fully-connected layers with GELU non-linearity between them is used for further feature transformations:\nX l = MLP(LN( X l ))) + X l(12)\nCompared with global attention, local attention makes it possible for computation in high resolution. However, there is no connection across local windows with fixed partitions. Therefore, regular and shifted window partitions are used alternately to enable cross-window connections. So at the next level, the window partition configuration is shifted along the height, width, and depth axes by ( hs 2 , ws 2 , ds 2 ). Depth-aware self-attention will be computed in these shifted windows(DAS-MSA); the whole process of DASTL can be formulated as:\nX l+1 = DAS-MSA1(LN(DAS-MSA2(LN(X l )))) + X l (13) X l+1 = MLP(LN( X l+1 )) + X l+1(14)\nDAS-MSA1 and DAS-MSA2 correspond to multi-head Attention1 and Attention2 within a shifted window, respectively. Assuming the number of stages is n, there are n RDACT blocks in CostFormer." }, { "figure_ref": [ "fig_1" ], "heading": "Residual Regression Transformer", "publication_ref": [ "b35" ], "table_ref": [], "text": "After aggregation, the cost C ∈ R HXW XD will be used for depth regression. To further explore the spatial relation under some depth, a Transformer block is applied to C before softmax. Inspired by the RDACT, the whole process of Residual Regression Transformer(RRT) can be formulated as: where RT k is the k-th Regression Transformer layer with regular windows, RST k is the k-th Regression Transformer layer with shifted windows, RER is the re-embedding layer to recover the depth dimension from C L , and it can be a 2D convolution with D output channels.\nC k = RST k (RT k ( C k-1 )), k = 1, 2, ..., L(15)\nC out = RER( C L ) + C 0 (16)\nRRT also computes self-attention in a local window. Compared with RDACT, RRT focuses more on spatial relations. Compared with regular Swin [Liu et al., 2021] Transformer block, RRT treats the depth as a channel, the number of channels is actually 1 and this channel is squeezed before the Transformer. The embedding parameters are set to fit the cost aggregation of different iterations. If the embedding dimension number equals D, C out can be simply formulated as:\nC out = C L + C 0(17)\nAs a stage may iterate many times with different depth hypotheses, the number of RRT blocks should be set the same as the number of iterations. The whole RRT is shown in the yellow window in Figure 2." }, { "figure_ref": [], "heading": "Training", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Loss function", "publication_ref": [], "table_ref": [], "text": "Final loss combines with the losses of all iterations at all stages and the loss from the final refinement module:\nLoss = s k=1 n i=1 L k i + L ref(18)\nwhere L k i is the regression or unification loss of the i-th iteration at k-th stage. L ref is the regression or unification loss from refinement module. If refinement module does not exist, the L ref loss is set to zero." }, { "figure_ref": [], "heading": "Common training settings", "publication_ref": [ "b41", "b72" ], "table_ref": [], "text": "CostFormer is implemented by Pytorch [Paszke et al., 2019]. For RDACT, we set the depth number at stages 3, 2, 1 as 4, 2, 2; patch size at height, width and depth axes as 4, 4, 1; window size at height, width and depth axes as 7, 7, 2. If the backbone is set as PatchMatchNet, embedding dimension number at stages 3, 2, 1 are set as 8, 8, 4. For RRT, we set the depth number as 2 at all stages, patch size as 1 at all axes; window size as 8 at all axes. If the backbone is set as Patch-MatchNet, embedding dimension number at iteration 2, 2, 1 at stages 3, 2, 1 as 32, 64, 16, 16, 8. All models are trained on Nvidia GTX V100 GPUs. After depth estimation, we reconstruct point clouds similar to MVSNet [Yao et al., 2018]." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce multiple MVS datasets and evaluate our method on these datasets. The results will be further reported in detail." }, { "figure_ref": [], "heading": "DATASETS", "publication_ref": [ "b0", "b76", "b50", "b27", "b54" ], "table_ref": [], "text": "The datasets used in the evaluation are DTU [Aanaes et al., 2016], BlendedMVS [Yao et al., 2020], ETH3D [Schöps et al., 2017], Tanks & Temples [Knapitsch et al., 2017], and YFCC-100M [Thomee et al., 2016]. The DTU dataset is an indoor multi-view stereo dataset with 124 different scenes, there are 49 views under seven different lighting conditions in one scene. Tanks & Temples is collected in a more complex and realistic environment, and it's divided into the intermediate and advanced set. ETH3D benchmark consists of calibrated high-resolution images of scenes with strong viewpoint variations. It is divided into training and test datasets. While the training dataset contains 13 scenes, the test dataset contains 12 scenes. BlendedMVS dataset is a large-scale synthetic dataset, consisting of 113 indoor and outdoor scenes and split into 106 training scenes and 7 validation scenes." }, { "figure_ref": [ "fig_2" ], "heading": "Main Settings and Results on DTU", "publication_ref": [ "b0", "b21", "b17", "b56", "b19", "b48", "b25", "b72", "b74", "b37", "b1", "b79", "b21", "b3", "b77", "b64", "b44" ], "table_ref": [ "tab_2" ], "text": "For the evaluation on the DTU [Aanaes et al., 2016] evaluation set, we only use the DTU training set. During the training phase, we set the image resolution to 640 × 512. We compare our method to recent learning-based MVS methods, including CasMVSNet [Gu et al., 2020] and PatchMatchNet [Wang et al., 2021b] which are also set as backbones of Cost-Former. We follow the evaluation metrics provided by the DTU dataset. The quantitative results on the DTU evaluation set are summarized in Table 2, which indicates that the plugand-play CostFormer improves the cost aggregation. Partial visualization results of Table 2 are shown in Figure 3. Complexity Analysis: For the complexity analysis of Cost-Former, we plug it into PatchMatchNet [Wang et al., 2021b] and first compare the memory consumption and run-time with Methods Acc. (mm)\nComp. (mm) Overall (mm) Furu [Furukawa and Ponce, 2010] 0.613 0.941 0.777 Tola [Tola et al., 2012] 0.342 1.190 0.766 Gipuma [Galliani et al., 2015] 0.283 0.873 0.578 Colmap [Schönberger and Frahm, 2016] 0.400 0.644 0.532 SurfaceNet [Ji et al., 2017] 0.450 1.040 0.745 MVSNet [Yao et al., 2018] 0.396 0.527 0.462 R-MVSNet [Yao et al., 2019] 0.383 0.452 0.417 P-MVSNet [Luo et al., 2019] 0.406 0.434 0.420 Point-MVSNet [Chen et al., 2019] 0.342 0.411 0.376 Fast-MVSNet [Yu and Gao, 2020] 0.336 0.403 0.370 CasMVSNet [Gu et al., 2020] 0.325 0.385 0.355 UCS-Net [Cheng et al., 2020] 0.338 0.349 0.344 CVP-MVSNet [Yang et al., 2020b] 0.296 0.406 0.351 PVA-MVSNet [Yi et al., 2020] 0.379 0.336 0.357 PatchMatchNet [Wang et al., 2021b] 0.427 0.277 0.352 AA-RMVSNet [Wei et al., 2021] 0.376 0.339 0.357 UniMVSNet [Peng et al., 2022] 0 " }, { "figure_ref": [], "heading": "Main Settings and Results on Tanks & Temples", "publication_ref": [ "b27", "b0", "b76", "b44", "b27" ], "table_ref": [], "text": "For the evaluation on Tanks & Temples [Knapitsch et al., 2017], we use the DTU [Aanaes et al., 2016] dataset and the Blended MVS [Yao et al., 2020] dataset. We compare our method to those recent learning-based MVS methods, including PatchMatchNet [Wang et al., 2021b] and UniMVS-Net [Peng et al., 2022] which are also set as backbones of CostFormer. The quantitative results on the Tanks & Temples [Knapitsch et al., 2017] " }, { "figure_ref": [], "heading": "Main Settings and Results on ETH3D", "publication_ref": [ "b27", "b50", "b15", "b19", "b17", "b48", "b66", "b29" ], "table_ref": [], "text": "We use the PatchMatchNet [Wang et al., 2021b] as backbone and adopt the trained model used in the Tanks & Temples dataset [Knapitsch et al., 2017] to evaluate the ETH3D [Schöps et al., 2017] dataset. As shown in Table 5, our method outperforms others on both the training and particularly challenging test datasets(higher is better).\nMethods Training Testing F1 score ↑ Time(s) ↓ F1 score ↑ Time(s) ↓ MVE [Fuhrmann et al., 2014] 20.47 13278.69 30.37 10550.67 Gipuma [Galliani et al., 2015] 36.38 587.77 45.18 689.75 PMVS [Furukawa and Ponce, 2010] 46.06 836.66 44.16 957.08 COLMAP [Schönberger and Frahm, 2016] 67.66 2690.62 73.01 1658.33 PVSNet [Xu and Tao, 2020] 67.48 -72.08 829.5 IterMVS [Wang et al., 2021a] 66.36 -74.29 -PatchMatchNet [Wang et al., 2021b] 64.21 452.63 73.12 492.52 PatchMatch-RL [Lee et al., 2021] 67.78 -72.38 -CostFormer(Ours) 68.92(+4.71) 566.18 75.24(+2.12) 547.64\nTable 5: Quantitative results of different methods on ETH3D." }, { "figure_ref": [], "heading": "Main Settings and Results on BlendedMVS dataset", "publication_ref": [ "b75", "b72", "b7", "b80", "b21", "b39", "b9" ], "table_ref": [ "tab_6" ], "text": "We use the model used in ETH3D. On BlendedMVS [Yao et al., 2020] evaluation set, we set N = 5 and image resolution as 576 × 768. End point error (EPE), 1 pixel error (e1), and 3 pexels error (e3) are used as the evaluation metrics. Quantitative results(lower is better) of different methods are shown in Table 6.\nMethod EPE e1 (%) e3 (%) MVSNet [Yao et al., 2018] 1.49 21.98 8.32 MVSNet-s [Darmon et al., 2021] 1.35 25.91 8.55 CVP-MVSNet [Yang et al., 2020a] 1.90 19.73 10.24 VisMVSNet [Zhang et al., 2020] 1.47 18.47 7.59 CasMVSNet [Gu et al., 2020] 1.98 15.25 7.60 EPPMVSNet [Ma et al., 2021] 1.17 12.66 6.20 TransMVSNet [Ding et al., 2021] 0 " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b0", "b27", "b50", "b76" ], "table_ref": [], "text": "In this work, we explore whether cost Transformer can improve the cost aggregation and propose a novel CostFormer with the cascade RDACT and RRT modules. The experimental results on DTU [Aanaes et al., 2016] , Tanks & Temples [Knapitsch et al., 2017], ETH3D [Schöps et al., 2017], and\nBlendedMVS [Yao et al., 2020] show that our method is competitive, efficient, and plug-and-play. Cost Transformer can be your need for better cost aggregation in multi-view stereo." } ]
The core of Multi-view Stereo(MVS) is the matching process among reference and source pixels. Cost aggregation plays a significant role in this process, while previous methods focus on handling it via CNNs. This may inherit the natural limitation of CNNs that fail to discriminate repetitive or incorrect matches due to limited local receptive fields. To handle the issue, we aim to involve Transformer into cost aggregation. However, another problem may occur due to the quadratically growing computational complexity caused by Transformer, resulting in memory overflow and inference latency. In this paper, we overcome these limits with an efficient Transformer-based cost aggregation network, namely CostFormer. The Residual Depth-Aware Cost Transformer(RDACT) is proposed to aggregate long-range features on cost volume via self-attention mechanisms along the depth and spatial dimensions. Furthermore, Residual Regression Transformer(RRT) is proposed to enhance spatial attention. The proposed method is a universal plugin to improve learning-based MVS methods.
CostFormer:Cost Transformer for Cost Aggregation in Multi-view Stereo
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison with state-of-the-art MVS methods on DTU. Relationship between error, GPU memory and run-time with image size 1152×864.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Structure of CostFormer based on PatchMatchNet.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of different methods on the DTU evaluation set. The backbone of CostFormer is PatchMatchNet here.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of different methods on Tanks&Temples. The Recall reported by official benchmark is presented.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Quantitative results of different methods on the Tanks & Temples benchmark (higher is better). * is pretrained on DTU and fine-tuned on BlendedMVS. -is not pretrained on DTU and trained from scratch on BlendedMVS", "figure_data": "MethodsMeanFam.Intermediate Group (F-score ↑) Fra. Hor. Lig. M60Pan.Pla.Tra.MeanAdvanced Group (F-score ↑) Aud. Bal. Cou. Mus.Pal.Tem.MVSNet [Yao et al., 2018]43.4855.99 28.55 25.07 50.79 53.96 50.86 47.90 34.69-------CasMVSNet [Gu et al., 2020]56.8476.37 58.45 46.26 55.81 56.11 54.06 58.18 49.5131.1219.81 38.46 29.10 43.87 27.36 28.11UCS-Net [Cheng et al., 2020]54.8376.09 53.16 43.03 54.00 55.60 51.49 57.38 47.89-------CVP-MVSNet [Yang et al., 2020b]54.0376.50 47.74 36.34 55.12 57.28 54.28 57.43 47.54-------PVA-MVSNet [Yi et al., 2020]54.4669.36 46.80 46.01 55.74 57.23 54.75 56.70 49.06-------AA-RMVSNet [Wei et al., 2021]61.5177.77 59.53 51.53 64.02 64.05 59.47 60.85 54.9033.5320.96 40.15 32.05 46.01 29.28 32.71PatchmatchNet [Wang et al., 2021b]53.1566.99 52.64 43.24 54.87 52.87 49.54 54.21 50.8132.3123.69 37.73 30.04 41.80 28.31 32.29UniMVSNet [Peng et al., 2022]64.3681.20 66.34 53.11 63.46 66.09 64.84 62.23 57.5338.9628.33 44.36 39.74 52.89 33.80 34.63MVSTR [Zhu et al., 2021]56.9376.92 59.82 50.16 56.73 56.53 51.22 56.58 47.4832.8522.83 39.04 33.87 45.46 27.95 27.97TransMVS [Ding et al., 2022]63.5280.92 65.83 56.94 62.54 63.06 60.00 60.20 58.6737.0024.84 44.59 34.77 46.49 34.69 36.62MVSTER [Wang et al., 2022]---------37.5326.68 42.14 35.65 49.37 32.16 39.19CostFormer(PatchMatchNet)56.27(+3.12) 72.46 52.59 54.27 55.83 56.80 50.88 55.05 52.32 34.07(+1.76) 24.05 39.20 32.17 43.95 28.62 36.46CostFormer(PatchMatchNet*)57.10(+3.95) 74.22 56.27 54.41 56.65 54.46 51.45 57.65 51.70 34.31(+2.00) 26.77 39.13 31.58 44.55 28.79 35.03CostFormer(UniMVSNet -)64.40(+0.04) 81.45 66.22 53.88 62.94 66.12 65.35 61.31 57.90 39.55(+0.59) 28.61 45.63 40.21 52.81 34.40 35.62CostFormer(UniMVSNet*)64.51(+0.15) 81.31 65.51 55.57 63.46 66.24 65.39 61.27 57.30 39.43(+0.47) 29.18 45.21 39.88 53.38 34.07 34.87", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative results of different methods on DTU.this backbone. For a fair comparison, a fixed input size of 1152 × 864 is used to evaluate the computational cost on a single GPU of NVIDIA Telsa V100. Memory consumption and run-time of PatchMatchNet[Wang et al., 2021b] are 2323MB and 0.169s. They are only increased to 2693MB and 0.231s by the plug-in.Based on the reports of PatchMatchNet[Wang et al., 2021b], we then get the comparison results of other state-ofthe-art learning-based methods. Memory consumption and run-time are reduced by 61.9% and 54.8% compared to Cas-MVSNet[Gu et al., 2020], by 48.8% and 50.7% compared to UCSNet[Cheng et al., 2020] and by 63.5% and 77.3% compared toCVP-MVSNet [Yang et al., 2020b]. Combining the results(lower is better) are shown in Table3and Figure1, GPU memory and run-time of CostFormer are set as 100%.", "figure_data": "MethodGPU Memory (%)Run-time (%)Overall (mm)CasMVSNet [Gu et al., 2020]262.47%221.24%0.355UCSNet [Cheng et al., 2020]195.31%202.84%0.344CVP-MVSNet [Yang et al., 2020b]273.97%440.53%0.351Ours100.00%100.00%0.343", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with other SOTA learning-based MVS methods on DTU. Relationship between overall performance, GPU memory and run-time.Comparison with TransformersWe also compare Cost-Former with other Transformers[Zhu et al., 2021;Wang et al., 2022;Ding et al., 2021;Liao et al., 2022] which are used in MVS methods and not plug-and-play. For a fair comparison, only direct improvements(higer is better) and incremental cost of run time(low is better) from pure Transformers under similar depth hypotheses are summarized in Table4.", "figure_data": "MethodTrans Improvement (mm)Delta Time (s)Delta Time (%)MVSTR [Zhu et al., 2021]+0.0140+0.359s+78.21%TransMVS [Ding et al., 2021]+0.0160+0.367s+135.42%WT-MVSNet(CT) [Liao et al., 2022]+0.0130+0.265s-MVSTER(CNN Fusion) [Wang et al., 2022]+0.0040+0.016s+13.34%CostFormer(CNN Fusion)+0.0097+0.062s+36.69%", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative improvement of performance and incremental cost of run time of different Transformers on DTU evaluation set.", "figure_data": "", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "set are summarized in Table 1, which indicates the robustness of CostFormer. Partial visualization results of Table 1 are shown in Figure 4. We would like to clarify that UniMVSNet -in Table 1 only uses BlendedMVS for training which uses less data (no DTU) than the UniMVS-Net baseline.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Quantitative results of different methods on BlendedMVS", "figure_data": ".738.323.62", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Weitao Chen; Hongbin Xu; Zhipeng Zhou; Yang Liu; Baigui Sun; Wenxiong Kang; Xuansong Xie; Alibaba Group
[ { "authors": " Aanaes", "journal": "Int. J. Comput. Vis", "ref_id": "b0", "title": "Large-scale data for multiple-view stereopsis", "year": "2016" }, { "authors": "Chen ", "journal": "", "ref_id": "b1", "title": "", "year": "2019" }, { "authors": "Rui Chen; Songfang Han; Jing Xu; Hao Su", "journal": "IEEE", "ref_id": "b2", "title": "Point-based multi-view stereo network", "year": "2019" }, { "authors": " Cheng", "journal": "", "ref_id": "b3", "title": "", "year": "2020" }, { "authors": "Shuo Cheng; Zexiang Xu; Shilin Zhu; Zhuwen Li; Li Erran Li; Ravi Ramamoorthi; Hao Su", "journal": "IEEE", "ref_id": "b4", "title": "Deep stereo using adaptive thin volume representation with uncertainty awareness", "year": "2020" }, { "authors": " Cho", "journal": "", "ref_id": "b5", "title": "", "year": "2021" }, { "authors": "Seokju Cho; Sunghwan Hong; Sangryul Jeon; Yunsung Lee; Kwanghoon Sohn; Seungryong Kim", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Cats: Cost aggregation transformers for visual correspondence", "year": "2021" }, { "authors": " Darmon", "journal": "", "ref_id": "b7", "title": "", "year": "2021" }, { "authors": "Bénédicte Franc ¸ois Darmon; Jean-Clément Bascle; Pascal Devaux; Mathieu Monasse; Aubry", "journal": "", "ref_id": "b8", "title": "Deep multi-view stereo gone wild", "year": "2021" }, { "authors": " Ding", "journal": "", "ref_id": "b9", "title": "", "year": "2021" }, { "authors": "Yikang Ding; Wentao Yuan; Qingtian Zhu; Haotian Zhang; Xiangyue Liu; Yuanjiang Wang; Xiao Liu", "journal": "", "ref_id": "b10", "title": "Transmvsnet: Global context-aware multiview stereo network with transformers", "year": "2021" }, { "authors": " Ding", "journal": "", "ref_id": "b11", "title": "", "year": "2022" }, { "authors": "Yikang Ding; Wentao Yuan; Qingtian Zhu; Haotian Zhang; Xiangyue Liu; Yuanjiang Wang; Xiao Liu", "journal": "", "ref_id": "b12", "title": "Transmvsnet: Global context-aware multi-view stereo network with transformers", "year": "2022" }, { "authors": " Dosovitskiy", "journal": "", "ref_id": "b13", "title": "", "year": "2020" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b14", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": " Fuhrmann", "journal": "", "ref_id": "b15", "title": "", "year": "2014" }, { "authors": "Simon Fuhrmann; Fabian Langguth; Michael Goesele", "journal": "Eurographics Association", "ref_id": "b16", "title": "Mve -a multi-view reconstruction environment", "year": "2014" }, { "authors": "Ponce Furukawa", "journal": "", "ref_id": "b17", "title": "", "year": "2010" }, { "authors": "Yasutaka Furukawa; Jean Ponce", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b18", "title": "Accurate, dense, and robust multiview stereopsis", "year": "2010" }, { "authors": " Galliani", "journal": "", "ref_id": "b19", "title": "", "year": "2015" }, { "authors": "Silvano Galliani; Katrin Lasinger; Konrad Schindler", "journal": "IEEE Computer Society", "ref_id": "b20", "title": "Massively parallel multiview stereopsis by surface normal diffusion", "year": "2015" }, { "authors": " Gu", "journal": "", "ref_id": "b21", "title": "", "year": "2020" }, { "authors": "Xiaodong Gu; Zhiwen Fan; Siyu Zhu; Zuozhuo Dai; Feitong Tan; Ping Tan", "journal": "", "ref_id": "b22", "title": "Cascade cost volume for high-resolution multi-view stereo and stereo matching", "year": "2020" }, { "authors": " Hosni", "journal": "", "ref_id": "b23", "title": "", "year": "2012" }, { "authors": "Asmaa Hosni; Christoph Rhemann; Michael Bleyer; Carsten Rother; Margrit Gelautz", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b24", "title": "Fast cost-volume filtering for visual correspondence and beyond", "year": "2012" }, { "authors": "Ji ", "journal": "", "ref_id": "b25", "title": "", "year": "2017" }, { "authors": "Mengqi Ji; Juergen Gall; Haitian Zheng; Yebin Liu; Lu Fang", "journal": "IEEE Computer Society", "ref_id": "b26", "title": "Surfacenet: An end-to-end 3d neural network for multiview stereopsis", "year": "2017" }, { "authors": " Knapitsch", "journal": "", "ref_id": "b27", "title": "", "year": "2017" }, { "authors": "Arno Knapitsch; Jaesik Park; Qian-Yi Zhou; Vladlen Koltun", "journal": "ACM Trans. Graph", "ref_id": "b28", "title": "Tanks and temples: benchmarking large-scale scene reconstruction", "year": "2017" }, { "authors": " Lee", "journal": "", "ref_id": "b29", "title": "", "year": "2021" }, { "authors": "Jae Yong; Lee ; Joseph Degol; Chuhang Zou; Derek Hoiem", "journal": "", "ref_id": "b30", "title": "Patchmatch-rl: Deep mvs with pixelwise depth, normal, and visibility", "year": "2021-10" }, { "authors": " Li", "journal": "", "ref_id": "b31", "title": "", "year": "2021" }, { "authors": "Zhaoshuo Li; Xingtong Liu; Nathan Drenkow; Andy Ding; Russell H Francis X Creighton; Mathias Taylor; Unberath", "journal": "", "ref_id": "b32", "title": "Revisiting stereo depth estimation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": " Liao", "journal": "", "ref_id": "b33", "title": "", "year": "2022" }, { "authors": "Jinli Liao; Yikang Ding; Yoli Shavit; Dihe Huang; Shihao Ren; Jia Guo; Wensen Feng; Kai Zhang", "journal": "", "ref_id": "b34", "title": "Wt-mvsnet: Window-based transformers for multi-view stereo", "year": "2022" }, { "authors": " Liu", "journal": "", "ref_id": "b35", "title": "", "year": "2021" }, { "authors": "Ze Liu; Yutong Lin; Yue Cao; Han Hu; Yixuan Wei; Zheng Zhang; Stephen Lin; Baining Guo", "journal": "", "ref_id": "b36", "title": "Swin transformer: Hierarchical vision transformer using shifted windows", "year": "2021" }, { "authors": " Luo", "journal": "", "ref_id": "b37", "title": "", "year": "2019" }, { "authors": "Keyang Luo; Tao Guan; Lili Ju; Haipeng Huang; Yawei Luo", "journal": "IEEE", "ref_id": "b38", "title": "P-mvsnet: Learning patch-wise matching confidence aggregation for multi-view stereo", "year": "2019" }, { "authors": " Ma", "journal": "", "ref_id": "b39", "title": "", "year": "2021" }, { "authors": "Xinjun Ma; Yue Gong; Qirui Wang; Jingwei Huang; Lei Chen; Fan Yu", "journal": "", "ref_id": "b40", "title": "Epp-mvsnet: Epipolarassembling based depth prediction for multi-view stereo", "year": "2021" }, { "authors": " Paszke", "journal": "", "ref_id": "b41", "title": "", "year": "2019" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b42", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "" }, { "authors": "H Wallach; H Larochelle; A Beygelzimer; F ", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "", "year": "" }, { "authors": " Peng", "journal": "", "ref_id": "b44", "title": "", "year": "2022" }, { "authors": "Rui Peng; Rongjie Wang; Zhenyu Wang; Yawen Lai; Ronggang Wang", "journal": "", "ref_id": "b45", "title": "Rethinking depth estimation for multi-view stereo: A unified representation", "year": "2022" }, { "authors": "Szeliski Scharstein", "journal": "", "ref_id": "b46", "title": "", "year": "2002" }, { "authors": "Daniel Scharstein; Richard Szeliski", "journal": "International journal of computer vision", "ref_id": "b47", "title": "A taxonomy and evaluation of dense two-frame stereo correspondence algorithms", "year": "2002" }, { "authors": "Frahm Schönberger", "journal": "", "ref_id": "b48", "title": "", "year": "2016" }, { "authors": "Johannes L Schönberger; Jan-Michael Frahm", "journal": "IEEE Computer Society", "ref_id": "b49", "title": "Structure-from-motion revisited", "year": "2016" }, { "authors": " Schöps", "journal": "", "ref_id": "b50", "title": "", "year": "2017" }, { "authors": "Thomas Schöps; Johannes L Schönberger; Silvano Galliani; Torsten Sattler; Konrad Schindler; Marc Pollefeys; Andreas Geiger", "journal": "IEEE Computer Society", "ref_id": "b51", "title": "A multi-view stereo benchmark with high-resolution images and multi-camera videos", "year": "2017" }, { "authors": " Sun", "journal": "", "ref_id": "b52", "title": "", "year": "2021" }, { "authors": "Jiaming Sun; Zehong Shen; Yuang Wang; Hujun Bao; Xiaowei Zhou", "journal": "", "ref_id": "b53", "title": "Loftr: Detector-free local feature matching with transformers", "year": "2021" }, { "authors": " Thomee", "journal": "", "ref_id": "b54", "title": "", "year": "2016" }, { "authors": "Bart Thomee; David A Shamma; Gerald Friedland; Benjamin Elizalde; Karl Ni; Douglas Poland; Damian Borth; Li-Jia Li", "journal": "Commun. ACM", "ref_id": "b55", "title": "Yfcc100m: the new data in multimedia research", "year": "2016" }, { "authors": " Tola", "journal": "", "ref_id": "b56", "title": "", "year": "2012" }, { "authors": "Engin Tola; Christoph Strecha; Pascal Fua", "journal": "Mach. Vis. Appl", "ref_id": "b57", "title": "Efficient large-scale multi-view stereo for ultra highresolution image sets", "year": "2012" }, { "authors": " Vaswani", "journal": "", "ref_id": "b58", "title": "", "year": "2017" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b59", "title": "Attention is all you need", "year": "2017" }, { "authors": " Wang", "journal": "", "ref_id": "b60", "title": "Itermvs: Iterative probability estimation for efficient multi-view stereo", "year": "2021" }, { "authors": " Wang", "journal": "", "ref_id": "b61", "title": "Patchmatchnet: Learned multi-view patchmatch stereo", "year": "2021" }, { "authors": " Wang", "journal": "", "ref_id": "b62", "title": "", "year": "2022" }, { "authors": "Xiaofeng Wang; Zheng Zhu; Fangbo Qin; Yun Ye; Guan Huang; Xu Chi; Yijia He; Xingang Wang", "journal": "", "ref_id": "b63", "title": "Mvster: Epipolar transformer for efficient multiview stereo", "year": "2022" }, { "authors": " Wei", "journal": "", "ref_id": "b64", "title": "", "year": "2021" }, { "authors": "Zizhuang Wei; Qingtian Zhu; Chen Min; Yisong Chen; Guoping Wang", "journal": "", "ref_id": "b65", "title": "Aa-rmvsnet: Adaptive aggregation recurrent multi-view stereo network", "year": "2021" }, { "authors": "Tao Xu; Qingshan Xu; Wenbing Tao", "journal": "", "ref_id": "b66", "title": "Pvsnet: Pixelwise visibility-aware multi-view stereo network", "year": "2020" }, { "authors": "Yan ", "journal": "", "ref_id": "b67", "title": "", "year": "2020" }, { "authors": "Jianfeng Yan; Zizhuang Wei; Hongwei Yi; Mingyu Ding; Runze Zhang; Yisong Chen; Guoping Wang; Yu-Wing Tai", "journal": "Springer", "ref_id": "b68", "title": "Dense hybrid recurrent multiview stereo net with dynamic consistency checking", "year": "2020" }, { "authors": "Yang ", "journal": "", "ref_id": "b69", "title": "Cost volume pyramid based depth inference for multi-view stereo", "year": "2020" }, { "authors": "Yang ", "journal": "IEEE", "ref_id": "b70", "title": "Cost volume pyramid based depth inference for multi-view stereo", "year": "2020" }, { "authors": " Yao", "journal": "", "ref_id": "b71", "title": "", "year": "2018" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tian Fang; Long Quan", "journal": "", "ref_id": "b72", "title": "Mvsnet: Depth inference for unstructured multi-view stereo", "year": "2018" }, { "authors": " Yao", "journal": "", "ref_id": "b73", "title": "", "year": "2019" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Tianwei Shen; Tian Fang; Long Quan", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b74", "title": "Recurrent mvsnet for high-resolution multi-view stereo depth inference", "year": "2019" }, { "authors": " Yao", "journal": "", "ref_id": "b75", "title": "", "year": "2020" }, { "authors": "Yao Yao; Zixin Luo; Shiwei Li; Jingyang Zhang; Yufan Ren; Lei Zhou; Tian Fang; Long Quan", "journal": "CVPR", "ref_id": "b76", "title": "Blendedmvs: A large-scale dataset for generalized multiview stereo networks", "year": "2020" }, { "authors": " Yi", "journal": "", "ref_id": "b77", "title": "", "year": "2020" }, { "authors": "Hongwei Yi; Zizhuang Wei; Mingyu Ding; Runze Zhang; Yisong Chen; Guoping Wang; Yu-Wing Tai", "journal": "Springer", "ref_id": "b78", "title": "Pyramid multi-view stereo net with self-adaptive view aggregation", "year": "2020" }, { "authors": "Gao ; Yu; Zehao Yu; Shenghua Gao", "journal": "IEEE", "ref_id": "b79", "title": "Fastmvsnet: Sparse-to-dense multi-view stereo with learned propagation and gauss-newton refinement", "year": "2020" }, { "authors": " Zhang", "journal": "", "ref_id": "b80", "title": "", "year": "2020" }, { "authors": "Jingyang Zhang; Yao Yao; Shiwei Li; Zixin Luo; Tian Fang", "journal": "", "ref_id": "b81", "title": "Visibility-aware multi-view stereo network", "year": "2020" }, { "authors": " Zhu", "journal": "", "ref_id": "b82", "title": "", "year": "2021" }, { "authors": "Jie Zhu; Bo Peng; Wanqing Li; Haifeng Shen; Zhe Zhang; Jianjun Lei", "journal": "", "ref_id": "b83", "title": "Multi-view stereo with transformer", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 95.18, 448.68, 201.82, 13.04 ], "formula_id": "formula_0", "formula_text": "p i,j = K i • (R 0,i • (K -1 0 • p • d j ) + t 0,i )(1)" }, { "formula_coordinates": [ 3, 87.63, 591.2, 209.37, 22.31 ], "formula_id": "formula_1", "formula_text": "S i (p, j) g = G C < F 0 (p) g , F i (p i,j ) g >∈ R G (2)" }, { "formula_coordinates": [ 3, 325.97, 311.29, 232.03, 30.66 ], "formula_id": "formula_2", "formula_text": "C(p, j) = 1 Ke k=1 w k d k Ke k=1 w k d k C(p + p k + ∆p k , j) (3)" }, { "formula_coordinates": [ 4, 81.81, 423.07, 215.19, 9.65 ], "formula_id": "formula_3", "formula_text": "C k = DASTL k (DATL k (C k-1 )), k = 1, 2, ..., L(4)" }, { "formula_coordinates": [ 4, 130.05, 523.15, 163.08, 9.65 ], "formula_id": "formula_4", "formula_text": "C out = REC(C L ) + C 0 (5" }, { "formula_coordinates": [ 4, 293.13, 523.47, 3.87, 8.64 ], "formula_id": "formula_5", "formula_text": ")" }, { "formula_coordinates": [ 4, 143.61, 579.4, 145.65, 9.65 ], "formula_id": "formula_6", "formula_text": "C out = C L + C 0(" }, { "formula_coordinates": [ 5, 104.78, 330.53, 192.22, 9.65 ], "formula_id": "formula_7", "formula_text": "Q = XP Q , K = XP K , V = XP V(7)" }, { "formula_coordinates": [ 5, 60.14, 419.79, 236.86, 25.53 ], "formula_id": "formula_8", "formula_text": "DA-SA1 = Attention1(Q1, K1, V 1) = Sof tM ax( Q1K1 T √ G + B1)V 1 (8)" }, { "formula_coordinates": [ 5, 96.17, 497.22, 200.83, 11.03 ], "formula_id": "formula_9", "formula_text": "X l = DA-MSA1((LN(X l-1 )) + X l-1(9)" }, { "formula_coordinates": [ 5, 60.14, 566.02, 236.86, 25.53 ], "formula_id": "formula_10", "formula_text": "DA-SA2 = Attention2(Q2, K2, V 2) = Sof tM ax( Q2K2 T √ G + B2)V 2 (10)" }, { "formula_coordinates": [ 5, 220.54, 616.12, 76.46, 9.65 ], "formula_id": "formula_11", "formula_text": "[-d s + 1, d s -1]." }, { "formula_coordinates": [ 5, 379.06, 345.65, 178.94, 11.03 ], "formula_id": "formula_12", "formula_text": "X l = MLP(LN( X l ))) + X l(12)" }, { "formula_coordinates": [ 5, 322.39, 481.44, 235.61, 44.01 ], "formula_id": "formula_13", "formula_text": "X l+1 = DAS-MSA1(LN(DAS-MSA2(LN(X l )))) + X l (13) X l+1 = MLP(LN( X l+1 )) + X l+1(14)" }, { "formula_coordinates": [ 5, 355.99, 666.13, 202.01, 9.65 ], "formula_id": "formula_14", "formula_text": "C k = RST k (RT k ( C k-1 )), k = 1, 2, ..., L(15)" }, { "formula_coordinates": [ 5, 391.05, 695.2, 166.95, 9.65 ], "formula_id": "formula_15", "formula_text": "C out = RER( C L ) + C 0 (16)" }, { "formula_coordinates": [ 6, 143.61, 382.95, 153.39, 9.65 ], "formula_id": "formula_16", "formula_text": "C out = C L + C 0(17)" }, { "formula_coordinates": [ 6, 122.2, 519.89, 174.8, 30.55 ], "formula_id": "formula_17", "formula_text": "Loss = s k=1 n i=1 L k i + L ref(18)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6", "b9", "b7", "b21", "b0", "b12", "b2", "b10", "b15", "b10", "b17", "b0", "b20", "b1", "b19", "b4", "b5", "b18" ], "table_ref": [], "text": "Cardiac Single-Photon Emission Computed Tomography (SPECT) is the most widely performed non-invasive exam for clinical diagnosis of ischemic heart diseases [7,10]. Reducing the tracer dose can lower patient radiation exposure, but it will result in increased image noise [8,22]. Acquiring projections in fewer angles using fewer detectors allows for faster scanning and lower hardware costs, but it also leads to decreased reconstruction accuracy [1,13]. Additionally, in clinical practice, computed tomography (CT)-derived attenuation maps (µ-maps) are commonly used for SPECT attenuation correction (AC) [3,11]. However, most SPECT scanners are stand-alone without the assistance of CT [16]. The CT scan also causes additional radiation exposure and SPECT-CT misalignments [11,18].\nDeep learning-based methods have been extensively explored to address the aforementioned limitations individually. To reduce image noise in low-dose (LD) SPECT, convolutional neural networks (CNNs) were employed to process the LD projection, producing the full-dose (FD) projection for SPECT reconstruction [1,21]. Similarly, to perform limited-angle (LA) reconstruction, the LA projection was input to CNNs to predict the full-angle (FA) projection [2,20,23]. In addition, a dual-domain approach, known as Dual-domain Sinogram Synthesis (DuDoSS), utilized the image-domain output as a prior estimation for the projection domain to predict the FA projection [5]. For the CT-free AC, CNNs were used to generate pseudo attenuation maps (µ-maps) from SPECT emission images [6,19].\nAlthough various methods have been developed to address these limitations individually, it is of great interest to address all these limitations simultaneously to enable CT-free, low-dose, low-cost, and accelerated SPECT, which could potentially lead to better performance on those separated but correlated tasks. Thus, we propose a Cross-Domain Iterative Network (CDI-Net) for simultaneous denoising, LA reconstruction, and CT-free AC in cardiac SPECT. In CDI-Net, projection and image-domain networks are end-to-end connected to fuse the predicted emission and anatomical features across domains and iterations. Adaptive Weight Recalibrators (AWR) calibrate the fused features to improve the prediction accuracy. We tested CDI-Net using clinical data and compared it to existing methods. Ablation studies were conducted to verify the impact of cross-domain, cross-iteration fusions and AWR on enhancing network performance." }, { "figure_ref": [], "heading": "Materials and Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "The aim of this study is to generate the predicted FD&FA projection ( PF ) and µ-map (μ) with the LD&LA projection (P L ) as the inputs, formulated as:\n[ PF , μ] = H (P L ) ,(1)\nwhere H (•) is the pre-processing and neural network operator. The output labels are the ground-truth FD&FA projection (P F ) and CT-derived µ-map (µ). Then, PF and μ are utilized in an offline maximum-likelihood expectation maximization (ML-EM, 30 iterations) module to reconstruct the target FD&FA SPECT image with AC. Thus, predicting PF performs the denoising and LA reconstruction, while predicting μ enables the CT-free AC." }, { "figure_ref": [], "heading": "𝑃𝑃𝐿𝐿 𝐼𝐼𝐿𝐿", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "ML-EM Recon", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Img-Net1", "publication_ref": [], "table_ref": [], "text": "𝑨𝑨𝑨𝑨𝑨𝑨𝟏𝟏 𝑰𝑰 " }, { "figure_ref": [], "heading": "Proj-Net1", "publication_ref": [], "table_ref": [], "text": "� 𝑃𝑃𝐹𝐹 1 � 𝜇𝜇 1 C C Img-Net2 𝑨𝑨𝑨𝑨𝑨𝑨𝟐𝟐 𝑰𝑰 Proj-Net2 𝑨𝑨𝑨𝑨𝑨𝑨𝟐𝟐 𝑷𝑷 � 𝑃𝑃𝐹𝐹 2 C C � 𝜇𝜇 2" }, { "figure_ref": [], "heading": "Img-Netm", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Dataset and Pre-processing", "publication_ref": [ "b3", "b8" ], "table_ref": [ "tab_4" ], "text": "This work includes 474 anonymized clinical hybrid SPECT/CT myocardial perfusion imaging (MPI) studies. Each study was conducted following the injection of 99m Tc-tetrofosmin on a GE NM/CT 570c system. The GE 530c/570c system has 19 pinhole detectors placed in three columns on a cylindrical surface [4]. The clinical characteristics of the dataset are listed in supplementary Table S1.\nWe extracted the 9 angles in the central column to simulate the configurations of the latest cost-effective MyoSPECT ES system [9] as shown in supplementary Figure S1, generating the LA projection. The 10% LD projection was produced by randomly decimating the list-mode data using a 10% downsampling rate. P L was generated by combining the pipelines used for generating the LA and LD projections, and P F was the original FD&FA projection. The ground-truth CTderived µ-maps (µ) were well registered with the reconstructed SPECT images." }, { "figure_ref": [ "fig_0" ], "heading": "Cross-Domain Iterative Network", "publication_ref": [], "table_ref": [], "text": "The overview framework of CDI-Net is shown in Fig. 1. P L is first fed into an ML-EM reconstruction (30 iterations) module, producing the LD&LA reconstructed SPECT image I L , which is then employed for the µ-map generation.\nCross-Domain Residual Connection. The projection-(Proj-Net) and imagedomain networks (Img-Net) are both U-Net modules connected through crossdomain residual connections (CD-RC) facilitated by forward projection (FP) and back projection (BP) operators. In the 1 st iteration, P L is input to Proj-Net 1 to generate P 1 F as:\nP 1 F = P 1 (P L ) ,(2)\nwhere P 1 (•) is Proj-Net 1 operator. P 1 F is then processed by BP and introduced to Img-Net 1 through CD-RC, providing emission information for the µ-map generation. I L and the BP of P 1 F is first fed into the AWR I 1 (described in subsection 2.4) for multi-channel recalibration, and then input to Img-Net 1 to generate μ1 :\nμ1 = I 1 (A I 1 ( I L , T b ( P 1 F ) )),(3)\nwhere I 1 (•) is the Img-Net 1 operator. A I 1 (•) refers to the AWR I 1 (superscript I means image-domain). {•} is concatenation and T b (•) refers to BP. Next, the FP of μ1 is added to Proj-Net 2 of the next iteration by CD-RC, providing anatomical information for the projection prediction. This is the initial attempt of employing anatomical features for the estimation of FD&FA projection in cardiac SPECT. " }, { "figure_ref": [], "heading": "� 𝑭𝑭𝑨𝑨𝑨𝑨𝑨𝑨", "publication_ref": [], "table_ref": [], "text": "Element-wise addition.\n+ + Fig. 2. Adaptive weight recalibrator. The channel weights of the input F M ul is first recalibrated. A global residual connection is then added to retain the original features.\nCross-Iteration Dense Connection. In the m th (m ≥ 2) iteration, the predicted projections from previous iterations, P j F (j < m), are incorporated into Proj-Net m through cross-iteration dense connections (CI-DC). The FP of μj from Img-Net j (j < m) are also added to Proj-Net m through CD-RC as additional input anatomical information. The multi-channel input of Proj-Net m is:\nU m P = P L , P 1 F , P 2 F , • • • , P (m-1) F , T f (μ 1 ), T f (μ 2 ), • • • , T f (μ (m-1) ) ,(4)\nwhere T f (•) refers to the FP. Then, U m P is fed into AWR P m for recalibration and input to Proj-Net m to generate P m F , formulated as:\nP m F = P m (A P m (U m P )),(5)\nwhere P m (•) refers to Proj-Net m and A P m (•) is AWR P m . Similarly, the predicted µ-maps from previous iterations, μj (j < m), are integrated into Img-Net m by CI-DC. The BP of P j F (j ≤ m) are also added to Img-Net m by CD-RC as additional input emission information. The multi-channel input of Img-Net m is:\nU m I = I L , μ1 , μ2 , • • • , μ(m-1) , T b ( P 1 F ), T b ( P 2 F ), • • • , T b ( P (m-1) F ), T b ( P m F ) .(6)\nThen, U m I is recalibrated by AWR I m and input to Img-Net m to produce μm as:\nμm = I m (A I m (U m I )),(7)\nwhere\nI m (•) is Img-Net m and A I m (•) is the AWR I m operator.\nLoss function. The network outputs are P N F and μN , where N is the number of iterations (default: 5). The overall loss function L is formulated as:\nL = N i=1 (w P P i F -P F 1 + w µ μi -µ 1 ),(8)\nwhere w P and w µ are the weights of the projection-and image-domain losses. In our experiment, we empirically set w P = 0.5 and w µ = 0.5 for balanced training." }, { "figure_ref": [], "heading": "Adaptive Weight Recalibrator", "publication_ref": [], "table_ref": [], "text": "The diagram of AWR is shown in Fig. 2. As presented in Eq. 4 and 6, the multi-channel input consists of emission and anatomical features, formulated as:\nF M ul = [f 1 , f 2 , . . . , f C ],(9)\nwhere f i ∈ R H×W ×D indicates the emission or anatomical feature in each individual channel. F M ul is flattened using 3D average pooling, producing α 0 that embeds the channel weights. A recalibration vector α is generated using fullyconnected layers and a Sigmoid function. α is applied to F M ul , described as:\nFChl = [f 1 α1 , f 2 α2 , . . . , f C αC ],(10)\nwhere αi ∈ [0, 1] indicates the channel recalibration factor. Then, a global residual connection is applied to retain the original information, producing the output of AWR as FAW R = FChl + F M ul . Thus, AWR adaptively adjusts the weight of each input channel to better integrate the emission and anatomical information for higher prediction accuracy." }, { "figure_ref": [ "fig_5", "fig_5" ], "heading": "Implementation Details", "publication_ref": [ "b16", "b13", "b4", "b14", "b11", "b23" ], "table_ref": [], "text": "In this study, we tested CDI-Net against many existing methods in terms of the predicted FD&FA projections, µ-maps, and AC SPECT images. U-Net (labeled as Separate-UNet) [17], Attention U-Net (labeled as Separate-AttnUNet) [14], and DuDoSS (labeled as Separate-DuDoSS) [5] were applied to generate PF with P L as input. U-Net and Attention U-Net were also employed to predict μ with I L as input. We also tested ablation study groups w/o CI-DC, CD-RC, or AWR. Then, PF and μ were utilized to reconstruct the AC images. All networks were developed using PyTorch [15] with Adam optimizers [12]. The image-and projection-domain modules were trained with initial learning rates (LR) of 10 -3 and 10 -4 with a decay rate of 0.99/epoch [24]. The networks that predict µ-maps or projections separately were trained for 200 epochs, while CDI-Net was trained for 50 epochs. The performance of CDI-Net with different iterations (1 to 6, default 5) is presented in section 3 (Fig. 6), and the impact of multiple LD levels (1 to 80%, default 10%) is shown in section 3 (Fig. 6). " }, { "figure_ref": [], "heading": "Separate-UNet Separate-AttnUNet Separate-DuDoSS CDI-Net (w/o CI-DC) CDI-Net (w/o CD-RC) CDI-Net (w/o AWR) CDI-Net (proposed)", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Side-view Projections", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Central-angle Projections", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Bottom-angle Projections", "publication_ref": [], "table_ref": [], "text": "NMSE" }, { "figure_ref": [ "fig_3" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_2" ], "text": "Fig. 3 shows the predicted FD&FA projections in multiple views. We can observe that CDI-Net outputs more accurate projections than other groups. Conversely, Separate-UNet, Separate-AttnUNet, and Separate-DuDoSS display underestimations in cardiac regions. This indicates the advantages of fusing emission and anatomical features for simultaneous prediction as in the CDI-Net. Moreover, CDI-Net shows superior performance to the ablation study groups w/o CI-DC, CD-RC, or AWR, confirming the significance of CI-DC, CD-RC, and AWR in enhancing network performance. Table 1 lists the quantitative comparison of the predicted projections. CDI-Net produces more accurate quantitative results than groups conducting separate predictions and ablation study groups (p < 0.001).\nFig. 4 presents the predicted µ-maps. It can be observed that CDI-Net outputs more accurate µ-maps than other testing groups. The µ-maps predicted by Separate-UNet and Separate-AttnUNet display obvious inconsistency with the ground truth, particularly in the inner boundaries. This indicates that CDI-Net improves the accuracy of generating µ-maps by incorporating emission and anatomical information. Moreover, the µ-map predicted by CDI-Net is more accurate than ablation study groups w/o CI-DC, CD-RC, or AWR. Table 2 lists the quantitative evaluation of the predicted µ-maps. The µ-maps predicted by CDI-Net exhibit lower quantitative errors than other methods (p < 0.001)." }, { "figure_ref": [], "heading": "CT-derived μ-Map (Ground Truth)", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_5", "fig_5" ], "heading": "Separate-UNet Separate-AttnUNet CDI-Net (w/o CI-DC) CDI-Net (w/o CD-RC) CDI-Net (w/o AWR) CDI-Net (proposed)", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Difference\nThe predicted projections and µ-maps are then utilized in SPECT reconstruction. As shown in Fig. 5, CDI-Net produces the most accurate AC images. The groups conducting separate predictions or ablation study groups show overor under-estimations of the myocardial perfusion intensities compared to the ground truth. The quantitative evaluation listed in Table 3 shows that CDI-Net leads to higher reconstruction accuracy than other testing groups (p < 0.001). Segment-wise evaluation of the AC images is shown in supplementary Fig. S2. Moreover, we tested the performance of CDI-Net with different iterations as shown in Fig. 6 (left). The errors of the predicted projections and µ-maps by CDI-Net decrease as the number of iterations increases, with convergence occurring at 5 iterations. Additionally, we generated more datasets with multiple LD levels to test these methods as shown in Fig. 6 (mid, right). It can be observed that CDI-Net demonstrates consistently higher prediction accuracy of projections and µ-maps than other groups across multiple LD levels. " }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose CDI-Net that simultaneously achieves denoising, LA reconstruction, and CT-free AC for low-dose cardiac SPECT. The CD-RC and CI-DC components effectively fuse the predicted anatomical and emission features. The fused features are adaptively calibrated by AWR and then jointly employed for the prediction of projections and µ-maps. Thus, CDI-Net effectively combines the cross-domain information that is then used for image estimations in both domains. This approach also marks the initial investigation in employing anatomical features to assist the projection estimation of cardiac SPECT. Experiments using clinical data with different LD levels demonstrated the superiority of CDI-Net over existing methods in predicting projections and µ-maps, as well as in reconstructing AC SPECT images.\nFor potential clinical impact, CDI-Net enables accurate AC SPECT reconstruction in LD, LA, and CT-less scenarios. This could potentially promote the 1 Supplementary Information " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "clinical adoption of the latest cost-effective SPECT scanners with fewer detectors and lower dose levels and without CT. Thus, we can achieve accurate cardiac AC SPECT imaging with reduced hardware expenses and lower radiation exposure." } ]
Single-Photon Emission Computed Tomography (SPECT) is widely applied for the diagnosis of ischemic heart diseases. Low-dose (LD) SPECT aims to minimize radiation exposure but leads to increased image noise. Limited-angle (LA) SPECT enables faster scanning and reduced hardware costs but results in lower reconstruction accuracy. Additionally, computed tomography (CT)-derived attenuation maps (µmaps) are commonly used for SPECT attenuation correction (AC), but it will cause extra radiation exposure and SPECT-CT misalignments. In addition, the majority of SPECT scanners in the market are not hybrid SPECT/CT scanners. Although various deep learning methods have been introduced to separately address these limitations, the solution for simultaneously addressing these challenges still remains highly under-explored and challenging. To this end, we propose a Cross-domain Iterative Network (CDI-Net) for simultaneous denoising, LA reconstruction, and CT-free AC in cardiac SPECT. In CDI-Net, paired projectionand image-domain networks are end-to-end connected to fuse the emission and anatomical information across domains and iterations. Adaptive Weight Recalibrators (AWR) adjust the multi-channel input features to enhance prediction accuracy. Our experiments using clinical data showed that CDI-Net produced more accurate µ-maps, projections, and reconstructions compared to existing approaches that addressed each task separately. Ablation studies demonstrated the significance of cross-domain and cross-iteration connections, as well as AWR, in improving the reconstruction performance. The source code is released at https://**.com.
Cross-domain Iterative Network for Simultaneous Denoising, Limited-angle Reconstruction, and Attenuation Correction of Low-dose Cardiac SPECT
[ { "figure_caption": "FPCFig. 1 .1Fig. 1. Cross-Domain Iterative Networks. Projection-(Proj-Net) and image-domain networks (Img-Net) are end-to-end connected by cross-domain residual connections (CD-RC). Cross-iteration dense connections (CI-DC) enhance feature extraction across iterations. Adaptive Weight Recalibrators (AWR) adjust the multi-channel input.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": ". Fully-connected layers.Sigmoid function. × Channel-wise multiplication.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Predicted µ-maps. White arrows denote the prediction inconsistency.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Reconstructed SPECT images with attenuation correction (AC) using the predicted FD&FA projections and µ-maps. White arrows denote the inconsistency.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Evaluation of CDI-Net with different iterations (left). Evaluation of multiple methods based on datasets with multiple low-dose levels (mid, right).", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "1. 2 Fig. 1 .21Fig. 1. (SI) Configurations of GE NM/CT 530c/570c and the latest SPECT system with fewer detectors. GE NM/CT 570c scanner comprises of 19 pinhole detectors arranged in three columns on a cylindrical surface (left blue box) with 5, 9, 5 detectors placed on bottom, central, and top columns, respectively (right red box). The most recent few-angle scanner only employ the 9 detectors at the central column for minimizing hardware expenses.", "figure_data": "", "figure_id": "fig_6", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "1. 3 FullFig. 2 .32Fig. 2. (SI) The AC SPECT images were analyzed using standard 17-segment polar maps, with white arrows indicating prediction inconsistencies. CDI-Net produces the most accurate polar maps, while the groups conducting separate prediction and ablation study groups display over-or underestimation of the segment intensities.", "figure_data": "", "figure_id": "fig_7", "figure_label": "32", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "𝑨𝑨𝑨𝑨𝑨𝑨𝒎𝒎 𝑷𝑷Proj-Netm� 𝑃𝑃𝐹𝐹 𝑚𝑚𝑨𝑨𝑨𝑨𝑨𝑨𝒎𝒎 𝑰𝑰� 𝜇𝜇 𝑚𝑚", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Evaluation", "figure_data": "/SSIM0.5454/0.46463.42/0.92263.28/0.92803.21/0.92712.05/0.94041.81/0.94071.88/0.94151.41/0.950068< 0.001Separate-UNet [17]4.21 ± 1.48 16.69 ± 2.24 0.9276 ± 0.0195 30.58 ± 1.79< 0.001Separate-AttnUNet [14] 3.45 ± 1.13 15.45 ± 2.56 0.9368 ± 0.0205 31.43 ± 1.65< 0.001Separate-DuDoSS [5]3.19 ± 1.11 14.57 ± 2.29 0.9416 ± 0.0187 31.79 ± 1.65< 0.001CDI-Net (w/o CI-DC)2.56 ± 0.85 13.22 ± 1.81 0.9505 ± 0.0144 32.73 ± 1.65< 0.001CDI-Net (w/o CD-RC) 2.39 ± 0.78 13.39 ± 1.94 0.9486 ± 0.0160 33.02 ± 1.65< 0.001CDI-Net (w/o AWR)2.42 ± 0.83 13.40 ± 2.00 0.9478 ± 0.0173 32.98 ± 1.65< 0.001CDI-Net (proposed)2.15 ± 0.69 12.64 ± 1.77 0.9542 ± 0.0142 33.47 ± 1.68-† P-values of the paired t-tests of NMSE between the current method and CDI-Net (proposed).", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of the predicted µ-maps. The best results are marked in red. AttnUNet [14] 12.45 ± 4.49 22.20 ± 5.49 0.2829 ± 0.0582 17.34 ± 1.82 < 0.001 CDI-Net (w/o CI-DC) 11.88 ± 4.18 22.69 ± 5.37 0.2993 ± 0.0624 17.54 ± 1.81 < 0.001 CDI-Net (w/o CD-RC) 11.90 ± 4.69 21.95 ± 5.60 0.3041 ± 0.0660 17.56 ± 1.80 < 0.001 CDI-Net (w/o AWR) 11.84 ± 4.69 21.96 ± 5.48 0.3047 ± 0.0627 17.60 ± 1.89 < 0.001 CDI-Net (proposed) 11.42 ± 4.31 21.54 ± 5.30 0.3066 ± 0.0607 17.83 ± 1.85 - † P-values of the paired t-tests of NMSE between the current method and CDI-Net (proposed).", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Evaluation of the reconstructed AC SPECT. The best results are in red. values of the paired t-tests of NMSE between the current method and CDI-Net (proposed).", "figure_data": "MethodsNMSE(%) NMAE(%)SSIMPSNRP-values †Baseline LD&LA35.80 ± 10.83 54.36 ± 6.13 0.6646 ± 0.0344 24.00 ± 1.80< 0.001Separate-UNet [17]6.63 ± 2.26 23.78 ± 3.53 0.8576 ± 0.0248 31.33 ± 1.72< 0.001Separate-AttnUNet [14] 5.85 ± 1.76 22.46 ± 2.96 0.8655 ± 0.0239 31.56 ± 1.67< 0.001Separate-DuDoSS [5]5.68 ± 1.81 22.02 ± 3.11 0.8706 ± 0.0242 32.00 ± 1.70< 0.001CDI-Net (w/o CI-DC)5.45 ± 1.61 21.67 ± 2.92 0.8742 ± 0.0207 32.15 ± 1.69< 0.001CDI-Net (w/o CD-RC)5.55 ± 1.81 21.66 ± 3.13 0.8722 ± 0.0231 32.12 ± 1.69< 0.001CDI-Net (w/o AWR)5.49 ± 1.66 21.59 ± 2.92 0.8729 ± 0.0224 32.13 ± 1.70< 0.001CDI-Net (proposed)4.82 ± 1.44 20.28 ± 2.65 0.8829 ± 0.0194 32.69 ± 1.65-† P-", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "(SI) The gender, height, weight, and BMI distributions of the enrolled patients. † , 92 F ‡ ) Range 27 -86 1.32 -2.03 44.91 -127.00 18.10 -48.05 Mean ± Std. 65.0 ± 11.6 1.68 ± 0.11 85.67 ± 20.62 30.29 ± 6.57 Validation (52 M, 22 F) Range 41 -84 1.47 -1.85 54.34 -103.87 19.53 -38.11 Mean ± Std. 65.5 ± 10.1 1.70 ± 0.09 81.28 ± 12.14 28.33 ± 4.43 Testing (104 M, 96 F) Range 39 -87 1.47 -1.98 45.00 -140.00 18.26 -48.44 Mean ± Std. 64.2 ± 10.7 1.69 ± 0.11 86.41 ± 18.91 30.28 ± 5.81 † M stands for male.", "figure_data": "1.1 Clinical characteristics of the patients in the datasetDatasetsAge (year) Height (m) Weight (kg)BMITraining (108 M", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" } ]
Xiongchao Chen; Bo Zhou; Huidong Xie; Xueqi Guo; Qiong Liu; Albert J Sinusas; Chi Liu
[ { "authors": "Aghakhan Olia; N Kamali-Asl; A Hariri Tabrizi; S Geramifar; P Sheikhzadeh; P Farzanefar; S Arabi; H Zaidi; H ", "journal": "European journal of nuclear medicine and molecular imaging", "ref_id": "b0", "title": "Deep learning-based denoising of low-dose spect myocardial perfusion images: quantitative assessment and clinical performance", "year": "2022" }, { "authors": "M Amirrashedi; S Sarkar; H Ghadiri; P Ghafarian; H Zaidi; M R Ay", "journal": "IEEE", "ref_id": "b1", "title": "A deep neural network to recover missing data in small animal pet imaging: Comparison between sinogram-and image-domain implementations", "year": "2021" }, { "authors": "S Blankespoor; X Xu; K Kaiki; J Brown; H Tang; C Cann; B Hasegawa", "journal": "IEEE Transactions on Nuclear Science", "ref_id": "b2", "title": "Attenuation correction of spect using x-ray ct on an emission-transmission ct system: myocardial perfusion assessment", "year": "1996" }, { "authors": "C Chan; J Dey; Y Grobshtein; J Wu; Y H Liu; R Lampert; A J Sinusas; C Liu", "journal": "Medical physics", "ref_id": "b3", "title": "The impact of system matrix dimension on small fov spect reconstruction with truncated projections", "year": "2016" }, { "authors": "X Chen; B Zhou; H Xie; T Miao; H Liu; W Holler; M Lin; E J Miller; R E Carson; A J Sinusas", "journal": "Medical Physics", "ref_id": "b4", "title": "Dudoss: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac spect", "year": "2022" }, { "authors": "X Chen; B Zhou; H Xie; L Shi; H Liu; W Holler; M Lin; Y H Liu; E J Miller; A J Sinusas", "journal": "European Journal of Nuclear Medicine and Molecular Imaging", "ref_id": "b5", "title": "Direct and indirect strategies of deep-learning-based attenuation correction for general purpose and dedicated cardiac spect", "year": "2022" }, { "authors": "I Danad; P G Raijmakers; R S Driessen; J Leipsic; R Raju; C Naoum; J Knuuti; M Mäki; R S Underwood; J K Min", "journal": "JAMA cardiology", "ref_id": "b6", "title": "Comparison of coronary ct angiography, spect, pet, and hybrid imaging for diagnosis of ischemic heart disease determined by fractional flow reserve", "year": "2017" }, { "authors": "A J Einstein", "journal": "Journal of the American College of Cardiology", "ref_id": "b7", "title": "Effects of radiation exposure from cardiac imaging: how good are the data", "year": "2012" }, { "authors": " Ge-Healthcare", "journal": "", "ref_id": "b8", "title": "Ge myospect es: A perfect fit for today's practice of cardiology", "year": "2023" }, { "authors": "A Gimelli; G Rossi; P Landi; P Marzullo; G Iervasi; A L'abbate; D Rovai", "journal": "Journal of Nuclear Medicine", "ref_id": "b9", "title": "Stress/rest myocardial perfusion abnormalities by gated spect: still the best predictor of cardiac events in stable ischemic heart disease", "year": "2009" }, { "authors": "S Goetze; T L Brown; W C Lavely; Z Zhang; F M Bengel", "journal": "Journal of Nuclear Medicine", "ref_id": "b10", "title": "Attenuation correction in myocardial perfusion spect/ct: effects of misregistration and value of reregistration", "year": "2007" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "S Niu; Y Gao; Z Bian; J Huang; W Chen; G Yu; Z Liang; J Ma", "journal": "Physics in Medicine & Biology", "ref_id": "b12", "title": "Sparseview x-ray ct reconstruction via total generalized variation regularization", "year": "2014" }, { "authors": "O Oktay; J Schlemper; L L Folgoc; M Lee; M Heinrich; K Misawa; K Mori; S Mcdonagh; N Y Hammerla; B Kainz", "journal": "", "ref_id": "b13", "title": "Attention u-net: Learning where to look for the pancreas", "year": "2018" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b14", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "2019" }, { "authors": "M A Rahman; Y Zhu; E Clarkson; M A Kupinski; E C Frey; A K Jha", "journal": "Inverse problems", "ref_id": "b15", "title": "Fisher information analysis of list-mode spect emission data for joint estimation of activity and attenuation distribution", "year": "2020" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "Springer", "ref_id": "b16", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "L Saleki; P Ghafarian; A Bitarafan-Rajabi; N Yaghoobi; B Fallahi; M R Ay", "journal": "Iranian Journal of Nuclear Medicine", "ref_id": "b17", "title": "The influence of misregistration between ct and spect images on the accuracy of ct-based attenuation correction of cardiac spect/ct imaging: Phantom and clinical studies", "year": "2019" }, { "authors": "L Shi; J A Onofrey; H Liu; Y H Liu; C Liu", "journal": "European Journal of Nuclear Medicine and Molecular Imaging", "ref_id": "b18", "title": "Deep learning-based attenuation map generation for myocardial perfusion spect", "year": "2020" }, { "authors": "I Shiri; K Amirmozafari Sabet; H Arabi; M Pourkeshavarz; B Teimourian; M R Ay; H Zaidi", "journal": "Journal of Nuclear Cardiology", "ref_id": "b19", "title": "Standard spect myocardial perfusion estimation from half-time acquisitions using deep convolutional residual neural networks", "year": "2020" }, { "authors": "J Sun; H Jiang; Y Du; C Y Li; T H Wu; Y H Liu; B H Yang; G S Mok", "journal": "Journal of Nuclear Cardiology", "ref_id": "b20", "title": "Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion spect", "year": "2022" }, { "authors": "R G Wells", "journal": "", "ref_id": "b21", "title": "Dose reduction is good but it is image quality that matters", "year": "2020" }, { "authors": "W Whiteley; J Gregor", "journal": "Physics in Medicine & Biology", "ref_id": "b22", "title": "Cnn-based pet sinogram repair to mitigate defective block detectors", "year": "2019" }, { "authors": "K You; M Long; J Wang; M I Jordan", "journal": "", "ref_id": "b23", "title": "How does learning rate decay help modern neural networks?", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 269.19, 570.85, 211.4, 12.17 ], "formula_id": "formula_0", "formula_text": "[ PF , μ] = H (P L ) ,(1)" }, { "formula_coordinates": [ 3, 218.75, 134.96, 106.17, 46.16 ], "formula_id": "formula_1", "formula_text": "� 𝑃𝑃𝐹𝐹 1 � 𝜇𝜇 1 C C Img-Net2 𝑨𝑨𝑨𝑨𝑨𝑨𝟐𝟐 𝑰𝑰 Proj-Net2 𝑨𝑨𝑨𝑨𝑨𝑨𝟐𝟐 𝑷𝑷 � 𝑃𝑃𝐹𝐹 2 C C � 𝜇𝜇 2" }, { "formula_coordinates": [ 3, 277.96, 609.97, 202.63, 13.14 ], "formula_id": "formula_2", "formula_text": "P 1 F = P 1 (P L ) ,(2)" }, { "formula_coordinates": [ 4, 247.12, 140.39, 233.47, 13.14 ], "formula_id": "formula_3", "formula_text": "μ1 = I 1 (A I 1 ( I L , T b ( P 1 F ) )),(3)" }, { "formula_coordinates": [ 4, 163.73, 431, 316.86, 14.22 ], "formula_id": "formula_4", "formula_text": "U m P = P L , P 1 F , P 2 F , • • • , P (m-1) F , T f (μ 1 ), T f (μ 2 ), • • • , T f (μ (m-1) ) ,(4)" }, { "formula_coordinates": [ 4, 263.72, 490.15, 216.87, 13.14 ], "formula_id": "formula_5", "formula_text": "P m F = P m (A P m (U m P )),(5)" }, { "formula_coordinates": [ 4, 141.65, 571.54, 338.95, 14.22 ], "formula_id": "formula_6", "formula_text": "U m I = I L , μ1 , μ2 , • • • , μ(m-1) , T b ( P 1 F ), T b ( P 2 F ), • • • , T b ( P (m-1) F ), T b ( P m F ) .(6)" }, { "formula_coordinates": [ 4, 263.92, 620.49, 216.67, 12.69 ], "formula_id": "formula_7", "formula_text": "μm = I m (A I m (U m I )),(7)" }, { "formula_coordinates": [ 4, 163.57, 641.81, 233.16, 12.98 ], "formula_id": "formula_8", "formula_text": "I m (•) is Img-Net m and A I m (•) is the AWR I m operator." }, { "formula_coordinates": [ 5, 215.88, 153.1, 264.71, 30.32 ], "formula_id": "formula_9", "formula_text": "L = N i=1 (w P P i F -P F 1 + w µ μi -µ 1 ),(8)" }, { "formula_coordinates": [ 5, 257.21, 298.39, 223.39, 9.71 ], "formula_id": "formula_10", "formula_text": "F M ul = [f 1 , f 2 , . . . , f C ],(9)" }, { "formula_coordinates": [ 5, 243.16, 376.33, 237.44, 12.17 ], "formula_id": "formula_11", "formula_text": "FChl = [f 1 α1 , f 2 α2 , . . . , f C αC ],(10)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b5", "b8", "b14", "b6", "b9", "b15", "b29", "b0", "b23", "b13", "b19", "b22", "b24", "b21", "b25", "b1", "b3", "b7", "b26" ], "table_ref": [], "text": "Myocardial perfusion imaging (MPI) through Single-Photon Emission Computed Tomography (SPECT) is the most commonly employed exam for diagnosing cardiovascular diseases [6,9,15]. However, exposure to ionizing radiation from SPECT radioactive tracers presents potential risks to both patient and healthcare provider [7]. While reducing the injected dose can lower the radiation exposure, it will lead to increased image noise [10]. Additionally, acquiring projections in fewer angles using fewer detectors is a viable strategy for shortening scanning time and reducing hardware expenses. However, fewer-angle projections can lead to lower reconstruction accuracy and higher image noise [16,30].\nMany deep learning methods by Convolutional Neural Networks (CNNs) have been developed for denoising or few-angle reconstruction in nuclear medicine. Existing techniques for denoising in nuclear medicine were implemented either in the projection or image domain. Low-dose (LD) projection or image was input to CNN to predict the corresponding full-dose (FD) projection [1,24] or image [14,20,23,25]. Previous approaches for few-angle reconstruction in nuclear medicine were developed based on projection-, image-, or dual-domain frameworks. In the projection-or image-domain methods, the few-angle projection or image was input to CNN to generate the full-angle projection [22,26] or image [2], respectively. The dual-domain method, Dual-domain Sinogram Synthesis (DuDoSS), utilized the image-domain output as an initial estimation for the prediction of the full-angle projection in the projection domain [4].\nThe latest dedicated cardiac SPECT scanners tend to employ fewer detectors to minimize hardware costs [8,27]. Although deep learning-enabled denoising or few-angle reconstruction in nuclear medicine has been extensively studied in previous works, end-to-end joint denoising and few-angle reconstruction for the latest dedicated scanners still remains highly under-explored. Here, we present a dual-domain iterative network with learnable Adaptive Data Consistency (ADC) modules for joint denoising and few-angle reconstruction of cardiac SPECT. The image-domain network provides a prior estimate for the prediction in the projection domain. Paired primary and auxiliary modules are interconnected for progressive denoising and few-angle restoration. ADC modules are incorporated to enhance prediction accuracy by fusing the predicted projections from primary and auxiliary modules. We evaluated the proposed method using clinical data and compared it to existing projection-, image-, and dual-domain methods. In addition, we also conducted ablation studies to assess the impact of the imagedomain prior estimate and ADC modules on the network performance." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_8" ], "heading": "Dataset and Pre-processing", "publication_ref": [ "b26", "b2", "b7" ], "table_ref": [ "tab_4" ], "text": "A dataset consisting of 474 anonymized clinical hybrid SPECT-CT MPI studies was collected. Each study was conducted following the injection of 99m Tctetrofosmin on a GE NM/CT 570c [27]. The clinical characteristics of enrolled patients are listed in supplementary Table S1.\nThe GE 530c/570c scanners comprise of 19 pinhole detectors arranged in three columns on a cylindrical surface [3]. The few-angle projections were generated by selecting the 9 angles (9A) in the central column, simulating the configurations of the latest cost-effective MyoSPECT ES few-angle scanner [8] as shown in supplementary Fig. S1. The 10%-dose LD projections were produced by randomly decimating the list-mode data at a 10% downsampling rate. The simulated LD&9A projection is the input, and the original FD&19A projection is the label. We used 200, 74, and 200 cases for training, validation, and testing.\n� 𝑿𝑿𝑭𝑭𝑳𝑳&𝟏𝟏𝟗𝟗𝟗𝟗 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 1 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 1 ADC1 Ŝ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴 1 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 2 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 2 ADC2 Ŝ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴 2 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 𝑖𝑖 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 𝑖𝑖" }, { "figure_ref": [ "fig_0" ], "heading": "Dual-domain Iterative Network", "publication_ref": [ "b20" ], "table_ref": [], "text": "The dual-domain iterative network is shown in Fig. 1. The LD&9A projection P LD&9A is first input into a Maximum-Likelihood Expectation Maximization (MLEM, 30 iterations) module, reconstructing the LD&9A image I LD&9A .\nImage-Domain Prior Estimation. I LD&9A is then input to the image-domain network Img-Net, i.e. a CNN module, to produce the predicted image ÎF D&19A , supervised by the ground-truth FD&19A image I F D&19A . The image-domain loss L I is:\nL I = ÎF D&19A -I F D&19A 1 .(1)\nThen, ÎF D&19A is fed into a forward projection (FP) operator of GE 530c/570c, producing X F D&19A as the prior estimate of the ground-truth FD&19A projection P F D&19A . The image-domain prediction can be formulated as:\nX F D&19A = F(I(I LD&9A )),(2)\nwhere I(•) is the Img-Net operator and F(•) is the FP operator.\nProjection-Domain Iterative Prediction. The prior estimate X F D&19A is then channel-wise concatenated with P LD&9A to generate P comb , which serves as the input to the projection-domain networks, formulated as:\nP comb = X F D&19A , P LD&9A ,(3)\nwhere {•} refers to channel-wise concatenation of 3D projections.\nGiven the difficulty in performing joint denoising and few-angle restoration directly, we split the two tasks and assign them to two parallel Attention U-Net modules [21]: the auxiliary module for denoising (DN-Net) and the primary module for joint prediction (Joint-Net). In each iteration, DN-Net solely focuses on denoising and produces an auxiliary projection. Joint-Net performs both denoising and few-angle restoration, producing the primary projection. The auxiliary and primary projections are then fused in an ADC module (described in subsection 2.3), producing a fused projection of higher accuracy. In the 1 st iteration block, P comb is input to DN-Net 1 to produce the auxiliary projection P 1 F D&9A . It is also input to Joint-Net 1 to produce the primary projection P 1 F D&19A . Then, P 1 F D&9A and P 1 F D&19A are fused in the ADC 1 module, producing the fused projection Ŝ1 F D&19A , formulated as:\nŜ1 F D&19A = A 1 (D 1 ( P comb ), J 1 ( P comb )),(4)\nwhere A 1 (•) is the ADC 1 operator. D 1 (•) is the DN-Net 1 operator, and J 1 (•) is the Joint-Net 1 operator. In the i th (i ≥ 2) iteration, the output of the (i -1) th iteration, Ŝ(i-1) F D&19A , was added to the input of DN-Net i to assist the denoising of the auxiliary module. Ŝ(i-1) F D&19A is concatenated with the output of DN-Net (i-1) and then fed into DN-Net i to produce the auxiliary projection P i F D&9A in the i th iteration:\nP i F D&9A = D i ( Ŝ(i-1) F D&19A , P (i-1) F D&9A ),(5)\nwhere D i (•) is the DN-Net i operator. Then, the outputs of all the (i-1) previous iterations, Ŝm F D&19A (m < i), are densely connected with P comb as the input to Joint-Net i to produce the primary projection in the i th iteration:\nP i F D&19A = J i ( P comb , Ŝ1 F D&19A , Ŝ2 F D&19A , • • • , Ŝ(i-1) F D&19A ),(6)\nwhere J i (•) is the Joint-Net i operator. Then, the auxiliary and primary projections are fused in ADC i for recalibration, generating the fused Ŝi F D&19A as:\nŜi F D&19A = A i ( P i F D&9A , P i F D&19A ),(7)\nwhere A i (•) is the ADC i operator. The overall network output ŜN F D&19A is the output of the N th iteration, where N is the total number of iterations with a default value of 4. The projection-domain loss is formulated as:\nL P = N i=1 ( P i F D&9A -P F D&9A 1 + Ŝi F D&19A -P F D&19A1\n),\nwhere P F D&9A is the FD&9A projection. The total loss function L is the weighted summation of the image-domain loss L I and the projection-domain loss L P :\nL = w I L I + w P L P ,(9)\nwhere the weights w I and w P were empirically set as 0.5 in our experiment. " }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Adaptive Data Consistency", "publication_ref": [ "b4", "b18", "b28", "b3", "b11", "b10" ], "table_ref": [], "text": "The initial data consistency (DC) was used to fuse the predicted and groundtruth k-spaces, thereby ensuring the consistency of the MRI image in k-space [5,19,29]. It is also utilized in the DuDoSS for the few-angle reconstruction of cardiac SPECT imaging [4]. However, in our study, the ground-truth FD&9A projection P F D&9A , which is a pre-requisite for applying DC to PF D&19A , is not available as input. Thus, we generate P i F D&9A using DN-Net i as the intermediate auxiliary information to improve PF D&19A . The proposed ADC generates a voxel-wise adaptive projection mask for the fusion of P i F D&9A and P i F D&19A . As presented in Fig. 2, in the i th iteration, P i F D&9A and P i F D&19A are first concatenated and input to a densely-connected [12] CNN module for spatial feature extraction. Then, a voxel-wise adaptive projection mask γ is generated from the extracted features using a Sigmoid operator, which determines the voxel-wise weights (from 0 to 1) for the summation of P i F D&9A and P i F D&19A . The weighted projections of the central columns are generated as:\nP i F D&9A DC = P i F D&9A * ∆ * γ,(10)\nP i F D&19A DC = P i F D&19A * ∆ * (1 -γ), (11\n)\nwhere * is the voxel-wise multiplication, and ∆ refers to the binary mask of the few-angle projection (shown in Fig. 2). In addition, the outer columns of PF D&19A is computed as:\nP i F D&19A O = P i F D&19A * (1 -∆)\n. Then, the above three weighted projections are concatenated and input to a Channel-wise Weight Recalibration module, a squeeze-excitation [11] selfattention mechanism, to generate a channel recalibration vector r = [r 1 , r 2 , r 3 ]. The output of ADC is the weighted summation of the recalibrated projections as:\nŜi F D&19A = r 1 P i F D&9A DC + r 2 P i F D&19A DC + r 3 P i F D&19A O . (12\n)" }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Implementation Details", "publication_ref": [ "b20", "b16", "b25", "b3", "b17", "b12", "b27" ], "table_ref": [], "text": "We evaluated Joint-DuDo against various deep learning methods in this study. Projection-domain methods using U-Net (designated as UNet-Proj) [21] or Attention U-Net (designated as AttnUNet-Proj) [17], the image-domain method using Attention U-Net (designated as AttnUNet-Img) [26], and the dual-domain method DuDoSS [4] were tested. We also included ablation study groups without ADC (but with normal DC, designated as Joint-DuDo (w/o ADC)) or without the image-domain prior estimate (designated as Joint-DuDo (w/o Prior)). Networks were developed using PyTorch [18] and trained with Adam optimizers [13]. The initial learning rate was 10 -3 for image and projection modules and 10 -4 for ADC modules, with a decay rate of 0.99 per epoch to avoid overfitting [28]. Joint-DuDo and ablation groups were trained for 50 epochs and the other groups were trained for 200 epochs. The default number of iterations of Joint-DuDo was 4. Evaluations of Joint-DuDo using multiple iterations (1 to 6) are shown in section 3 (Fig. 5 left). Evaluations of more datasets with different LD levels (1 to 80%, default 10%) are shown in section 3 (Fig. 5 mid,right). " }, { "figure_ref": [ "fig_2", "fig_4", "fig_1", "fig_2", "fig_6", "fig_6" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_1", "tab_3" ], "text": "Fig. 3 presents the qualitative comparison of the predicted FD&19A projections in different projection views. It can be observed that Joint-DuDo generates more accurate predicted projections at all views compared to the projection-and dualdomain approaches. Joint-DuDo also demonstrates higher accuracy compared to the ablation study groups without the image-domain prior estimate or ADC modules, proving the roles of the prior estimate and ADC in enhancing network performance. Table 1 outlines the quantitative evaluations of the predicted projections. Joint-DuDo outperforms existing projection-and dual-domain approaches and the ablation study groups (p < 0.001). domain, projection-domain, and dual-domain approaches as well as the ablation groups. Segment-wise visualizations of the images in Fig. 4 are shown in supplementary Fig. S2 andS3. With or without AC, Joint-DuDo outperforms the other methods (p < 0.001) as indicated by the quantitative comparison in Table 2. As shown in Fig. 5 (left), the performance of Joint-DuDo improves as the number of iterations (N ) increases and reaches convergence at N = 4. In addition, we generated more datasets with different LD levels (1% to 80%) to test the network performance as shown in Fig. 5 (mid and right). It can be observed that our proposed Joint-DuDo demonstrates consistently higher prediction accuracy across various LD levels compared to other testing methods." }, { "figure_ref": [], "heading": "FD&19A (Ground Truth) LD&9A (Baseline) UNet-Proj AttnUNet-Proj AttnUNet-Img DuDoSS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we propose Joint-DuDo, a novel dual-domain iterative network with learnable ADC modules, for the joint denoising and few-angle reconstruction of low-dose cardiac SPECT. Joint-DuDo employs the output of the image domain as an initial estimate for the projection prediction in the projection domain. This initial estimate enables the input closer to the target, thus enhancing the overall prediction accuracy. The ADC modules produce adaptive projection masks to fuse the predicted auxiliary and primary projections for higher output accuracy. Experiments using clinical data showed that the proposed Joint-DuDo led to higher accuracy in the projections and reconstructions than existing projection-, image-, and dual-domain approaches.\nThe potential clinical significance of our work is that it shows the feasibility of simultaneously performing denoising and few-angle reconstruction in low-dose cardiac SPECT. Using the proposed method, we could potentially promote the clinical adoption and market coverage of the latest cost-effective fewer-angle SPECT scanners with reduced radiation dose.\n1 Supplementary Information " }, { "figure_ref": [], "heading": "Patient clinical characteristics in the dataset", "publication_ref": [], "table_ref": [], "text": "" } ]
Myocardial perfusion imaging (MPI) by single-photon emission computed tomography (SPECT) is widely applied for the diagnosis of cardiovascular diseases. Reducing the dose of the injected tracer is essential for lowering the patient's radiation exposure, but it will lead to increased image noise. Additionally, the latest dedicated cardiac SPECT scanners typically acquire projections in fewer angles using fewer detectors to reduce hardware expenses, potentially resulting in lower reconstruction accuracy. To overcome these challenges, we propose a dual-domain iterative network for end-to-end joint denoising and reconstruction from low-dose and few-angle projections of cardiac SPECT. The image-domain network provides a prior estimate for the projectiondomain networks. The projection-domain primary and auxiliary modules are interconnected for progressive denoising and few-angle reconstruction. Adaptive Data Consistency (ADC) modules improve prediction accuracy by efficiently fusing the outputs of the primary and auxiliary modules. Experiments using clinical MPI data show that our proposed method outperforms existing image-, projection-, and dual-domain techniques, producing more accurate projections and reconstructions. Ablation studies confirm the significance of the image-domain prior estimate and ADC modules in enhancing network performance. The source code is released at https://***.com.
Joint Denoising and Few-angle Reconstruction for Low-dose Cardiac SPECT Using a Dual-domain Iterative Network with Adaptive Data Consistency
[ { "figure_caption": "Fig. 1 .1Fig. 1. Dual-domain iterative network. The forward projection of the image-domain output (top row) serves as a prior estimate for the projection domain (bottom row). The primary (Joint-Net) and the auxiliary modules (DN-Net) are interconnected in every iteration for progressive denoising and few-angle restoration.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Adaptive data consistency (ADC) is composed of an Adaptive Mask Generation module (left) to fuse the auxiliary and the primary projections and a Channel-wise Weight Recalibration module (right) to adjust weights of the combined projections.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Predicted FD&19A projections in side, central-angle, and bottom-angle views with NMSE/SSIM annotated. White arrows denote prediction inconsistency.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Reconstructed or predicted FD&19A SPECT images w/ or w/o the CT-based attenuation correction (AC). White arrows denote the prediction inconsistency.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 44Fig. 4 shows the qualitative comparison of the reconstructed or predicted FD&19A images with or without the CT-based attenuation correction (AC). Joint-DuDo results in more accurate SPECT images compared to other image-", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Evaluation of Joint-DuDo using multiple iterations (left). Evaluation of various approaches based on datasets with different LD levels (mid, right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "1. 11Configurations of the few-angle dedicated scanner", "figure_data": "", "figure_id": "fig_7", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. (SI) Configurations of GE NM/CT 530c/570c and the latest few-angle scanners. GE 570c scanner comprises of 19 pinhole detectors arranged in three columns on a cylindrical surface (left blue box) with 5, 9, 5 detectors placed on bottom, central, and top columns, respectively (right red box). The most recent few-angle scanner only employ the 9 detectors at the central column for minimizing hardware expenses.", "figure_data": "", "figure_id": "fig_8", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "� 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 𝑁𝑁ADCi Ŝ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴 𝑖𝑖� 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 𝑁𝑁ADCNŜ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴Forward ProjForward projection operator of the dedicated scanner.C Channel-wise volume concatenation.ADCi Adaptive data consistency module in the i th iteration block.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Evaluation of the predicted FD&19A projections using normalized mean square error (NMSE), normalized mean absolute error (NMAE), structural similarity (SSIM), and peak signal-to-noise ratio (PSNR). The best results are in red. Prior) 2.43 ± 0.82 13.36 ± 1.90 0.9507 ± 0.0150 32.95 ± 1.68 < 0.001 Joint-DuDo (proposed) 2.16 ± 0.71 12.51 ± 1.73 0.9548 ± 0.0137 33.47 ± 1.67 - † P-values of the paired t-tests of NMSE between the current method and Joint-DuDo (proposed).", "figure_data": "MethodsNMSE(%) NMAE(%)SSIMPSNRP-values †Baseline LD-9A54.46 ± 2.46 62.44 ± 2.53 0.4912 ± 0.0260 19.23 ± 1.68< 0.001UNet-Proj [21]4.26 ± 1.58 16.65 ± 2.42 0.9247 ± 0.0248 30.54 ± 1.79< 0.001AttnUNet-Proj [17]3.43 ± 1.17 15.34 ± 2.40 0.9372 ± 0.0193 31.47 ± 1.65< 0.001DuDoSS [4]3.10 ± 0.78 14.54 ± 1.59 0.9429 ± 0.0153 31.82 ± 1.50< 0.001Joint-DuDo (w/o ADC) 2.42 ± 0.81 13.04 ± 1.89 0.9509 ± 0.0156 32.97 ± 1.60< 0.001Joint-DuDo (w/o", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Evaluation of the reconstructed or predicted FD&19A SPECT images w/ or w/o attenuation correction (AC). The best results are marked in red. ± 2.56 23.59 ± 3.75 31.77 ± 1.94 5.39 ± 2.02 20.85 ± 3.34 32.26 ± 1.75 AttnUNet-Proj [17] 6.14 ± 2.08 22.32 ± 3.29 32.33 ± 1.96 4.70 ± 1.65 19.61 ± 2.94 32.85 ± 1.76 AttnUNet-Img [26] 6.07 ± 1.44 21.80 ± 2.26 32.29 ± 1.87 4.66 ± 1.05 19.50 ± 1.95 32.78 ± 1.59 DuDoSS [4] 5.57 ± 1.77 21.70 ± 3.11 32.61 ± 1.93 4.44 ± 1.40 18.89 ± 2.84 33.09 ± 1.75 Joint-DuDo (w/o ADC) 5.32 ± 1.78 20.57 ± 3.12 32.95 ± 1.91 4.22 ± 1.44 18.42 ± 2.83 33.32 ± 1.73 Joint-DuDo (w/o Prior) 5.45 ± 1.83 20.75 ± 3.20 32.84 ± 1.93 4.34 ± 1.51 18.60 ± 2.85 33.20 ± 1.76 Joint-DuDo (proposed) 4.68 ± 1.46 19.27 ± 2.70 33.49 ± 1.87 3.72 ± 1.19 17.32 ± 2.48 33.85 ± 1.71", "figure_data": "MethodsReconstructed Images w/o AC NMSE(%) NMAE(%) PSNRReconstructed Images w/ AC NMSE(%) NMAE(%) PSNRBaseline LD-9A30.66 ± 11.27 47.76 ± 6.74 25.41 ± 2.12 22.57 ± 9.42 42.23 ± 7.02 26.17 ± 2.05UNet-Proj [21]7.00", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "(SI) The gender, height, weight, and BMI distributions of the enrolled patients in the dataset. M stands for male, and F stands for female. -2.03 44.91 -127.00 18.10 -48.05 Mean ± Std. 65.0 ± 11.6 1.68 ± 0.11 85.67 ± 20.62 30.29 ± 6.57 Validation (52 M, 22 F) Range 41 -84 1.47 -1.85 54.34 -103.87 19.53 -38.11 Mean ± Std. 65.5 ± 10.1 1.70 ± 0.09 81.28 ± 12.14 28.33 ± 4.43 Testing (104 M, 96 F) Range 39 -87 1.47 -1.98 45.00 -140.00 18.26 -48.44 Mean ± Std. 64.2 ± 10.7 1.69 ± 0.11 86.41 ± 18.91 30.28 ± 5.81", "figure_data": "DatasetsAge (year) Height (m) Weight (kg)BMITraining (108 M, 92 F)Range27 -861.32", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" } ]
Xiongchao Chen; Bo Zhou; Huidong Xie; Xueqi Guo; Qiong Liu; Albert J Sinusas; Chi Liu
[ { "authors": "Aghakhan Olia; N Kamali-Asl; A Hariri Tabrizi; S Geramifar; P Sheikhzadeh; P Farzanefar; S Arabi; H Zaidi; H ", "journal": "European journal of nuclear medicine and molecular imaging", "ref_id": "b0", "title": "Deep learning-based denoising of low-dose spect myocardial perfusion images: quantitative assessment and clinical performance", "year": "2022" }, { "authors": "M Amirrashedi; S Sarkar; H Ghadiri; P Ghafarian; H Zaidi; M R Ay", "journal": "IEEE", "ref_id": "b1", "title": "A deep neural network to recover missing data in small animal pet imaging: Comparison between sinogram-and image-domain implementations", "year": "2021" }, { "authors": "C Chan; J Dey; Y Grobshtein; J Wu; Y H Liu; R Lampert; A J Sinusas; C Liu", "journal": "Medical physics", "ref_id": "b2", "title": "The impact of system matrix dimension on small fov spect reconstruction with truncated projections", "year": "2016" }, { "authors": "X Chen; B Zhou; H Xie; T Miao; H Liu; W Holler; M Lin; E J Miller; R E Carson; A J Sinusas", "journal": "Medical Physics", "ref_id": "b3", "title": "Dudoss: Deep-learning-based dual-domain sinogram synthesis from sparsely sampled projections of cardiac spect", "year": "2022" }, { "authors": "J Chlemper; J Caballero; J Hajnal; A Price; D Rueckert", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b4", "title": "A deep cascade of convolutional neural networks for dynamic mr image reconstructio", "year": "2017" }, { "authors": "I Danad; P G Raijmakers; R S Driessen; J Leipsic; R Raju; C Naoum; J Knuuti; M Mäki; R S Underwood; J K Min", "journal": "JAMA cardiology", "ref_id": "b5", "title": "Comparison of coronary ct angiography, spect, pet, and hybrid imaging for diagnosis of ischemic heart disease determined by fractional flow reserve", "year": "2017" }, { "authors": "A J Einstein", "journal": "Journal of the American College of Cardiology", "ref_id": "b6", "title": "Effects of radiation exposure from cardiac imaging: how good are the data", "year": "2012" }, { "authors": " Ge-Healthcare", "journal": "", "ref_id": "b7", "title": "Ge myospect es: A perfect fit for today's practice of cardiology", "year": "2023" }, { "authors": "A Gimelli; G Rossi; P Landi; P Marzullo; G Iervasi; A L'abbate; D Rovai", "journal": "Journal of Nuclear Medicine", "ref_id": "b8", "title": "Stress/rest myocardial perfusion abnormalities by gated spect: still the best predictor of cardiac events in stable ischemic heart disease", "year": "2009" }, { "authors": "M J Henzlova; W L Duvall; A J Einstein; M I Travin; H J Verberne", "journal": "Journal of Nuclear Cardiology", "ref_id": "b9", "title": "Asnc imaging guidelines for spect nuclear cardiology procedures: Stress, protocols, and tracers", "year": "2016" }, { "authors": "J Hu; L Shen; G Sun", "journal": "", "ref_id": "b10", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "G Huang; Z Liu; L Van Der Maaten; K Q Weinberger", "journal": "", "ref_id": "b11", "title": "Densely connected convolutional networks", "year": "2017" }, { "authors": "D P Kingma; J Ba", "journal": "", "ref_id": "b12", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "H Liu; H Yousefi; N Mirian; M Lin; D Menard; M Gregory; M Aboian; A Boustani; M K Chen; L Saperstein", "journal": "IEEE Transactions on Radiation and Plasma Medical Sciences", "ref_id": "b13", "title": "Pet image denoising using a deeplearning method for extremely obese patients", "year": "2021" }, { "authors": "T Nishimura; K Nakajima; H Kusuoka; A Yamashina; S Nishimura", "journal": "European journal of nuclear medicine and molecular imaging", "ref_id": "b14", "title": "Prognostic study of risk stratification among japanese patients with ischemic heart disease using gated myocardial perfusion spect: J-access study", "year": "2008" }, { "authors": "S Niu; Y Gao; Z Bian; J Huang; W Chen; G Yu; Z Liang; J Ma", "journal": "Physics in Medicine & Biology", "ref_id": "b15", "title": "Sparseview x-ray ct reconstruction via total generalized variation regularization", "year": "2014" }, { "authors": "O Oktay; J Schlemper; L L Folgoc; M Lee; M Heinrich; K Misawa; K Mori; S Mcdonagh; N Y Hammerla; B Kainz", "journal": "", "ref_id": "b16", "title": "Attention u-net: Learning where to look for the pancreas", "year": "2018" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b17", "title": "Pytorch: An imperative style, highperformance deep learning library", "year": "2019" }, { "authors": "C Qin; J Schlemper; J Caballero; A N Price; J V Hajnal; D Rueckert", "journal": "IEEE transactions on medical imaging", "ref_id": "b18", "title": "Convolutional recurrent neural networks for dynamic mr image reconstruction", "year": "2018" }, { "authors": "A J Ramon; Y Yang; P H Pretorius; K L Johnson; M A King; M N Wernick", "journal": "IEEE transactions on medical imaging", "ref_id": "b19", "title": "Improving diagnostic accuracy in low-dose spect myocardial perfusion imaging with convolutional denoising networks", "year": "2020" }, { "authors": "O Ronneberger; P Fischer; T Brox", "journal": "Springer", "ref_id": "b20", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "I Shiri; K Amirmozafari Sabet; H Arabi; M Pourkeshavarz; B Teimourian; M R Ay; H Zaidi", "journal": "Journal of Nuclear Cardiology", "ref_id": "b21", "title": "Standard spect myocardial perfusion estimation from half-time acquisitions using deep convolutional residual neural networks", "year": "2020" }, { "authors": "J Sun; Y Du; C Li; T H Wu; B Yang; G S Mok", "journal": "Quantitative Imaging in Medicine and Surgery", "ref_id": "b22", "title": "Pix2pix generative adversarial network for low dose myocardial perfusion spect denoising", "year": "2022" }, { "authors": "J Sun; H Jiang; Y Du; C Y Li; T H Wu; Y H Liu; B H Yang; G S Mok", "journal": "Journal of Nuclear Cardiology", "ref_id": "b23", "title": "Deep learning-based denoising in projection-domain and reconstruction-domain for low-dose myocardial perfusion spect", "year": "2022" }, { "authors": "Y Wang; B Yu; L Wang; C Zu; D S Lalush; W Lin; X Wu; J Zhou; D Shen; L Zhou", "journal": "Neuroimage", "ref_id": "b24", "title": "3d conditional generative adversarial networks for high-quality pet image estimation at low dose", "year": "2018" }, { "authors": "W Whiteley; J Gregor", "journal": "Physics in Medicine & Biology", "ref_id": "b25", "title": "Cnn-based pet sinogram repair to mitigate defective block detectors", "year": "2019" }, { "authors": "J Wu; C Liu", "journal": "Physics in Medicine & Biology", "ref_id": "b26", "title": "Recent advances in cardiac spect instrumentation and imaging methods", "year": "2019" }, { "authors": "K You; M Long; J Wang; M I Jordan", "journal": "", "ref_id": "b27", "title": "How does learning rate decay help modern neural networks?", "year": "2019" }, { "authors": "B Zhou; S K Zhou", "journal": "", "ref_id": "b28", "title": "Dudornet: learning a dual-domain recurrent network for fast mri reconstruction with deep t1 prior", "year": "2020" }, { "authors": "Z Zhu; K Wahid; P Babyn; D Cooper; I Pratt; Y Carter", "journal": "Computational and mathematical methods in medicine", "ref_id": "b29", "title": "Improved compressed sensing-based algorithm for sparse-view ct image reconstruction", "year": "2013" } ]
[ { "formula_coordinates": [ 3, 183.76, 137.95, 246.23, 64.95 ], "formula_id": "formula_0", "formula_text": "� 𝑿𝑿𝑭𝑭𝑳𝑳&𝟏𝟏𝟗𝟗𝟗𝟗 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 1 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 1 ADC1 Ŝ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴 1 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 2 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 2 ADC2 Ŝ𝑆𝐹𝐹𝐹𝐹&19𝐴𝐴 2 � 𝑃𝑃𝐹𝐹𝐹𝐹&9𝐴𝐴 𝑖𝑖 � 𝑃𝑃𝐹𝐹𝐹𝐹&19𝐴𝐴 𝑖𝑖" }, { "formula_coordinates": [ 3, 241.46, 462.25, 239.13, 17.65 ], "formula_id": "formula_1", "formula_text": "L I = ÎF D&19A -I F D&19A 1 .(1)" }, { "formula_coordinates": [ 3, 247.35, 536.04, 233.25, 9.71 ], "formula_id": "formula_2", "formula_text": "X F D&19A = F(I(I LD&9A )),(2)" }, { "formula_coordinates": [ 3, 239.08, 630.86, 241.52, 9.71 ], "formula_id": "formula_3", "formula_text": "P comb = X F D&19A , P LD&9A ,(3)" }, { "formula_coordinates": [ 4, 224.88, 465.97, 255.71, 13.14 ], "formula_id": "formula_4", "formula_text": "Ŝ1 F D&19A = A 1 (D 1 ( P comb ), J 1 ( P comb )),(4)" }, { "formula_coordinates": [ 4, 228.07, 577.07, 252.52, 14.3 ], "formula_id": "formula_5", "formula_text": "P i F D&9A = D i ( Ŝ(i-1) F D&19A , P (i-1) F D&9A ),(5)" }, { "formula_coordinates": [ 4, 176.7, 652.11, 303.89, 13.68 ], "formula_id": "formula_6", "formula_text": "P i F D&19A = J i ( P comb , Ŝ1 F D&19A , Ŝ2 F D&19A , • • • , Ŝ(i-1) F D&19A ),(6)" }, { "formula_coordinates": [ 5, 232.04, 152.34, 248.55, 13.14 ], "formula_id": "formula_7", "formula_text": "Ŝi F D&19A = A i ( P i F D&9A , P i F D&19A ),(7)" }, { "formula_coordinates": [ 5, 172.54, 215.43, 263.13, 30.32 ], "formula_id": "formula_8", "formula_text": "L P = N i=1 ( P i F D&9A -P F D&9A 1 + Ŝi F D&19A -P F D&19A1" }, { "formula_coordinates": [ 5, 264.91, 287.83, 215.68, 9.71 ], "formula_id": "formula_10", "formula_text": "L = w I L I + w P L P ,(9)" }, { "formula_coordinates": [ 6, 243.89, 274.4, 236.7, 13.75 ], "formula_id": "formula_11", "formula_text": "P i F D&9A DC = P i F D&9A * ∆ * γ,(10)" }, { "formula_coordinates": [ 6, 227.47, 311.44, 248.7, 13.75 ], "formula_id": "formula_12", "formula_text": "P i F D&19A DC = P i F D&19A * ∆ * (1 -γ), (11" }, { "formula_coordinates": [ 6, 476.16, 313.9, 4.43, 8.8 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 6, 251.05, 355.18, 136.94, 14.11 ], "formula_id": "formula_14", "formula_text": "P i F D&19A O = P i F D&19A * (1 -∆)" }, { "formula_coordinates": [ 6, 183.8, 429.45, 292.36, 13.75 ], "formula_id": "formula_15", "formula_text": "Ŝi F D&19A = r 1 P i F D&9A DC + r 2 P i F D&19A DC + r 3 P i F D&19A O . (12" }, { "formula_coordinates": [ 6, 476.17, 431.91, 4.43, 8.8 ], "formula_id": "formula_16", "formula_text": ")" } ]
10.18653/v1/2021.acl-long.568
2023-05-17
[ { "figure_ref": [ "fig_1" ], "heading": "Introduction", "publication_ref": [ "b4", "b8", "b31", "b36", "b47", "b52", "b3", "b5", "b6", "b9", "b24", "b27", "b29", "b39", "b40", "b61", "b63", "b64", "b67", "b53", "b61", "b59", "b14", "b55", "b2", "b10", "b13", "b14", "b15", "b14", "b2", "b14", "b15", "b2", "b61", "b39" ], "table_ref": [], "text": "Pre-training then fine-tuning has become a prevalent training paradigm with the remarkable success of large-scale pre-trained models in Natural Language Processing (NLP) [5,9,32,37,48,53]. Recently, more researchers are striving to apply this paradigm to graph-based tasks with Graph Transformer Networks (GTNs) [4,6,7,10,25,28,30,40,41,62,64,65,68]. For instance, based on multi-layer Transformer encoders [54], Graphormer [62] first performs the well-designed unsupervised tasks on large-scale molecular datasets, and then fine-tunes the entire pre-trained parameters of the model on downstream molecular tasks of interest, which is also known as full fine-tuning. However, full fine-tuning poses several issues in practice: (i) Given that the labels of graph data from some domains (e.g., chemistry, biology) are inaccessible without the expertise and labor-heavy annotations [60], it is common that there are insufficient labeled samples in downstream tasks of interest. Hence, full fine-tuning would incur serious over-fitting and catastrophic forgetting issues [15,56]. (ii) When Figure 1: The comparison between PEFTs (Adapter, LoRA, BitFit, and G-Adapter) and full finetuning on large-and small-scale datasets. (a) Based on the pre-trained Graphormer, we first average the results of each PEFT on two large-scale datasets, and then compute the performance gap compared to full fine-tuning. (b) Similarly, we calculate the performance gap of each PEFT on seven small-scale datasets, based on another pre-trained model MAT. Refer to Sec. 3.1 for more descriptions.\nhandling multiple diverse downstream tasks, full fine-tuning has to duplicate a modified copy of all parameters per task, which hinders the flexibility and applicability of large-scale models, especially in scenarios with constrained storage resources (e.g., mobile detection devices).\nRecently, Parameter-Efficient Fine-Tuning (PEFT), as an alternative to full fine-tuning, has been proposed and widely investigated in NLP [3,11,[14][15][16]. PEFT aims to achieve competitive performance with full fine-tuning while consuming computation and storage resources as few as possible. Instead of updating the entire parameters during the fine-tuning phase, PEFT only updates a small fraction of parameters within the original model or additionally introduced modules, while freezing the remaining parameters. For example, Adapter [15] inserts two compact modules in each encoder of Transformer, while BitFit [3] only updates the bias terms in the model parameters, as shown in Fig. 3. Despite the remarkable achievements of traditional PEFTs in natural language understanding tasks, the question is still under-explored whether these PEFTs from the language domain are feasible for various GTNs under graph-based tasks, given that the intrinsic discrepancy between graph and text modalities (e.g., the graph has rich structure information). Therefore, in this paper, we shall fill this gap by answering the following questions: Can PEFTs from the language domain be transferred directly to graph-based tasks? If not, how to design a graph-specific PEFT method?\nTo start with, we comprehensively examine the performance of mainstream PEFTs (Adapter [15], LoRA [16], and BitFit [3]) on popular molecular graph datasets based on two pre-trained GTNs (Graphormer [62] and MAT [40]). The overall comparison is shown in Fig. 1, in which we unfortunately observe a significant gap between traditional PEFTs and full fine-tuning, especially on large-scale datasets. Further, our exploration reveals the feature distribution shift issue due to the absence of graph structure in the fine-tuning process (see Fig. 2 and Sec. 3.1 for more discussions).\nTo alleviate these concerns, we propose a novel structure-aware PEFT method, G-Adapter, which leverages graph convolution operation to introduce graph structure as the inductive bias to guide the updating process. Moreover, we apply the low-rank decomposition to the learnable weights, which makes G-Adapter highly lightweight. Besides, we propose Bregman proximal point optimization to further ease the feature distribution shift by preventing the model from aggressive update.\nTo verify the effectiveness of our approach, we conduct extensive experiments on a variety of graphbased downstream tasks based on pre-trained GTNs. The results demonstrate that our proposed G-Adapter can effectively address the feature distribution shift issue and significantly enhance the performance. Specifically, (i) G-Adapter obtains the state-of-the-art performance than baselines on both large-and small-scale datasets. Even compared to full fine-tuning, G-Adapter could achieve competitive (or superior) results with fewer trainable parameters. For example, full fine-tuning achieves 0.804 AUC with 100% trainable parameters on MolHIV, while G-Adapter gains 0.790 AUC with only 0.24% trainable parameters. (ii) G-Adapter enjoys remarkable advantages over full Figure 2: The illustration of feature distribution shift, where Full-FT denotes full fine-tuning. For the identical input, the feature distribution of traditional PEFTs (Adapter, LoRA, and BitFit) has a significant offset (dark region) compared to full fine-tuning. In contrast, our proposed G-Adapter has a highly similar behavior to full fine-tuning. Here, Jensen-Shannon divergence is utilized to measure the discrepancy between two distributions. Refer to Sec. 3.1 for more discussions.\nfine-tuning in terms of memory footprint. For instance, full fine-tuning stores 161MB checkpoint per task while G-Adapter merely requests 0.4MB2 checkpoint for each downstream task. Additionally, the introduced G-Adapter modules barely degrade the training efficiency and inference speed, and extensive ablation experiments also confirm the rationality of each component in our design.\nTo summarize, our contributions are as follows:\n• To the best of our knowledge, this is the first work formally to investigate the parameterefficient fine-tuning of graph-based tasks and models. And, we benchmark several widely used PEFTs from the language domain on a range of graph-based downstream tasks.\n• We exhibit the phenomenon of feature distribution shift when directly applying existing PEFTs to graph-based tasks. Further, our study empirically shows that the graph structure and Bregman proximal point optimization could alleviate this concern well.\n• We propose a structure-aware parameter-efficient method (G-Adapter) for adapting pretrained GTNs to various graph-based downstream tasks, in which G-Adapter introduces graph structure as an inductive bias to guide the updating process.\n• Extensive experiments demonstrate that G-Adapter outperforms the counterparts by a significant margin. Furthermore, compared to full fine-tuning, our method yields tremendous memory footprint benefits almost without sacrificing the efficiency of training and inference." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b53", "b4", "b8", "b31", "b36", "b47", "b52", "b41", "b59", "b19", "b48", "b66", "b4", "b45", "b46", "b20", "b66", "b8", "b19", "b48", "b6", "b41", "b59", "b3", "b5", "b9", "b24", "b27", "b29", "b40", "b61", "b63", "b67", "b61", "b39", "b29", "b67", "b26", "b41", "b10", "b13", "b43", "b62", "b14", "b49", "b37", "b11", "b25", "b42", "b51", "b55", "b56", "b0", "b15", "b65", "b23", "b2", "b30", "b32", "b35", "b54", "b57", "b10", "b7", "b16", "b22", "b38", "b68", "b13", "b10" ], "table_ref": [], "text": "Graph Transformer Networks. Transformer [54], as one of the most popular network architectures so far, has demonstrated remarkable success in NLP [5,9,32,37,48,53], which spurs extensive research on transferring Transformer to graph representation learning [42,60]. Considering the intrinsic discrepancy between graph and text modalities, current efforts mainly focus on two aspects: the design of pre-training tasks and the encoding of nodes and edges. For the first aspect, there are generally three folds: (i) Supervised learning: the preset supervised task is constructed by measuring the labels of graph data using professional tools [20,49,67]. (ii) Graph autoregressive modeling: similar to the GPT-style pre-training tasks [5,46,47] in NLP, some nodes and edges in the graph are randomly masked first, and then the masked elements are recovered in a step-by-step manner [21,67].\n(iii) Masked components modeling: this approach is analogous to the MLM task in BERT [9], where all masked elements in the graph are predicted simultaneously [20,49]. For the second aspect, each node (e.g., an atom in the molecular graph) is regarded as a \"token\" in text sequence, and then the hidden representation of the node is learned similar to Transformers in NLP [7]. Compared to the simple sequential relationship between tokens in text sequence, the relationship between edges in the graph could be more complex and essential [42,60]. Therefore, substantial works focus on modeling graph structures [4,6,10,25,28,30,41,62,64,68]. For example, Graphormer [62] leverages the centrality and spatial encoding as the graph structural signal, and MAT [40] augments the attention mechanism in Transformer using inter-atomic distances and the molecular graph structure. Kreuzer In the right part, we depict the architecture of each PEFT method. Here, each color represents an approach, where the grey blocks are frozen during the fine-tuning process.\nOur proposed G-Adapter is marked and demonstrated in purple, in which S indicates the introduced graph structure information (e.g., the graph adjacent matrix).\net al. [30] propose the learnable structural encoding via Laplacian spectrum, which can learn the position of each node in the graph. Moreover, Zhao et al. [68] proposes a proximity-enhanced multihead attention to capture the multi-hop graph structure, and Khoo et al. [27] design a structure-aware self-attention for modeling the tree-structured graphs. Additionally, Min et al. [42] systematically investigate the effectiveness and application of Transformers in the graph domain.\nParameter-Efficient Transfer Learning. Parameter-Efficient Fine-Tuning (PEFT) is receiving considerably growing attention in diverse domains [11,14,44,63]. Adapter [15], as the representative work of PEFT, is proposed to tackle natural language understanding tasks by inserting the compact blocks into Multi-Head Attention (MHA) and Feed-Forward Networks (FFN) in Transformer. Following this work, a series of subsequent efforts are proposed to improve the performance of Adapter. For instance, AdapterDrop [50] removes Adapter blocks from the lower layers, and Compacter/Com-pacter++ [38] introduce Kronecker product and weights sharing tricks to further reduce the proportion of trainable parameters. More similar works are included [12,26,43,52,56,57]. Based on the hypothesis of low intrinsic rank [1], LoRA [16] tunes two low-rank learnable matrices to approximate the updating of query and value weights in MHA. Moreover, Zhang et al. [66] enhance LoRA by adaptively allocating the trainable parameters budget at each layer, and FacT [24] extends LoRA by introducing a new tensorization-decomposition framework. Instead of introducing extra parameters, BitFit [3], a simple heuristic strategy, only fine-tunes the bias terms of the model. In addition, prompt-based tuning [31,33,36,55,58] is also an interesting direction, but we do not involve these methods here given that the training curse of prompt-based methods [11]. In addition, substantial works attempt to combine different PEFTs together through tailored mechanisms [8,17,23,39,69]. Finally, He et al. [14] provide a unified view of existing PEFTs, and more detailed descriptions of PEFTs are discussed in the survey literature [11].\n3 Methodology" }, { "figure_ref": [], "heading": "Pilot Experiments", "publication_ref": [ "b14", "b15", "b2", "b18", "b61", "b12", "b44", "b58", "b39", "b33", "b8", "b39", "b48", "b61", "b64", "b67", "b5", "b9", "b29" ], "table_ref": [], "text": "To answer the first question: Can PEFTs from the language domain be transferred directly to graphbased tasks? We evaluate the performance of three mainstream PEFTs (Adapter [15], LoRA [16],\nand BitFit [3]) on large-and small-scale graph-based downstream tasks, respectively. To be specific, on the large-scale datasets, i.e., MolHIV (41K) and MolPCBA (437K) [19], we first average the results of each PEFT based on the pre-trained Graphormer [62], and then subtract the average result of full fine-tuning. Here, we refer to the final result as Performance Gap, as shown in Fig. 1. Similar operations are also conducted on seven small-scale datasets (0.6 ∼ 2.4K), i.e., FreeSolv, ESOL, BBBP, Estrogen-α, Estrogen-β, MetStab low , and MetStab high [13,45,59], based on another pretrained model MAT [40]. From the comparison in Fig. 1, we can observe that the performance of traditional PEFTs is far from full fine-tuning on graph-based tasks, especially on large-scale datasets, across varying degrees of the ratio of trainable parameters.\nTo shed light on why there is such a significant gap between traditional PEFTs and full fine-tuning, we investigate the feature distribution of different methods inspired by Lian et al. [34]. Specifically, based on BBBP and pre-trained MAT, we first take the hidden representation of a virtual node (similar to the [CLS] token in NLP [9]) in the last layer as the entire graph representation. Then, for the identical input, the graph feature representations from diverse methods are visualized in Fig. 2. More results are shown in Appendix A.4. Given that full fine-tuning updates all parameters of the model, its performance can be seen as an \"upper bound\" for PEFT 3 . Therefore, a good PEFT is believed that it should have similar behavior with full fine-tuning, such as the encoding of features. However, from the comparison in Fig. 2, we can observe that the feature distributions encoded by traditional PEFTs are shifted compared to full fine-tuning, which here is called feature distribution shift.\nTo understand the reason underlying this phenomenon, we revisit the relationship between GTNs and vanilla Transformers. For the encoding of node/token, they have highly similar operations, e.g., encoding the representation of node/token through an embedding layer. However, there are significant discrepancies in terms of the encoding of position (or edge in the graph). Specifically, only the position embedding layer is utilized within vanilla Transformers in NLP, while most existing GTNs extract diverse graph structure information as the inductive bias and then inject them into the model [40,49,62,65,68], since the graph structure contains rich edge semantic information. In addition, recent researches also demonstrate the significant effectiveness of graph structure in learning graph representation [6,10,30]. Motivated by these observations, in this paper, we attempt to introduce graph structure as the inductive bias to alleviate the feature distribution shift issue." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Structure-Aware Parameter-Efficient Fine-Tuning", "publication_ref": [ "b14", "b15", "b37", "b28", "b34", "b1", "b28" ], "table_ref": [], "text": "For the parameter-efficient module, we believe that the following principles should be taken into consideration: (i) it can explicitly encode graph structure during the fine-tuning process; (ii) it should satisfy the main property of PEFT -lightweight [15,16,38]; (iii) it should be easy to implement and can be integrated into diverse GTNs.\nThe design of parameter-efficient module. Inspired by the design of Graph Convolutional Networks (GCN) [29,35], which can model both graph structure and node representation simultaneously, we leverage this operation to explicitly introduce graph structure into the model. Here, we give the following definition:\nX = GraphConv(S, X; W ) = σ(SXW )(1)\nwhere X, X ∈ R n×d (n: the sequence length, d: the hidden representation dimension) refer to the input, output of the module, respectively. S ∈ R n×n indicates the introduced graph structure information (e.g., the adjacency matrix of graph), W ∈ R d×d is the learnable weight, and σ(•) indicates the nonlinear activation function. Further, following the lightweight principle, we decompose W into two low-rank matrices to reduce the number of learnable parameters, i.e., W = W down W up , where W down ∈ R d×r , W up ∈ R r×d and r is called the bottleneck size. Moreover, to stabilize the training process of PEFT, we insert two LayerNorm layers [2] before and after GraphConv(•), respectively, as depicted in Fig. 3.\nOverall, the pipeline of our PEFT module is as follows: firstly, the input (X) goes through the first LayerNorm layer, then passes GraphConv(•) by absorbing the graph structure information (S). Next, we construct a skip connection between the output of GraphConv(•) and the normalized input (X ). Lastly, the final output (X ) is obtained through the second LayerNorm layer, i.e.:\nX = LN(X), X = LN X + σ(SX W down W up )(2)\nwhere LN(•) represents the LayerNorm layer. In Fig. 3, we describe in detail the architecture of our proposed PEFT (G-Adapter) and compare it with traditional PEFTs. In addition, thanks to the modular and lightweight properties, G-Adapter can be seamlessly integrated into diverse GTNs. We also provide some general pseudo-code to execute our approach in Appendix A.3.\nThe selection of graph structure information. To start with, we consider the adjacency matrix (with self-connections) of the graph: S 1 = A + I n , where A, I n ∈ R n×n refer to the adjacency matrix and identity matrix, respectively. Then, following Kipf and Welling [29], we introduce the degree matrix of nodes to normalize the adjacency matrix:\nS 2 = D-1 2 à D-1 2\n, where à = S 1 , D is the diagonal matrix and Dii = j Ãij . In addition, we propose a distance-based graph structure information: S 3 = [dis(v i , v j )] n×n , where dis(v i , v j ) refers to the distance of the shortest path (or the inter-atomic distance in the molecular graph) between two nodes v i and v j . Last, we combine the adjacency and distance to construct a hybrid structure information:\nS 4 = α • D-1 2 à D-1 2 + β • [dis(v i , v j )] n×n ,\nwhere α, β are scalar hyper-parameters to balance the impacts of the adjacency and distance terms. We evaluate the proposed graph structure information (S 1 , S 2 , S 3 , S 4 ) on a range of graph-based tasks, and the detailed comparisons are presented in Sec. 4.2." }, { "figure_ref": [], "heading": "Bregman Proximal Point Optimization", "publication_ref": [ "b21" ], "table_ref": [], "text": "Full-FT Original Therefore, to maintain consistency with the feature distribution of the original parameters, we propose Bregman proximal point optimization strategy [22] to prevent the model from aggressive update. Specifically, for the pre-trained model f (•; θ) with trainable parameters θ, at the (t + 1)-th iteration, we have\nθ t+1 = arg min θ (1 -µ) • L vanilla (θ) + µ • L bregman (θ, θ t )(3)\nwhere µ > 0 is a hyper-parameter, L vanilla is a common classification or regression loss function, and L bregman is the Bregman divergence defined as:\nL bregman (θ, θ t ) = E x∼D f (x; θ), f (x; θ t )(4)\nwhere the input x is derived from the training set D, and here we leverage the symmetric KLdivergence, i.e., (p, q) = KL(p||q) + KL(q||p). Intuitively, L bregman serves as a regularizer and prevents θ t+1 from deviating too much from the previous iteration θ t , therefore can effectively retain the capacity of encoding feature in the pre-trained model f (•; θ)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b12", "b18", "b44", "b58", "b39", "b61", "b61", "b39", "b17", "b50", "b14", "b15", "b2", "b25", "b37", "b13" ], "table_ref": [], "text": "Datasets & Evaluation Protocols. We conduct our experiments on nine benchmark datasets: MolHIV, MolPCBA, FreeSolv, ESOL, BBBP, Estrogen-α, Estrogen-β, MetStab low and MetStab high [13,19,45,59], where MolHIV (41K) and MolPCBA (437K) are two large-scale molecular property prediction datasets and the others are small-scale molecular datasets (0.6 ∼ 2.4K). We provide more descriptions of datasets in Appendix A.2. Following the previous settings [40,62], we employ the scaffold split on MolHIV, MolPCBA, BBBP, and Estrogen-α/β, and then the random split on the rest of datasets. For the evaluation protocols, MolPCBA is evaluated by Accuracy Precision (AP), FreeSolv and ESOL are evaluated by RMSE, and the others are evaluated by AUC. Pre-trained Models & Baselines. Two widely used pre-trained GTNs are leveraged as our backbones: Graphormer [62] and MAT [40]. In our experiments, we employ the base version of Graphormer, which has 12 layers Transformer encoders and is pre-trained on large-scale molecular dataset PCQM4M-LSC [18]. MAT is built on 8 encoders of Transformer, where the dimension of hidden representation is set to 1024. And, the node-level self-supervised learning serves as a pre-training task for MAT on ZINC15 [51]. For the baselines, we include full fine-tuning as a strong counterpart and six popular traditional PEFTs: Adapter [15], LoRA [16], BitFit [3], Hyperformer [26], Compacter [38], and MAM [14]. More descriptions per baseline are presented in Appendix A.1.\nImplementation. Before fine-tuning, we begin by reusing the official released pre-trained checkpoints 4 to initialize our backbones, while the introduced modules are randomly initialized, and then use AdamW optimizer to fine-tune the models. We set fair hyperparametric search budgets for various PEFTs, and the detailed configurations per method on diverse datasets are shown in Appendix A.3. We report the comparison results on Mol-HIV and MolPCBA based on the pre-trained Graphormer in Tab. 2, and more results are shown in Tab. 1 on small-scale datasets based on the pre-trained MAT. From these comparisons, we can draw the following observations:" }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "Observation I: Simple graph structure could deliver significant performance benefits. For instance, based on graph adjacent information, G-Adapters (S 1 , S 2 ) obtain better results compared to G-Adapters (S 3 , S 4 ) with distancebased structure information in Tab. 1. Moreover, in Tab. 2, G-Adapter (S 1 ) achieves the optimal performance among all PEFTs. We speculate that this may be because the graph adjacency matrix (S 1 ) is the complete information of graph structures, that is, the distance-based structure (S 3 ) can be derived from S 1 . Therefore, this suggests that our model is not only remarkably expressive but also insensitive to the graph structure, which means that we do not have to design tailored graph structures except for graph adjacency information. In the following statement, we take G-Adapter (S 1 ) as our baseline unless otherwise specified.\nObservation II: Our proposed G-Adapter consistently outperforms traditional PEFTs and offers a better trade-off between the ratio of trainable parameters (γ) and performance. For instance, Adapter, LoRA, and BitFit lag far behind G-Adapter on MolHIV and MolPCBA in Tab. 2. Although BitFit updates the fewest number of parameters (γ = 0.16%), it also yields the worst performances (0.709 AUC, 0.184 AP). In comparison, our proposed G-Adapter achieves 79.0 AUC, 0.269 AP with γ = 0.24%, γ = 1.89%, respectively, which is also the optimal solution compared to other PEFTs. Observation III: Compared to full fine-tuning, G-Adapter could achieve competitive and (most) superior performances on large-and small-scale datasets, respectively. For example, G-Adapter outperforms full fine-tuning by a significant margin (1.3% AUC) on Estrogen-β. We believe that the improvement on small-scale datasets is understandable, there are several reasons: (i) With decreasing the scale of the training set, it might meet serious over-fitting and catastrophic forgetting issues if the entire parameters are updated in full fine-tuning, whereas our method eases these concerns by only tuning G-Adapter blocks while freezing the original parameters. (ii) G-Adapter restricts the drastic updating of parameters via Bregman proximal point optimization strategy, which acts as a regularizer during the training process and therefore boosts the generalization capacity." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "Efficiency of Training, Inference and Memory Footprint", "publication_ref": [], "table_ref": [], "text": "In this subsection, we mainly investigate the following questions: Does G-Adapter seriously affect the training (or convergence) efficiency and inference speed compared to full fine-tuning? And, can G-Adapter bring significant benefits to the storage of model weights, as we claimed before? Specifically, for the training efficiency, we evaluate PEFTs and full fine-tuning on three datasets with different scales 5 : MolPCBA (437K), MolHIV (41K), and Estrogen-α (2K). The experimental results are shown in Fig. 5, in which we observe that: (i) For the large-scale dataset, the convergence of PEFTs lags behind full fine-tuning by about 4 ∼ 5 epochs. However, this gap is significantly narrowed as the amount of training data decreases. (ii) Compared to traditional PEFTs, G-Adapter not only achieves faster convergence but also higher performance over datasets of varying scales.\nFor the inference efficiency and memory footprint, we adopt different bottleneck sizes (r = 32, 16, 8, 4) on MolPCBA, MolHIV, FreeSolv, and Estrogen-α, respectively. The experimental results are shown in Tab. 3, where we observe that: (i) Compared to full fine-tuning, the extra introduced modules result in a trivial inference delay, which is almost negligible with the bottleneck size decreasing. Note that BitFit does not introduce additional modules but merely tunes the bias terms, therefore it theoretically has the same inference efficiency as full fine-tuning. (ii) For the storage requirements, the distinction between PEFTs and full fine-tuning is remarkably significant. For an example on Estrogen-α (r = 4), full fine-tuning requires storing a complete checkpoint (161MB)\nfor each downstream task, while Adapter (LoRA), BitFit and G-Adapter only need to store 0.7MB, 0.2MB and 0.4MB checkpoint per task, respectively, which greatly reduces the storage requirements." }, { "figure_ref": [ "fig_1" ], "heading": "The Impact of Insertion Position and Components", "publication_ref": [ "b13" ], "table_ref": [], "text": "We discuss the impact of potential designs on performance from two perspectives: (i) The insertion position. First, we insert G-Adapter block into the front and back of MHA, denoted as pre_mha and post_mha, respectively. Similarly, G-Adapter block is plugged before and after FFN, denoted as pre_ffn and post_ffn, respectively. Note that our baseline can be regarded as inserting the G-Adapter block in the middle of FFN. Finally, like the insertion position of Adapter in Fig. 3, we insert two G-Adapter blocks into MHA and FFN, denoted as mha + ffn. (ii) Importance of each component. First, we remove the adjacency matrix (denoted as w/o. S) to explore the importance of graph structure. Then, we separately remove the first, second, and both LayerNorm layers to explore individual effects denoted as w/o. pre_ln, w/o. post_ln and w/o. ln. We also explore the role of nonlinear activation function by removing it, denoted as w/o. act_fn. In addition, the effect of Bregman proximal point optimization is evaluated by only using the vanilla loss function, denoted as w/o. breg. The experimental results are shown in Tab. 4, in which we can obtain that: (i) Plugging G-Adapter block into the front, middle, or back of FFN could yield better performance than MHA, and more blocks seem not to give better results, which is also consistent with the previous conclusion in NLP [14]. An intuitive explanation is that, in each encoder of Transformer, FFN concentrates most of the parameters (∼ 67%), while MHA only accounts for ∼ 33%. Therefore, tweaking the weights of FFN may be a more efficient way for fine-tuning. (ii) Removing any of LayerNorm layers or the nonlinear activation function will hurt the performance. Moreover, removing the graph structure or Bregman proximal point optimization strategy would also significantly degrade the performance." }, { "figure_ref": [], "heading": "Can Graph Structure Information", "publication_ref": [], "table_ref": [], "text": "Benefit Traditional PEFTs? One of the major contributions of G-Adapter is the introduction of graph structure, therefore a natural question is: can the graph structure enhance the traditional PEFTs as well? Given that the adjacency matrix (S) has performed well as graph structure information in previous experiments, we directly introduce S into Adapter and LoRA. Their modified updating formulas are presented in Appendix A.1. We conduct the experiments on four datasets: MolHIV, MolPCBA, BBBP, and MetStab low . The results are reported in Tab. 5, in which we could observe a slight improvement in the modified methods compared to the original Adapter and LoRA. However, there is still a significant gap with our proposed G-Adapter, which further justifies that the traditional PEFT architectures are not suitable for handling graph-based tasks." }, { "figure_ref": [], "heading": "Conclusion & Limitations", "publication_ref": [ "b28", "b60", "b25", "b12", "b37", "b13" ], "table_ref": [], "text": "In this paper, we propose a novel structure-aware PEFT method, G-Adapter, for graph-based tasks based on pre-trained GTNs. Unlike the traditional PEFTs, which lead to the issue of feature distribution shift, G-Adapter leverages the graph structure and Bregman proximal point optimization strategy to mitigate this concern. Extensive experiments on a variety of graph-based downstream tasks demonstrate the effectiveness of our proposed method. Although our approach demonstrates satisfactory performance, there are still some limitations: (i) Considering that the applicable scenarios of PEFT are large-scale models, our method is not tested on conventional graph network architectures (e.g., GCN [29], GIN [61]). Because these models are already quite lightweight, resulting in the advantages of PEFT not being sufficiently exploited. (ii) Limited by computational resources, we only evaluate two pre-trained GTNs (Graphormer and MAT). Nevertheless, thanks to the simplicity and generality of our proposed method, it can be applied to various graph Transformer-based models.\nHyperformer This method can be regarded as a variant of Adapter via using shared hypernetworks in multi-task scenarios. Specifically, Hyperformer [26] leverages the task conditioned hypernetworks to obtain the parameters of Adapter modules, i.e.: X = X + LN σ(XW down )W up (13) where W down = W D I τ , W up = W U I τ , W D ∈ R (d×r)×t , W U ∈ R (r×d)×t , and I τ ∈ R t is task embedding for each individual task (τ ). Similar operations are also conducted on the parameters of LayerNorm layer LN(•).\nCompacter Mahabadi et al. [38] introduce the Kronecker product and weights sharing tricks to reduce the ratio of trainable parameters in Adapter. Specifically, for the learnable weight W ∈ R d×r , we can decompose W into multiple \"small\" matrices via Kronecker product (⊗):\nW = n i=1 A i ⊗ B i(14)\nwhere\nA i ∈ R n×n , B i ∈ R d n × r\nn . This decomposition method can be applied to the trainable weights W down and W up in Adapter. Besides, Compacter also shares the A i across all Adapter blocks. MAM He et al. [14] investigate the traditional PEFTs from three perspectives: updated functional form, insertion form, and modified representation, and then offer a unified view to understand existing PEFTs: X = X + ∆X, where ∆X is learned by PEFT modules. Furthermore, based on their findings (e.g., FFN can better utilize modification than MHA at larger capacities), they propose a new PEFT method (MAM) by combining the most optimal choices." }, { "figure_ref": [], "heading": "A.2 More Descriptions about Datasets", "publication_ref": [ "b12", "b18", "b44", "b58", "b39", "b61" ], "table_ref": [], "text": "We evaluate our proposed method and other baselines on nine benchmark datasets: MolHIV, MolPCBA, FreeSolv, ESOL, BBBP, Estrogen-α, Estrogen-β, MetStab low , and MetStab high [13,19,45,59]. Following the previous settings [40,62], we split the dataset into the training set, the validation set, and the test set in the ratio of 8 : 1 : 1, and the statistical information is shown in Tab. 6. Specifically, for each of dataset, MolHIV and MolPCBA are two molecular property prediction datasets, which are derived from the popular graph benchmark OGB. The target of this task is to predict the binary labels for each molecule, which indicates whether it has a particular property or not. FreeSolv and ESOL are regression tasks for predicting water solubility in terms of hydration free energy and log solubility. BBBP is a binary classification task for predicting the ability of a molecule to penetrate the blood-brain barrier. The aim of Estrogen-α and Estrogen-β is to predict whether a compound is active towards a given target based on experimental data from the ChEMBL database. Last, MetStab low and MetStab high are also binary classification tasks for predicting whether a compound has high or low metabolic stability. In addition, we also provide some general pseudo-code to illustrate how to integrate our proposed method into existing GTNs in Alg. 1. " }, { "figure_ref": [ "fig_6", "fig_5" ], "heading": "A.4 More Experimental Results", "publication_ref": [], "table_ref": [], "text": "Here, we conduct more experiments to demonstrate the feature distribution shift issue on different datasets (MolHIV, Estrogen-β, and MetStab low ) with two pre-trained GTNs (Graphormer and MAT).\nThe experimental results are shown in Fig. 8, 9, and 10. In addition, we depict the relationship between Jensen-Shannon divergence (which measures the degree of discrepancy in feature distribution between PEFT and full fine-tuning) and performance across various datasets and methods in Fig. 6. We also supplement more comparisons in terms of the training efficiency, the impact of different designs, and the effect of graph structure information for traditional PEFTs on more datasets. The results are shown in Fig. 7, Tab. 8, 9, where we could draw the consistent conclusion as before. " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [ "b53", "b61", "b39", "b14", "b0", "b15", "b2" ], "table_ref": [], "text": "A.1 More Detailed Preliminaries Transformer Transformer [54], as one of the most popular network architectures so far, has been widely employed in diverse domains, such as NLP, computer vision and graph. In a standard encoder of Transformer, Multi-Head self-Attention (MHA) and Feed-Forward Networks (FFN) are two core components. Given the input X ∈ R n×d , where n is the length of input sequence and d refers to the hidden size of representation, the query Q and key-value pairs K, V are first obtained by:\npass the self-attention operation:\nAfter that, all head outputs are concatenated by a linear projection transformation\n), then we can attain the final output of MHA:\nwhere\nAnother important module is FFN, which consists of two linear layers with a ReLU nonlinear activation function (where we still take X as the input for simplicity):\nwhere\nsome GTNs, such as Graphormer [62] and MAT [40]. In the following demonstration, for simplicity, we take X, X ∈ R d×d as the input, output of a certain module, respectively.\nAdapter Houlsby et al. [15] insert two compact modules (i.e., Adapter blocks in Fig. 3) into the encoders of Transformer. Specifically, an Adapter block is composed of the down-projection transformation W down , the up-projection transformation W up , the nonlinear activation function σ(•), and the skip connection:\nwhere W down ∈ R d×r , W up ∈ R r×d and r is the bottleneck size of Adapter, which satisfies the condition r d for reducing the number of learnable parameters. To introduce the graph structure information (S ∈ R n×n ) in Sec. 5.3, we modify Eq. ( 8) as follows:\nLoRA Based on the low intrinsic rank hypothesis [1], LoRA [16] reparametrizes the updating of pre-trained weight W by the low-rank decomposition, i.e., W + ∆W = W + W down W up . For the practice in Transformer, LoRA injects two low-rank modules into the query and value weights (W q , W v ) in a parallel connection manner. For the weight W * ∈ {W q , W v }, we can obtain:\nwhere s ≥ 1 is a scalar hyper-parameter, and X can be regarded as the new query or value. For an example of query Q = XW q , the updated query Q = Q + s • XW down W up , which has a similar updating formulation with Adapter. To introduce the graph structure information (S ∈ R n×n ) in Sec. 5.3, we modify Eq. ( 10) as follows:\nBitFit Ben-Zaken et al. [3] employ a straightforward strategy to expose knowledge of the pretrained models for downstream tasks via tuning the bias terms (b) of the model. To be specific, for the linear operation, the updated output is equal to:\nwhere W refers to the pre-trained weights of the linear layer, and b\n(including the parameters of LayerNorm layers). Note that only b is updated during fine-tuning. " } ]
It has become a popular paradigm to transfer the knowledge of large-scale pretrained models to various downstream tasks via fine-tuning the entire model parameters. However, with the growth of model scale and the rising number of downstream tasks, this paradigm inevitably meets the challenges in terms of computation consumption and memory footprint issues. Recently, Parameter-Efficient Fine-Tuning (PEFT) (e.g., Adapter, LoRA, BitFit) shows a promising paradigm to alleviate these concerns by updating only a portion of parameters. Despite these PEFTs having demonstrated satisfactory performance in natural language processing, it remains under-explored for the question of whether these techniques could be transferred to graph-based tasks with Graph Transformer Networks (GTNs). Therefore, in this paper, we fill this gap by providing extensive benchmarks with traditional PEFTs on a range of graph-based downstream tasks. Our empirical study shows that it is sub-optimal to directly transfer existing PEFTs to graph-based tasks due to the issue of feature distribution shift. To address this issue, we propose a novel structure-aware PEFT approach, named G-Adapter, which leverages graph convolution operation to introduce graph structure (e.g., graph adjacent matrix) as an inductive bias to guide the updating process. Besides, we propose Bregman proximal point optimization to further alleviate feature distribution shift by preventing the model from aggressive update. Extensive experiments demonstrate that G-Adapter obtains the state-of-the-art performance compared to the counterparts on nine graph benchmark datasets based on two pre-trained GTNs, and delivers tremendous memory footprint efficiency compared to the conventional paradigm.
G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning for Graph Transformer Networks
[ { "figure_caption": "On small-scale datasets.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An overview of existing popular PEFTs (Adapter, LoRA, and BitFit) and our proposed G-Adapter. In the left part, we demonstrate the insertion position of PEFT blocks in a standard encoder of Transformer.In the right part, we depict the architecture of each PEFT method. Here, each color represents an approach, where the grey blocks are frozen during the fine-tuning process. Our proposed G-Adapter is marked and demonstrated in purple, in which S indicates the introduced graph structure information (e.g., the graph adjacent matrix).", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of feature distribution between full fine-tuning and the original model parameters, where Jensen-Shannon divergence is 0.27%. It is expected that the feature distribution encoded by PEFT should be aligned with full fine-tuning as much as possible, as discussed in Sec. 3.1. However, the feature distribution of full fine-tuning is unavailable during the training process of PEFT. Interestingly, we observe that the feature distribution encoded by the original model parameters has a high similarity with full fine-tuning, as shown in Fig. 4. It indicates that full fine-tuning only slightly modulates the value of model parameters but does not change the ability of the model to encode features.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The comparison of training efficiency between PEFTs (Adapter, LoRA, BitFit, and our proposed G-Adapter) and full fine-tuning on diverse scale datasets.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 : 8 :68Figure 6: The relationship between Jensen-Shannon divergence and performance.", "figure_data": "", "figure_id": "fig_4", "figure_label": "68", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: More comparisons of training efficiency between PEFTs and full fine-tuning on more datasets. Note that the evaluation protocol for FreeSolv and ESOL is RMSE (the lower, the better), while the others are AUC (the higher, the better).", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Illustration of feature distribution shift on MolHIV with pre-trained Graphormer.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 10 :Algorithm 1101Figure 10: Illustration of feature distribution shift on MetStab low with pre-trained MAT.", "figure_data": "", "figure_id": "fig_7", "figure_label": "101", "figure_type": "figure" }, { "figure_caption": "Comparison of PEFTs and full fine-tuning on small-scale datasets. The results are averaged from six seeds, and the subscript is the standard deviation, where bold indicates the best results in PEFTs. * represents the mean ratio of trainable parameters over seven datasets. Estrogen-β MetStab low MetStab high Full Finetunig 100% 0.286 ±0.035 0.270 ±0.037 0.764 ±0.008 0.979 ±0.002 0.778 ±0.005 0.863 ±0.025 0.878 ±0.032 Adapter 2.52% 0.327 ±0.011 0.320 ±0.072 0.724 ±0.009 0.978 ±0.024 0.768 ±0.021 0.846 ±0.034 0.859 ±0.028 Hyperformer 2.43% 0.310 ±0.020 0.321 ±0.045 0.727 ±0.012 0.977 ±0.027 0.770 ±0.013 0.842 ±0.022 0.853 ±0.023 Compacter 1.56% 0.314 ±0.028 0.316 ±0.038 0.730 ±0.022 0.971 ±0.034 0.764 ±0.027 0.832 ±0.019 0.860 ±0.046 MAM 1.28% 0.302 ±0.019 0.292 ±0.022 0.743 ±0.014 0.980 ±0.011 0.776 ±0.022 0.851 ±0.023 0.872 ±0.054 LoRA 1.01% 0.309 ±0.032 0.284 ±0.054 0.726 ±0.012 0.979 ±0.007 0.781 ±0.039 0.839 ±0.022 0.878 ±0.027 BitFit 0.10% 0.321 ±0.048 0.314 ±0.031 0.739 ±0.005 0.977 ±0.019 0.770 ±0.035 0.848 ±0.031 0.805 ±0.045 G-Adapter (S 1 ) 0.71% 0.280 ±0.012 0.279 ±0.018 0.750 ±0.012 0.976 ±0.033 0.791 ±0.022 0.865 ±0.036 0.881 ±0.023 G-Adapter (S 2 ) 0.71% 0.282 ±0.014 0.286 ±0.022 0.751 ±0.009 0.981 ±0.017 0.788 ±0.031 0.870 ±0.013 0.874 ±0.025 G-Adapter (S 3 ) 0.71% 0.291 ±0.008 0.289 ±0.017 0.744 ±0.011 0.973 ±0.015 0.786 ±0.034 0.860 ±0.031 0.861 ±0.018 G-Adapter (S 4 ) 0.71% 0.298 ±0.011 0.282 ±0.019 0.747 ±0.006 0.975 ±0.011 0.775 ±0.024 0.858 ±0.025 0.869 ±0.037", "figure_data": "MethodRatio *RMSE (↓)AUC (↑)(γ)FreeSolvESOLBBBPEstrogen-α", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The comparison on two large-scale datasets MolHIV and MolPCBA.", "figure_data": "MethodMolHIVMolPCBARatio (γ)AUC (↑)Ratio (γ)AP (↑)Full Finetunig100%0.804±0.006100%0.272±0.013Adapter1.24%0.743±0.0104.69%0.235±0.009Hyperformer1.13%0.740±0.0124.37%0.246±0.012Compacter0.64%0.752±0.0233.42%0.230±0.023MAM0.57%0.758±0.0172.66%0.251±0.016LoRA0.34%0.763±0.0142.42%0.246±0.012BitFit0.16%0.709±0.0080.16%0.184±0.011G-Adapter (S1)0.24%0.790±0.0111.89%0.269±0.008G-Adapter (S2)0.24%0.788±0.0061.89%0.264±0.012G-Adapter (S3)0.24%0.772±0.0081.89%0.250±0.011G-Adapter (S4)0.24%0.781±0.0121.89%0.262±0.011", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The comparison of PEFTs and full fine-tuning in terms of inference speed (the millisecond per sample) and memory footprint across different bottleneck sizes.", "figure_data": "MethodMolPCBA (r=32)MolHIV (r=16)FreeSolv (r=8)Estrogen-α (r=4)Mem. (MB) Infer (ms) Mem. (MB) Infer (ms) Mem. (MB) Infer (ms) Mem. (MB) Infer (ms)Full Finetunig1850.811851.391610.421611.11Adapter5.00.992.41.461.10.460.71.15LoRA5.00.972.41.431.10.470.71.13BitFit0.30.810.31.390.20.420.21.11G-Adapter2.90.931.41.410.70.440.41.12", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The impact of insertion position and components on performance.", "figure_data": "MethodMolHIV MolPCBAG-Adapter0.7900.269G-Adapter (pre_mha)0.7520.246G-Adapter (post_mha)0.7470.232G-Adapter (pre_ffn)0.7810.258G-Adapter (post_ffn)0.7700.260G-Adapter (mha + ffn)0.7630.245G-Adapter (w/o. S)0.7280.214G-Adapter (w/o. pre_ln)0.7620.247G-Adapter (w/o. post_ln)0.7550.250G-Adapter (w/o. ln)0.7450.237G-Adapter (w/o. act_fn)0.7660.240G-Adapter (w/o. breg)0.7540.234", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The impact of graph structure information on traditional PEFTs.", "figure_data": "MethodMolHIV MolPCBA BBBP MetStablowAdapter0.7430.2350.7240.839Adapter + S0.7490.2420.7330.844LoRA0.7630.2460.7390.858LoRA + S0.7660.2520.7420.861", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Statistics for different datasets. \"# Train\", \"# Valid\", and \"# Test\" indicate the number of training, validation, and test sets, respectively. Here, \"Clf.\" and \"Reg.\" refer to the classification and regression tasks, respectively.", "figure_data": "DatasetsMolHIV MolPCBA FreeSolv ESOL BBBP Estrogen-α Estrogen-β MetStab low MetStab high# Train32,901350,3435139021,6311,9181,5681,7011,701# Valid4,11343,79364113204240196213213# Test4,11343,79365113204240197213213Task TypeClf.Clf.Reg.Reg.Clf.Clf.Clf.Clf.Clf.MetricAUCAPRMSERMSE AUCAUCAUCAUCAUCA.3 Detailed Experimental Configurations and ImplementationsFor a variety of methods, we perform a relatively fair hyper-parameter search in terms of learn-", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The detailed experimental configurations (batch size, learning rate, and bottleneck size) of various methods on a range of datasets, where Full-FT denotes full fine-tuning.", "figure_data": "MethodMolHIVMolPCBA FreeSolvESOLBBBPEstrogen-α Estrogen-β MetStab low MetStab highBatch Size / Learning RateFull-FT128 / 2e-5 128 / 2e-5 32 / 1e-5 32 / 1e-5 32 / 1e-532 / 1e-532 / 1e-532 / 1e-532 / 1e-5Adapter128 / 2e-3 128 / 2e-3 64 / 1e-3 64 / 2e-3 64 / 2e-332 / 1e-364 / 1e-332 / 2e-364 / 2e-3Hyperformer 128 / 2e-3 128 / 1e-3 64 / 2e-3 32 / 1e-3 32 / 2e-332 / 1e-332 / 2e-364 / 1e-332 / 1e-3Compacter128 / 1e-3 128 / 2e-3 32 / 1e-3 32 / 1e-3 32 / 2e-364 / 2e-332 / 2e-364 / 2e-332 / 2e-3MAM128 / 2e-3 128 / 1e-3 32 / 1e-3 64 / 1e-3 32 / 1e-364 / 2e-332 / 1e-332 / 2e-364 / 2e-3LoRA128 / 1e-3 128 / 2e-3 32 / 2e-3 32 / 1e-3 64 / 2e-364 / 2e-332 / 1e-332 / 1e-332 / 1e-3BitFit128 / 1e-3 128 / 1e-3 32 / 1e-3 32 / 1e-3 32 / 1e-332 / 1e-332 / 1e-332 / 1e-332 / 1e-3G-Adapter128 / 2e-3 128 / 1e-3 32 / 1e-3 64 / 2e-3 64 / 1e-332 / 2e-332 / 2e-332 / 2e-332 / 2e-3Bottleneck Size (r)Adapter16641681643212864Hyperformer848164168166448Compacter1648328164164832MAM83281632483232LoRA4321683284416G-Adapter4484164441664", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "The impact of insertion position and components on performance.MethodFreeSolv ESOL BBBP Estrogen-α Estrogen-β MetStab low MetStab high", "figure_data": "G-Adapter0.2800.279 0.7500.9760.7910.8650.881G-Adapter (pre_mha)0.3170.312 0.7390.9660.7720.8320.865G-Adapter (post_mha)0.3040.298 0.7330.9630.7870.8470.863G-Adapter (pre_ffn)0.2910.281 0.7410.9730.7890.8730.875G-Adapter (post_ffn)0.2890.284 0.7390.9740.7880.8690.872G-Adapter (mha + ffn)0.2940.296 0.7350.9720.7770.8640.869G-Adapter (w/o. S)0.3510.335 0.7060.9500.7370.8230.832G-Adapter (w/o. pre_ln)0.3210.319 0.7110.9620.7450.8520.841G-Adapter (w/o. post_ln)0.3140.323 0.7130.9550.7510.8460.845G-Adapter (w/o. ln)0.3430.328 0.7050.9560.7260.8330.839G-Adapter (w/o. act_fn)0.3310.301 0.7130.9620.7550.8440.855G-Adapter (w/o. breg)0.3460.325 0.7040.9510.7430.8490.834Performance0.5 0.6 0.7 0.8 0.9Full Fine-tuning Adapter LoRA BitFit G-AdapterPerformance0.4 0.5 0.6 0.7 0.8 0.9Full Fine-tuning Adapter LoRA BitFit G-AdapterPerformance0.68 0.70 0.72 0.74 0.76Full Fine-tuning Adapter LoRA BitFit G-Adapter0.40.30.660.30246810 12 14 16 18 20 # Epochs0.202468# Epochs 10 12 14 16 18 200.640246810 12 14 16 18 20 # Epochs(a) FreeSolv(b) ESOL(c) BBBP0.900.800.850.85Performance0.76 0.78Performance0.75 0.80Performance0.80 0.750.72 0.740246810 12 14 16 18 20 # Epochs Full Fine-tuning Adapter LoRA BitFit G-Adapter0.65 0.700246810 12 14 16 18 20 # Epochs Full Fine-tuning Adapter LoRA BitFit G-Adapter0.65 0.7002468# Epochs 10 12 14 16 18 20 Full Fine-tuning Adapter LoRA BitFit G-Adapter(d) Estrogen-β(e) MetStab low(f) MetStab high", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Anchun Gui; Jinqiang Ye; Han Xiao; G-Adapter Relu; G-Adapter Adapter; Relu Lora
[ { "authors": "Armen Aghajanyan; Sonal Gupta; Luke Zettlemoyer", "journal": "", "ref_id": "b0", "title": "Intrinsic Dimensionality Explains the Effectiveness of Language Model Fine-Tuning", "year": "2021" }, { "authors": "Jimmy Lei Ba; Jamie Ryan Kiros; Geoffrey E Hinton", "journal": "", "ref_id": "b1", "title": "Layer Normalization", "year": "2016" }, { "authors": "Elad Ben-Zaken; Yoav Goldberg; Shauli Ravfogel", "journal": "", "ref_id": "b2", "title": "BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models", "year": "2022" }, { "authors": "Deyu Bo; Chuan Shi; Lele Wang; Renjie Liao", "journal": "", "ref_id": "b3", "title": "Specformer: Spectral Graph Neural Networks Meet Transformers", "year": "2023" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b4", "title": "Language Models Are Few-Shot Learners", "year": "2020" }, { "authors": "Dexiong Chen; O' Leslie; Karsten Bray; Borgwardt", "journal": "", "ref_id": "b5", "title": "Structure-Aware Transformer for Graph Representation Learning", "year": "2022" }, { "authors": "Jinsong Chen; Kaiyuan Gao; Gaichao Li; Kun He", "journal": "", "ref_id": "b6", "title": "NAGphormer: A Tokenized Graph Transformer for Node Classification in Large Graphs", "year": "2023" }, { "authors": "Jiaao Chen; Aston Zhang; Xingjian Shi; Mu Li; Alex Smola; Diyi Yang", "journal": "", "ref_id": "b7", "title": "Parameter-Efficient Fine-Tuning Design Spaces", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b8", "title": "BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding", "year": "2019" }, { "authors": "Cameron Diao; Ricky Loynd", "journal": "", "ref_id": "b9", "title": "Relational Attention: Generalizing Transformers for Graph-Structured Tasks", "year": "2023" }, { "authors": "Ning Ding; Yujia Qin; Guang Yang; Fuchao Wei; Zonghan Yang; Yusheng Su; Shengding Hu; Yulin Chen; Chi-Min Chan; Weize Chen; Jing Yi; Weilin Zhao; Xiaozhi Wang; Zhiyuan Liu; Hai-Tao Zheng; Jianfei Chen; Yang Liu; Jie Tang; Juanzi Li; Maosong Sun", "journal": "", "ref_id": "b10", "title": "Delta Tuning: A Comprehensive Study of Parameter Efficient Methods for Pre-trained Language Models", "year": "2022" }, { "authors": "Chin-Lun Fu; Zih-Ching Chen; Yun-Ru Lee; Hung-Yi Lee", "journal": "", "ref_id": "b11", "title": "AdapterBias: Parameterefficient Token-dependent Representation Shift for Adapters in NLP Tasks", "year": "2022" }, { "authors": "A Gaulton; L J Bellis; A P Bento; J Chambers; M Davies; A Hersey; Y Light; S Mcglinchey; D Michalovich; B Al-Lazikani; J P Overington", "journal": "Nucleic Acids Research", "ref_id": "b12", "title": "ChEMBL: A Large-Scale Bioactivity Database for Drug Discovery", "year": "2012" }, { "authors": "Junxian He; Chunting Zhou; Xuezhe Ma; Taylor Berg-Kirkpatrick; Graham Neubig", "journal": "", "ref_id": "b13", "title": "Towards a Unified View of Parameter-Efficient Transfer Learning", "year": "2022" }, { "authors": "Neil Houlsby; Andrei Giurgiu; Stanislaw Jastrzebski; Bruna Morrone; Quentin De Laroussilhe; Andrea Gesmundo; Mona Attariyan; Sylvain Gelly", "journal": "", "ref_id": "b14", "title": "Parameter-Efficient Transfer Learning for NLP", "year": "2019" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b15", "title": "LoRA: Low-Rank Adaptation of Large Language Models", "year": "2022" }, { "authors": "Shengding Hu; Zhen Zhang; Ning Ding; Yadao Wang; Yasheng Wang; Zhiyuan Liu; Maosong Sun", "journal": "", "ref_id": "b16", "title": "Sparse Structure Search for Delta Tuning", "year": "2022" }, { "authors": "Weihua Hu; Matthias Fey; Hongyu Ren; Maho Nakata; Yuxiao Dong; Jure Leskovec", "journal": "", "ref_id": "b17", "title": "OGB-LSC: A Large-Scale Challenge for Machine Learning on Graphs", "year": "2021" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "", "ref_id": "b18", "title": "Open Graph Benchmark: Datasets for Machine Learning on Graphs", "year": "2021" }, { "authors": "Weihua Hu; * ; Bowen Liu; * ; Joseph Gomes; Marinka Zitnik; Percy Liang; Vijay Pande; Jure Leskovec", "journal": "", "ref_id": "b19", "title": "Strategies for Pre-training Graph Neural Networks", "year": "2020" }, { "authors": "Ziniu Hu; Yuxiao Dong; Kuansan Wang; Kai-Wei Chang; Yizhou Sun", "journal": "", "ref_id": "b20", "title": "GPT-GNN: Generative Pre-Training of Graph Neural Networks", "year": "2020" }, { "authors": "Haoming Jiang; Pengcheng He; Weizhu Chen; Xiaodong Liu; Jianfeng Gao; Tuo Zhao", "journal": "", "ref_id": "b21", "title": "SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization", "year": "2020" }, { "authors": "Zeyinzi Jiang; Chaojie Mao; Ziyuan Huang; Yiliang Lv; Deli Zhao; Jingren Zhou", "journal": "", "ref_id": "b22", "title": "Rethinking Efficient Tuning Methods from a Unified Perspective", "year": "2023" }, { "authors": "Shibo Jie; Zhi-Hong Deng", "journal": "", "ref_id": "b23", "title": "FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer", "year": "2022" }, { "authors": "Jin Bowen; Yu Zhang; Yu Meng; Jiawei Han", "journal": "", "ref_id": "b24", "title": "Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks", "year": "2023" }, { "authors": "Rabeeh Karimi Mahabadi; Sebastian Ruder; Mostafa Dehghani; James Henderson", "journal": "", "ref_id": "b25", "title": "Parameter-Efficient Multi-task Fine-tuning for Transformers via Shared Hypernetworks", "year": "2021" }, { "authors": "Ling Min; Serena Khoo; Hai Leong Chieu; Zhong Qian; Jing Jiang", "journal": "", "ref_id": "b26", "title": "Interpretable Rumor Detection in Microblogs by Attending to User Interactions", "year": "2020" }, { "authors": "Jinwoo Kim; Dat Tien Nguyen; Seonwoo Min; Sungjun Cho; Moontae Lee; Honglak Lee; Seunghoon Hong", "journal": "", "ref_id": "b27", "title": "Pure Transformers Are Powerful Graph Learners", "year": "2022" }, { "authors": "Thomas N Kipf; Max Welling", "journal": "", "ref_id": "b28", "title": "Semi-Supervised Classification with Graph Convolutional Networks", "year": "2017" }, { "authors": "Devin Kreuzer; Dominique Beaini; William L Hamilton; Vincent Létourneau; Prudencio Tossou", "journal": "", "ref_id": "b29", "title": "Rethinking Graph Transformers with Spectral Attention", "year": "2022" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "", "ref_id": "b30", "title": "The Power of Scale for Parameter-Efficient Prompt Tuning", "year": "2021" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b31", "title": "BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension", "year": "2020" }, { "authors": "Lisa Xiang; Percy Li; Liang", "journal": "", "ref_id": "b32", "title": "Prefix-Tuning: Optimizing Continuous Prompts for Generation", "year": "2021" }, { "authors": "Dongze Lian; Zhou Daquan; Jiashi Feng; Xinchao Wang", "journal": "", "ref_id": "b33", "title": "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning", "year": "2022" }, { "authors": "Kevin Lin; Lijuan Wang; Zicheng Liu", "journal": "", "ref_id": "b34", "title": "Mesh Graphormer", "year": "2021" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Weng Tam; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b35", "title": "P-Tuning: Prompt Tuning Can Be Comparable to Fine-tuning Across Scales and Tasks", "year": "2022" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b36", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach", "year": "2019" }, { "authors": "Rabeeh Karimi Mahabadi; James Henderson; Sebastian Ruder", "journal": "", "ref_id": "b37", "title": "Compacter: Efficient Low-Rank Hypercomplex Adapter Layers", "year": "2021" }, { "authors": "Yuning Mao; Lambert Mathias; Rui Hou; Amjad Almahairi; Hao Ma; Jiawei Han; Wen-Tau Yih; Madian Khabsa", "journal": "", "ref_id": "b38", "title": "UniPELT: A Unified Framework for Parameter-Efficient Language Model Tuning", "year": "2022" }, { "authors": "Łukasz Maziarka; Tomasz Danel; Sławomir Mucha; Krzysztof Rataj; Jacek Tabor; Stanisław Jastrzębski", "journal": "", "ref_id": "b39", "title": "Molecule Attention Transformer", "year": "2020" }, { "authors": "Grégoire Mialon; Dexiong Chen; Margot Selosse; Julien Mairal", "journal": "", "ref_id": "b40", "title": "GraphiT: Encoding Graph Structure in Transformers", "year": "2021" }, { "authors": "Erxue Min; Runfa Chen; Yatao Bian; Tingyang Xu; Kangfei Zhao; Wenbing Huang; Peilin Zhao; Junzhou Huang; Sophia Ananiadou; Yu Rong", "journal": "", "ref_id": "b41", "title": "Transformer for Graphs: An Overview from Architecture Perspective", "year": "2022" }, { "authors": "Jonas Pfeiffer; Aishwarya Kamath; Andreas Rücklé; Kyunghyun Cho; Iryna Gurevych", "journal": "", "ref_id": "b42", "title": "AdapterFusion: Non-Destructive Task Composition for Transfer Learning", "year": "2021" }, { "authors": "Jonas Pfeiffer; Andreas Rücklé; Clifton Poth; Aishwarya Kamath; Ivan Vulić; Sebastian Ruder; Kyunghyun Cho; Iryna Gurevych", "journal": "", "ref_id": "b43", "title": "AdapterHub: A Framework for Adapting Transformers", "year": "2020" }, { "authors": "Sabina Podlewska; Rafał Kafel", "journal": "International Journal of Molecular Sciences", "ref_id": "b44", "title": "MetStabOn-Online Platform for Metabolic Stability Predictions", "year": "2018" }, { "authors": "Alec Radford; Karthik Narasimhan; Tim Salimans; Ilya Sutskever", "journal": "", "ref_id": "b45", "title": "Improving Language Understanding by Generative Pre-Training", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b46", "title": "Language Models Are Unsupervised Multitask Learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b47", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer", "year": "2020" }, { "authors": "Yu Rong; Yatao Bian; Tingyang Xu; Weiyang Xie; Wei Ying; Wenbing Huang; Junzhou Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Self-Supervised Graph Transformer on Large-Scale Molecular Data", "year": "2020" }, { "authors": "Andreas Rücklé; Gregor Geigle; Max Glockner; Tilman Beck; Jonas Pfeiffer; Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b49", "title": "AdapterDrop: On the Efficiency of Adapters in Transformers", "year": "2021" }, { "authors": "Teague Sterling; John J Irwin", "journal": "Journal of Chemical Information and Modeling", "ref_id": "b50", "title": "ZINC 15 -Ligand Discovery for Everyone", "year": "2015" }, { "authors": "Asa ; Cooper Stickland; Iain Murray", "journal": "", "ref_id": "b51", "title": "BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning", "year": "2019" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar; Aurelien Rodriguez; Armand Joulin; Edouard Grave; Guillaume Lample", "journal": "", "ref_id": "b52", "title": "LLaMA: Open and Efficient Foundation Language Models", "year": "2023" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b53", "title": "Attention Is All You Need", "year": "2017" }, { "authors": "Tu Vu; Brian Lester; Noah Constant; Rami Al-Rfou; ' ; Daniel Cer", "journal": "", "ref_id": "b54", "title": "SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer", "year": "2022" }, { "authors": "Ruize Wang; Duyu Tang; Nan Duan; Zhongyu Wei; Xuanjing Huang; Jianshu Ji; Guihong Cao; Daxin Jiang; Ming Zhou", "journal": "", "ref_id": "b55", "title": "K-Adapter: Infusing Knowledge into Pre-Trained Models with Adapters", "year": "2021" }, { "authors": "Yaqing Wang; Subhabrata Mukherjee; Xiaodong Liu; Jing Gao; Ahmed Hassan Awadallah; Jianfeng Gao", "journal": "", "ref_id": "b56", "title": "AdaMix: Mixture-of-Adapter for Parameter-efficient Tuning of Large Language Models", "year": "2022" }, { "authors": "Zhen Wang; Rameswar Panda; Leonid Karlinsky; Rogerio Feris; Huan Sun; Yoon Kim", "journal": "", "ref_id": "b57", "title": "Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning", "year": "2023" }, { "authors": "Zhenqin Wu; Bharath Ramsundar; Evan N Feinberg; Joseph Gomes; Caleb Geniesse; Aneesh S Pappu; Karl Leswing; Vijay Pande", "journal": "Chemical Science", "ref_id": "b58", "title": "MoleculeNet: A Benchmark for Molecular Machine Learning", "year": "2018" }, { "authors": "Jun Xia; Yanqiao Zhu; Yuanqi Du; Stan Z Li", "journal": "", "ref_id": "b59", "title": "A Survey of Pretraining on Graphs: Taxonomy, Methods, and Applications", "year": "2022" }, { "authors": "Keyulu Xu; * ; Weihua Hu; * ; Jure Leskovec; Stefanie Jegelka", "journal": "", "ref_id": "b60", "title": "How Powerful Are Graph Neural Networks?", "year": "2019" }, { "authors": "Chengxuan Ying; Tianle Cai; Shengjie Luo; Shuxin Zheng; Guolin Ke; Di He; Yanming Shen; Tie-Yan Liu", "journal": "", "ref_id": "b61", "title": "Do Transformers Really Perform Badly for Graph Representation?", "year": "2021" }, { "authors": "X B Bruce; Jianlong Yu; Lingbo Chang; Qi Liu; Chang Tian; Chen Wen", "journal": "", "ref_id": "b62", "title": "Towards a Unified View on Visual Parameter-Efficient Transfer Learning", "year": "2022" }, { "authors": "Weihao Yuan; Xiaodong Gu; Heng Li; Zilong Dong; Siyu Zhu", "journal": "", "ref_id": "b63", "title": "Monocular Scene Reconstruction with 3D SDF Transformers", "year": "2023" }, { "authors": "Jiawei Zhang; Haopeng Zhang; Congying Xia; Li Sun", "journal": "", "ref_id": "b64", "title": "Graph-Bert: Only Attention Is Needed for Learning Graph Representations", "year": "2020" }, { "authors": "Qingru Zhang; Minshuo Chen; Alexander Bukharin; Pengcheng He; Yu Cheng; Weizhu Chen; Tuo Zhao", "journal": "", "ref_id": "b65", "title": "Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning", "year": "2023" }, { "authors": "Zaixi Zhang; Qi Liu; Hao Wang; Chengqiang Lu; Chee-Kong Lee", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b66", "title": "Motif-Based Graph Self-Supervised Learning for Molecular Property Prediction", "year": "2021" }, { "authors": "Jianan Zhao; Chaozhuo Li; Qianlong Wen; Yiqi Wang; Yuming Liu; Hao Sun; Xing Xie; Yanfang Ye", "journal": "", "ref_id": "b67", "title": "Gophormer: Ego-Graph Transformer for Node Classification", "year": "2021" }, { "authors": "Han Zhou; Xingchen Wan; Ivan Vulić; Anna Korhonen", "journal": "", "ref_id": "b68", "title": "AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning", "year": "2023" } ]
[ { "formula_coordinates": [ 5, 220.06, 512.59, 283.94, 8.96 ], "formula_id": "formula_0", "formula_text": "X = GraphConv(S, X; W ) = σ(SXW )(1)" }, { "formula_coordinates": [ 5, 198.91, 668.53, 305.09, 9.65 ], "formula_id": "formula_1", "formula_text": "X = LN(X), X = LN X + σ(SX W down W up )(2)" }, { "formula_coordinates": [ 6, 344.6, 143.36, 71.7, 12.53 ], "formula_id": "formula_2", "formula_text": "S 2 = D-1 2 Ã D-1 2" }, { "formula_coordinates": [ 6, 108, 192.75, 396, 23.44 ], "formula_id": "formula_3", "formula_text": "S 4 = α • D-1 2 Ã D-1 2 + β • [dis(v i , v j )] n×n ," }, { "formula_coordinates": [ 6, 187.49, 443.87, 316.51, 14.66 ], "formula_id": "formula_4", "formula_text": "θ t+1 = arg min θ (1 -µ) • L vanilla (θ) + µ • L bregman (θ, θ t )(3)" }, { "formula_coordinates": [ 6, 211.94, 503.92, 292.06, 9.65 ], "formula_id": "formula_5", "formula_text": "L bregman (θ, θ t ) = E x∼D f (x; θ), f (x; θ t )(4)" }, { "formula_coordinates": [ 16, 269.02, 209.36, 234.98, 30.32 ], "formula_id": "formula_6", "formula_text": "W = n i=1 A i ⊗ B i(14)" }, { "formula_coordinates": [ 16, 134.04, 244.61, 99.43, 12.54 ], "formula_id": "formula_7", "formula_text": "A i ∈ R n×n , B i ∈ R d n × r" } ]
2023-05-30
[ { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "classification. This is flexible and can be easily transferred to other vision tasks such as semantic segmentation for natural images, and medical image segmentation. (2) An adaptive mechanism is designed to adjust the decision criterion automatically (no-interaction setting) for judging the quality of the pseudo labels based on their confidence.\n(3) We promote the investigation of multiple network outputs in terms of an information theory aspect -entropy -to weight confidence levels of pseudo labels from each network. Then, we use this information to optimise the unsupervised training process in semi-supervised land cover classification." }, { "figure_ref": [], "heading": "II. BACKGROUND & RELATED WORK", "publication_ref": [ "b4", "b5", "b6", "b7", "b3", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "In general terms, semi-supervised learning is defined as an approach that lies between supervised and unsupervised learning. During the supervised learning step, various widely applied semantic segmentation methods can be used such as PSPNet [5], UNet [6], SegNet [7], DeepLabV3+ [8]. In current semi-supervised learning research [4], [9], [10] within the field of computer vision, the commonly used network is DeepLabV3+ with a pre-trained backbone e.g. ResNet 50. However, in remote sensing, no specific architecture dominates. Especially when the amount of labelled data is small, due to the exploitation of low-and high-level features via efficient skip-connections, a simpler method like U-net shows competitive (even better) results compared to other classic semantic segmentation networks [11].\nConsistency regularization [12] describes a class of unsupervised learning algorithms as a part of semi-supervised learning, that are easy to implement and widely compatible with supervised learning segmentation networks. The key idea of consistency regularization is to force perturbed models (or perturbed inputs) to have consistent outputs for unlabelled inputs. Based on this concept, cross pseudo supervision (CPS) [13] and CPS-to-n-networks (n-CPS) [14] show considerable success, which yields state-of-the-art on semantic segmentation benchmark datasets, e.g. Cityscapes. However, CPS and n-CPS use pseudo labels to supervise the network regardless of their quality. In addition, perturbed models in those methods have the same structure which causes these networks to tend to output similar predictions. In order to increase the diversity of pseudo labels in parallel, using different segmentation networks stands out to be an efficient and accurate alternative [15]. In previous land cover mapping work [16], we have shown that a dual-encoder structure is helpful for leveraging minimal supervision, yet the performance drops significantly when the amount of training data is reduced." }, { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "III. METHOD", "publication_ref": [ "b4", "b5", "b6", "b16", "b12", "b14" ], "table_ref": [], "text": "The proposed semi-supervised learning approach, illustrated in Figure 1, uses both labelled and unlabelled data as input into the three different networks (PSPNet [5], UNet [6], SegNet [7]) in each training iteration. The labelled data is used in a regular supervised learning manner to train these models by using the cross-entropy loss function. In addition, unlabelled data is used to generate pseudo labels, which are exploited to inform each network. The outputs from three networks are added linearly after a softmax layer to generate a comprehensive probability distribution of the classes for all pixels of the input image. In this case, if the probability distributions among the three networks are high-peak unimodal whilst the classifications corresponding to each peak are consistent, the operation of linear addition will keep this unimodal distribution (low uncertainty). Otherwise, e.g. the distributions are not unimodal or they are diverse, the combined prediction will not have a distinct strong peak (high uncertainty). Considering the fact that the information entropy is a measure of uncertainty, we calculated the entropy of the classification distribution based on the combined prediction to investigate the quality of predicted pseudo labels. Furthermore, the proposed confidence-guided cross-entropy loss function is designed to limit the negative contribution of the pseudo labels with high entropy (high uncertainty) to the network parameter optimisation. As shown in Figure 2, the proposed confidence-guided cross-entropy loss module (CGCE) is used to calculate the unsupervised loss. The aim of this loss is to make use of the high-reliable confidence of predictions to re-weight the standard cross entropy loss at the pixel level based on their entropy among classes. The mean value of entropy is regarded as a threshold to decide on the reliability of the estimated confidence. The unreliable confidence values are assumed to provide limited useful information for re-weighting the loss. Thus, the loss of these pixels is not reweighted and just used in the standard cross-entropy loss function. However, the confidence of predictions above the mean value is regarded as reliable and is used for entropy calculation to re-weight the loss. Instead of directly using the probability of predictions to weight the loss (focal loss [17]), entropy is used to form the weight, which represents the confidence of pseudo labels generated from multiple distinct networks. Specifically, the weight w is defined as follows w = max(I)-I max(I)-min(I) + 1, where I refers to the entropy of class probability for each pixel. Thus, since w ≥ 1 the effect of these pixels is increased during training compared to the pixels with unreliable confidence values. Then, the weight, w, is added as a factor to standard cross entropy loss ℓ(x, y) to favour the high-quality pseudo labels.\nℓ(x, y) = N n=1 -w log exp(xn,y n ) C c=1 exp(xn,c) N ,(1)\nwhere x represents the input, y denotes the target class, w signifies the weight, C indicates the number of classes, and N is the batch size. Inspired by [13] and [15], the unsupervised loss is acquired by cross-supervision between predictions from different networks. Finally, the total loss L is set to a linear combination of supervised loss L s and unsupervised loss L u as\nL = L s + λL u , (2\n)\nwhere λ is the trade-off weight between supervised and unsupervised losses. It is worth noting that the unsupervised loss L u is the linear addition of 6 losses that result from 3-model cross-supervision, since the pseudo label from each network can supervise the other two networks and leads to two losses, as shown in Figure 2." }, { "figure_ref": [ "fig_2" ], "heading": "IV. EXPERIMENTS AND RESULTS", "publication_ref": [ "b17", "b18", "b12", "b5" ], "table_ref": [ "tab_0" ], "text": "We evaluated our method using the ISPRS Potsdam dataset [18], which consists of 38 multi-source 6000 × 6000 patches, including infrared, red, green, and blue orthorectified optical images, and corresponding digital surface models (DSM). We divided these data tiles into 512 × 512 patches, resulting in 3456 training samples and 2016 test samples. Both true orthophoto and DSM modalities have a 5 cm ground sampling distance. The dataset contains six manually classified land cover classes: impervious surfaces, buildings, low vegetation, trees, cars, and clutter/background.\nIn order to compare the proposed method -CGSSL -we utilised two classic semi-supervised models of Mean Teacher [19] and CPS [13]. The quantity of labelled data used in the aforementioned semi-supervised learning approaches is only half (1728 samples) of the whole training split of the Potsdam dataset. We remove the labels of the remaining half and just used the images in the unsupervised part. We also provide the performance of UNet [6] using fully supervised learning for both the whole and half of labelled data which are named U-Net1 and U-Net2 in the sequel, respectively. The same test set is used to evaluate all models. Thus, when applying the proposed method in real-world scenarios, only a subset of the images would need labelling, and the remainder would be left unlabelled to reduce manual labour. Our experiments were implemented by Pytorch. We used a mini-batch SGD optimizer that adopted a polynomial learning rate policy. All the experiments were performed on an NVIDIA A100-sxm in a GW4 Isambard. We thoroughly evaluated all models using class-related performance metrics, including accuracy, precision, recall, mean intersection over union (mIoU), and F 1 -score. As shown in Table I, CGSSL shows the best performance in terms of all performance metrics. In particular, CGSSL improves recall significantly due to the great reduction of false negatives in prediction. Even though CGSSL only uses half of the labelled data, its performance is even better than UNet1 which is trained with the whole dataset. Figure 3 shows an example of predictions for all methods where CGSSL is mostly close to the ground truth." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduced an innovative semi-supervised learning approach for land cover classification that utilizes a confidence-guided cross-entropy loss. In particular, an adaptive loss was provided for semi-supervised learning to exploit high-quality pseudo labels and limit the effect of low-quality pseudo labels with an information theory perspective. Our approach is also flexible and can be transferred to various other semi-supervised learning tasks. The proposed method shows considerable performance for land cover classification, and benefits from unlabeled data. Meanwhile, since three networks are required to increase the diversity of pseudo labels in training processing, one of the drawbacks of this method is the increased computational requirement, which means that it might not be appropriate for edge computing devices in practical applications. Thus, our future work aims to further develop computationally cheaper segmentation architectures for semi-supervised learning." } ]
Semi-supervised learning has been well developed to help reduce the cost of manual labelling by exploiting a large quantity of unlabelled data. Especially in the application of land cover classification, pixel-level manual labelling in large-scale imagery is labour-intensive, time-consuming and expensive. However, existing semi-supervised learning methods pay limited attention to the quality of pseudo-labels during training even though the quality of training data is one of the critical factors determining network performance. In order to fill this gap, we develop a confidenceguided semi-supervised learning (CGSSL) approach to make use of high-confidence pseudo labels and reduce the negative effect of low-confidence ones for land cover classification. Meanwhile, the proposed semi-supervised learning approach uses multiple network architectures to increase the diversity of pseudo labels. The proposed semisupervised learning approach significantly improves the performance of land cover classification compared to the classic semi-supervised learning methods and even outperforms fully supervised learning with a complete set of labelled imagery of the benchmark Potsdam land cover dataset.
Confidence-Guided Semi-supervised Learning in Land Cover Classification
[ { "figure_caption": "Fig. 1 .1Fig. 1. Overall framework of the confidence guided semi-supervised learning (CGSSL) approach", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. The details of the Confidence-Guided Cross Entropy (CGCE) module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Visual Results of each method on Potsdam Dataset. Values in parentheses refer to percentage accuracy. # U-Net1 was trained with the whole 3456 labelled samples. * U-Net2 was trained with 1728 labelled samples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "COMPARISON OF DIFFERENT METHODS FOR POTSDAM DATASET.", "figure_data": "ModelTypeAccuracyPrecisionRecallmIoUF 1 -scoreU-Net1 † [6] U-Net2 * [6]Supervised Supervised85.36% 84.26%76.75% 76.45%81.23% 79.32%67.59% 66.49%78.92% 77.86%Mean Teacher [19] Semi-Supervised84.58%78.52%80.88%68.24%79.68%CPS [13]Semi-Supervised85.30%77.94%80.75%68.38%79.32%CGSSL (ours)Semi-Supervised86.59%79.06%83.54%70.17%81.24%† U-Net1 was trained with the whole 3456 labelled samples.", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" } ]
Wanli Ma; Oktay Karakus; Paul L Rosin
[ { "authors": "A Vali", "journal": "Remote Sensing", "ref_id": "b0", "title": "Deep learning for land use and land cover classification based on hyperspectral and multispectral earth observation data: A review", "year": "2020" }, { "authors": "J.-X Wang", "journal": "IEEE Geoscience and Remote Sensing Letters", "ref_id": "b1", "title": "Semi-supervised semantic segmentation of remote sensing images with iterative contrastive network", "year": "2022" }, { "authors": "B Zhang", "journal": "NeurIPS", "ref_id": "b2", "title": "FlexMatch: Boosting semi-supervised learning with curriculum pseudo labeling", "year": "2021" }, { "authors": "H Hu", "journal": "NeurIPS", "ref_id": "b3", "title": "Semi-supervised semantic segmentation via adaptive equalization learning", "year": "2021" }, { "authors": "H Zhao", "journal": "", "ref_id": "b4", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "O Ronneberger", "journal": "Springer", "ref_id": "b5", "title": "U-Net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "V Badrinarayanan", "journal": "TPAMI", "ref_id": "b6", "title": "SegNet: A deep convolutional encoder-decoder architecture for image segmentation", "year": "2017" }, { "authors": "L.-C Chen", "journal": "", "ref_id": "b7", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "Y Wang", "journal": "", "ref_id": "b8", "title": "Semi-supervised semantic segmentation using unreliable pseudo-labels", "year": "2022" }, { "authors": "H Xu", "journal": "NeurIPS", "ref_id": "b9", "title": "Semi-supervised semantic segmentation with prototype-based consistency regularization", "year": "2022" }, { "authors": "Y Zheng", "journal": "Remote Sensing", "ref_id": "b10", "title": "Semi-supervised adversarial semantic segmentation network using transformer and multiscale convolution for highresolution remote sensing imagery", "year": "2022" }, { "authors": "G French", "journal": "", "ref_id": "b11", "title": "Semi-supervised semantic segmentation needs strong, varied perturbations", "year": "2019" }, { "authors": "X Chen", "journal": "", "ref_id": "b12", "title": "Semi-supervised semantic segmentation with cross pseudo supervision", "year": "2021" }, { "authors": "D Filipiak", "journal": "", "ref_id": "b13", "title": "n-CPS: Generalising cross pseudo supervision to n networks for semi-supervised semantic segmentation", "year": "2021" }, { "authors": "X Luo", "journal": "", "ref_id": "b14", "title": "Semi-supervised medical image segmentation via cross teaching between CNN and transformer", "year": "2022" }, { "authors": "W Ma", "journal": "Remote Sensing", "ref_id": "b15", "title": "AMM-FuseNet: attention-based multi-modal image fusion network for land cover mapping", "year": "2022" }, { "authors": "T.-Y Lin", "journal": "", "ref_id": "b16", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "F Rottensteiner", "journal": "ISPRS Annals", "ref_id": "b17", "title": "The ISPRS benchmark on urban object classification and 3D building reconstruction", "year": "2012" }, { "authors": "A Tarvainen", "journal": "", "ref_id": "b18", "title": "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results", "year": "2017" } ]
[ { "formula_coordinates": [ 3, 219.06, 638.96, 343.97, 31.71 ], "formula_id": "formula_0", "formula_text": "ℓ(x, y) = N n=1 -w log exp(xn,y n ) C c=1 exp(xn,c) N ,(1)" }, { "formula_coordinates": [ 4, 271.17, 214.34, 287.61, 10.63 ], "formula_id": "formula_1", "formula_text": "L = L s + λL u , (2" }, { "formula_coordinates": [ 4, 558.78, 214.66, 4.26, 9.5 ], "formula_id": "formula_2", "formula_text": ")" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b48", "b15", "b48", "b15", "b29", "b48", "b23" ], "table_ref": [], "text": "As multi-agent reinforcement learning (MARL) systems are increasingly deployed throughout society, it is imperative yet challenging for users to understand the emergent behaviors of MARL agents in complex environments. Recent work [Boggess et al., 2022] proposes methods for generating policy summarization to explain agents' behaviors under a given MARL policy, as well as language explanations to answer user queries about agents' decisions such as \"Why don't [agents] do [actions] in [states]?\" However, existing methods cannot handle temporal queries involving a sequence of MARL agents' decisions, for example, \"Why don't [agents] complete [task 1], followed by [task 2], and eventually [task 3]?\" Explanations to answer such a temporal user query can help reconcile discrepancies between the actual and anticipated agent behaviors.\nRecently, there has been increasing interest in generating policy-level contrastive explanations for RL in the single-agent setting. [Sreedharan et al., 2022] considers a problem setting where the agent comes up with a plan to achieve a certain goal, and the user responds by raising a foil (represented as a sequence of agent states and actions). To show why the agent's plan is preferred over the foil (e.g., the foil leads to an invalid state), explanations are generated by finding missing preconditions of the failing foil action on a symbolic model through sample-based trials. [Finkelstein et al., 2022] considers a similar problem setting, where the user queries about an alternative policy specifying actions that the agent should take in certain states. Explanations are defined as a sequence of Markov decision process (MDP) transforms, such that the RL agent's optimal policy (i.e., seeking to maximize its accumulated reward) in the transformed environment aligns with the user queried policy.\nThere are many challenges and limitations when applying these approaches in multi-agent environments. First, we need a better representation of user queries. Asking the user to provide concrete information about agents' joint states and joint actions, which grow exponentially with the increasing number of agents, is tedious, if not impractical. Further, these approaches have limited scalability in multi-agent environments due to computational complexity. [Sreedharan et al., 2022] requires a large number of samples generated via a random walk to find missing preconditions. [Finkelstein et al., 2022] computes a sequence of MDP transforms (e.g., mapping the entire state/action space) and retrains the agent policy in each transformed MDP. Moreover, the generated explanations may not capture agent cooperation requirements that are essential for understanding multi-agent behaviors.\nWe address these challenges by developing an approach to generate policy-level contrastive explanations for MARL. Our proposed approach takes the input of a temporal user query specifying which tasks should be completed by which agents in what order. Any unspecified tasks are allowed to be completed by the agents at any point in time. The user query is then encoded as a PCTL * logic formula, which is checked against a multi-agent Markov decision process (MMDP) representing an abstraction of a given MARL policy via probabilistic model checking [Kwiatkowska et al., 2017]. If the MMDP satisfies the PCTL * formula, then the user query is feasible under the given policy (i.e., there exists at least one policy execution that conforms with the user query). Otherwise, our approach deploys a guided rollout procedure to sample more of the MARL agents' behaviors and update the MMDP with new samples. If the updated MMDP still does not satisfy the PCTL * formula, the proposed approach generates correct and complete explanations that pinpoint the causes of all failures in the user query.\nComputational experiments on four benchmark MARL domains demonstrate the scalability of our approach (up to 9 agents in one domain). It only took seconds to check the feasibility of a user query and generate explanations when needed. Additionally, we conducted a user study to evaluate the quality of generated explanations, where we adapted [Sreedharan et al., 2022] to generate baseline explanations. The study results show that, compared with the baseline, explanations generated using our approach significantly improve user performance (measured by the number of correctly answered questions) and yield higher average user ratings on explanation goodness metrics (e.g., understanding, satisfaction) [Hoffman et al., 2018]." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Explainable Reinforcement Learning", "publication_ref": [ "b53", "b19", "b44", "b51", "b31", "b48", "b17", "b50", "b0", "b40", "b35", "b17", "b1", "b25", "b13", "b17", "b7", "b21", "b7", "b35", "b33", "b40", "b48", "b15" ], "table_ref": [], "text": "A growing body of research in explainable RL has emerged in recent years, as surveyed in [Wells and Bednarz, 2021;Heuillet et al., 2021;Puiutta and Veith, 2020]. Existing works can be categorized according to different axes (e.g., timing, scope, form, setting). We position our proposed approach based on these categorizations as follows.\nFirst, there are intrinsic and post-hoc methods depending on the timing when the explanation is generated. The former (e.g., [Topin et al., 2021;Landajuela et al., 2021]) builds intrinsically interpretable policies (e.g., represented as decision trees) at the time of training, while the latter (e.g., [Sreedharan et al., 2022;Hayes and Shah, 2017]) generates post-hoc explanations after a policy has been trained. Our proposed approach belongs to the latter.\nSecond, existing works can be distinguished by the scope of explanations. Some methods provide explanations about policy-level behaviors (e.g., [Topin and Veloso, 2019;Amir and Amir, 2018]), while others explain specific, local decisions (e.g., [Olson et al., 2021;Madumal et al., 2020]). Our work focuses on explaining discrepancies between actual and anticipated policy-level behaviors.\nAdditionally, current approaches generate explanations in diverse forms, including natural language [Hayes and Shah, 2017], saliency maps [Atrey et al., 2019], reward decomposition [Juozapaitis et al., 2019], finite-state machines [Danesh et al., 2021], and others. Our proposed approach generates language explanations following [Hayes and Shah, 2017] and [Boggess et al., 2022], both of which use the Quine-McCluskey algorithm to compute a minimized Boolean formula and then translate the formula into an explanation using language templates.\nFinally, the majority of existing works on explainable RL focus on the single-agent setting. There is very little prior work considering multi-agent environments. [Heuillet et al., 2022] estimates the contribution of each agent for a group plan, but only as a general explanation of a model and not for a specific instance given by a user. [Boggess et al., 2022] develops methods to generate policy summarization and querybased language explanations for MARL. However, as discussed in Section 1, existing methods cannot handle temporal queries considered in this work. [Miller, 2019] identifies being contrastive (\"Why A but not B?\") as one of the key desired properties of an explanation. The research thread on contrastive explanations for RL has been drawing increasing attention since then. For example, [Madumal et al., 2020] generates contrastive explanations for \"why action\" and \"why not action\" queries via counterfactual analysis of a structural causal model; [Lin et al., 2021] develops a deep RL architecture with an embedded selfprediction model to explain why a learned agent prefers one action over another; and [Olson et al., 2021] computes counterfactual state explanations (i.e., minimal changes needed for an alternative action). These works all focus on generating contrastive explanations for the RL agent's local decisions in a state. By contrast, several recent works [Sreedharan et al., 2022;Finkelstein et al., 2022] generate policy-level contrastive explanations in the single-agent setting. However, as discussed in Section 1, these methods are not suitable for MARL. Our proposed approach advances the state of the art by developing an approach for generating contrastive explanations about MARL agents' policy-level behaviors." }, { "figure_ref": [], "heading": "Contrastive Explanations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Program Formulation", "publication_ref": [ "b7" ], "table_ref": [], "text": "We consider a problem setting where a MARL policy has been trained over N agents, denoted by π : X → ∆(A), which is a function mapping a set of joint states X = {x = (x 1 , . . . , x N )} to a distribution over a set of joint actions A = {a = (a 1 , . . . , a N )}. Execution of policy π yields a sequence of joint states and joint actions x 0 a0 -→ x 1 a1 -→ • • • where a t ∼ π(•|x t ) at each step t. Suppose that the goal of the agents is to jointly complete a set G of tasks (sub-goals). Let R i : X × A × X → R denote the reward function that determines the immediate reward received by agent i. A positive reward R i (x t , a t , x t+1 ) > 0 is only received when a task g ∈ G is completed by agent i at step t. We assume that each agent can complete at most one task at a step and, if multiple agents cooperate to complete a task, each of them would receive a positive reward at the same step.\nTo start with, the user is presented with a high-level plan that summarizes one possible execution of the given MARL policy π. For example, consider a MARL domain where three robotic agents are trained to complete search and rescue tasks shown in Figure 1(a). We can compute a high-level plan by applying the policy summarization method proposed in [Boggess et al., 2022]. Figure 1(b) illustrates an example plan, where columns indicate the order of tasks completed by agents and each row corresponds to an agent's task sequence. Agent cooperation is represented by multiple agents sharing the same task in the same column. In this example, robots II and III first cooperate to fight the fire, followed by robots I and II jointly removing the obstacle, and finally robots I and III rescue the victim together. The user may not desire the presented plan and raise an alternative query. The user query does not have to be a complete plan involving all agents and tasks. Instead, the user can query about a partial plan such as \"Why don't robots I and II remove the obstacle before robot II fights the fire alone?\" We define a temporal user query as a list of atomic propositions specifying an order of tasks completed by some agents, denoted by ρ = τ 1 , τ 2 , • • • , where each τ specifies a task g ∈ G and designated agents. Tasks not specified in the query can be completed in any order (e.g., before τ 1 , between τ 1 and τ 2 , or after τ 2 ). The aforementioned example query is denoted by obstacle robotI robotII, fire robotII .\nA temporal user query ρ is feasible under a MARL policy π if there exists at least one execution of π that conforms with the queried plan ρ. When ρ is infeasible under π, explanations are generated to reconcile discrepancies between the actual and anticipated multi-agent behaviors. We say that an explanation is correct if it pinpoints the causes of one or more failures in ρ (e.g., unsatisfied task preconditions or agent cooperation requirements). A correct explanation is complete if it identifies the reasons behind all failures of a user query ρ.\nThis work aims to solve the following problem. Problem: Given a temporal user query ρ and a trained MARL policy π, check if ρ is feasible under policy π. If ρ is infeasible, generate correct and complete explanations to reconcile discrepancies between the actual and anticipated multi-agent behaviors." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [], "table_ref": [], "text": "To tackle this problem, we present an approach as illustrated in Algorithm 1. We describe the construction of a policy abstraction (line 1) in Section 4.1, the encoding and checking of Algorithm 1 Checking the feasibility of a user query Input: a temporal user query ρ, a trained MARL policy π Output: YES, or explanations E 1: construct a policy abstraction MMDP M given π 2: encode the temporal query ρ as a PCTL * formula ϕ 3: if M satisfies ϕ then 4: return YES 5: else 6:\nM ← update M via guided rollout (Algorithm 2) 7:\nif M satisfies ϕ then 8:\nreturn YES 9: else 10:\ngenerate explanations E (Algorithm 3) 11:\nreturn E the user query (lines 2-5) in Section 4.2, guided rollout (lines 6-9) in Section 4.3, and explanation generation (lines 10-11) in Section 4.4. Additionally, we analyze the correctness and complexity of the approach in Section 4.5." }, { "figure_ref": [ "fig_1" ], "heading": "Policy Abstraction MMDP", "publication_ref": [ "b7" ], "table_ref": [], "text": "Given a trained MARL policy π, we construct a multi-agent Markov decision process (MMDP) abstraction following the policy abstraction method described in [Boggess et al., 2022].\nWe denote an MMDP as a tuple M = (S, s 0 , A, T , L) with a set of joint abstract states S, an initial state s 0 ∈ S, a set of joint actions A, a transition function T : S ×A → ∆(S), and a labeling function L : S → 2 AP that assigns a set of atomic propositions AP to states. A path through M is a sequence\ns 0 a0 -→ s 1 a1 -→ • •\n• starting from the initial state s 0 . The state space S = {s = (s 1 , . . . , s N )} is defined over a set of Boolean predicates indicating whether a task g ∈ G has been completed by agent i. The initial state s 0 represents that none of the tasks has been completed. In the example MMDP shown in Figure 2, the initial state is s 0 = (000, 000, 000). State s 1 = (000, 100, 100) represents that the fire task has been completed by robotic agents II and III, which is labeled with L(s 1 ) = {fire robotII robotIII}. The next state s 2 = (010, 110, 100) is labeled with L(s 2 ) = {obstacle robotI robotII}, which only contains the newly completed obstacle task.\nThe MMDP transition function T is built by finding corresponding abstract transitions (s, a, s ) of each sample (x, a, x ) observed during the MARL policy evaluation, and transition probabilities are computed via frequency counting. Given a joint state x = (x 1 , . . . , x N ), we determine a corresponding joint abstract state s = (s 1 , . . . , s N ) by checking if agent i receives a reward R i (x, a, x ) > 0 for completing a task g ∈ G. For each MMDP state s ∈ S, we keep track of a set of corresponding sampled joint states, denoted by X(s) = {x}, and count the total number of observed MARL samples, denoted by C(s)." }, { "figure_ref": [ "fig_1" ], "heading": "Query Checking with Temporal Logic", "publication_ref": [ "b3", "b29" ], "table_ref": [], "text": "We encode a temporal user query ρ = τ 1 , τ 2 , • • • as a PCTL * logic [Aziz et al., 1995] formula ϕ with a \"sequencing\" specification template as follows.\nϕ = P >0 [♦(τ 1 ∧ ♦(τ 2 ∧ ♦ • • • ))]\nAlgorithm 2 Guided rollout Input: a trained MARL policy π, a policy abstraction MMDP M Output: an updated MMDP M 1: unfold M as a search tree and assign a U value to each node 2: N ← tree nodes ordered by U values and sample counts 3: for (k = 0; k < RolloutNum; k++) do 4: s ← N .pop(0) 5:\nx ← pick a corresponding joint state from X(s) 6:\nδ ← a rollout execution of π from x with DepthLimit 7:\nupdate the MMDP with samples in δ 8: return the updated MMDP M where P >0 means that the specification should be satisfied with non-zero probability, and ♦ denotes the logical operator \"eventually\". The PCTL * formula ϕ is satisfied in an MMDP M if there exists a path through M such that τ 1 eventually becomes true at some point along the path, and τ 2 eventually holds at some point afterward. For example, the MMDP shown in Figure 2 satisfies a PCTL * formula P >0 [♦(fire robotII robotIII ∧ ♦victim robotI robotIII)].\nTo check if M satisfies ϕ, we apply probabilistic model checking [Kwiatkowska et al., 2017] which offers efficient techniques for the exhaustive exploration of M to determine if ϕ holds in any path. If M satisfies ϕ, then Algorithm 1 returns YES, indicating that the user query is feasible under the given MARL policy. Otherwise, there does not exist any path through M that conforms with the user query. Since the MMDP M is constructed based on samples observed during the MARL policy evaluation, it does not necessarily capture all possible agent behaviors under the given policy π. Thus, M not satisfying ϕ is not a sufficient condition for claiming that the user query is infeasible under the given MARL policy.\nTo address this issue, we develop a guided rollout procedure to update the MMDP M via drawing more samples from the MARL policy π." }, { "figure_ref": [ "fig_1" ], "heading": "Guided Rollout", "publication_ref": [], "table_ref": [], "text": "Algorithm 2 illustrates the guided rollout procedure, which starts by unfolding paths of the MMDP M as a search tree. The root node of the tree is the initial state s 0 of M. As the search tree unfolds, we assign a U value to each node representing the degree to which the path from the root node to the current node conforms with the user query. Consider an example user query τ 1 = fire robotII robotIII, τ 2 = obstacle robotII , unfolding the MMDP in Figure 2 yields U (s 0 ) = 0, U (s 1 ) = 1 for conforming with τ 1 , and U (s 2 ) = -∞ for violating τ 2 . The search tree stops expanding a node with U = -∞ since the user query is already violated along the path.\nLet N be a queue of tree nodes ordered by decreasing U values and, for nodes with the same U value, increasing counts of MARL samples C(s). This ordering prioritizes the exploration of states with a higher degree of user query conformance (i.e., U values) and less sampling. Given a joint abstract state s ∈ N , we (randomly) pick a corresponding joint state x ∈ X(s) and generate a rollout exe- find a failure τj where j = U max + 1 5:\ncution δ = x a -→ x a -→ • • • of\nV ← target MMDP states that complete the task in τj 6:\nV ← non-target MMDP states 7:\nif V = ∅ then 8: φ ← Quine-McCluskey(1=binary(V), 0=binary( V)) 9:\n← select a minterm in φ that is closest to ρ 10:\nE ← insert language explanations 11:\nupdate ρ to fix the failure τj 12: return E eter DepthLimit. We update the MMDP with samples observed in δ. Then, we consider the next node in N and repeat the above process (lines 4-7 of Algorithm 2). When the number of rollout executions hits a predefined parameter RolloutNum, Algorithm 2 terminates with an updated MMDP, denoted by M .\nWe check if M satisfies a PCTL * formula ϕ encoding the user query ρ (line 7 of Algorithm 1) as described in Section 4.2. If M satisfies ϕ, then the user query ρ is feasible under the given MARL policy π. When M does not satisfy ϕ, the user query is infeasible in the MMDP M . Given sufficiently large RolloutNum and DepthLimit, the MMDP M provides a good approximation of MARL agents' behaviors under the given policy π. Thus, we can claim that the user query ρ is infeasible under π with high probability. In this case, we generate explanations to reconcile discrepancies between the actual and anticipated multi-agent behaviors." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Explanation Generation", "publication_ref": [], "table_ref": [], "text": "Algorithm 3 shows the explanation generation procedure. Given the updated MMDP M resulting from Algorithm 2, we unfold M as a search tree and assign a U value to each tree node following Section 4.3. Let U max denote the maximum U value in the tree. Then, τ j with j = U max + 1 is a failed task making the query ρ infeasible. For example, consider a user query τ 1 = obstacle robotI robotII, τ 2 = victim robotI, τ 3 = fire robotII robotIII , which yields U max = 0 indicating that τ 1 fails. To pinpoint the cause of this failure, we find a set of target MMDP states V where the failed task is completed by some agents (not necessarily by the queried agents). All other possible states (including those not sampled) are placed in a non-target set V.\nWhen V is non-empty, we obtain a minimized Boolean formula φ by applying the Quine-McCluskey algorithm [Quine, 1952], which represents the minimal description of the states in the target set V compared to those in the non-target set V. We select a minterm in φ that is closest to ρ (e.g., involving queried agents) and convert into an explanation using language templates. For example, the MMDP state s 2 in Figure 2 is a target state for τ 1 based on its state label, which indicates that the obstacle task is completed by robots I and II in this state. Applying Quine-McCluskey yields a single-minterm formula φ = fire robotII ∧ fire robotIII ∧ obstacle robotI ∧ obstacle robotII. Recall our assumption in Section 3 that each agent can complete at most one task at a step. Thus, the fire task must have been completed by robots II and III in some previous state. We obtain an explanation: \"The robots cannot remove the obstacle because fighting the fire must be completed before removing the obstacle.\"\nTo generate correct and complete explanations for all possible failures in a user query, we update ρ based on the minterm to fix the failure τ j . Since is the closest minterm to ρ, the applied changes are minimal. We check whether the updated ρ is feasible in M via probabilistic model checking as described in Section 4.2. If the model checker yields YES, then Algorithm 3 terminates because all failures of the (original) user query have been explained and fixed. Otherwise, the algorithm repeats lines 3-11 for the updated ρ. Following the previous example, we update the query as τ 1 = fire robotII robotIII, τ 2 = obstacle robotI robotII, τ 3 = victim robotI , which results in U max = 2, indicating that the updated query still has a failure τ 3 = victim robotI. The MMDP state s 3 in Figure 2 is a target state where the victim task is completed. Applying Quine-McCluskey yields φ = victim robotI ∧ victim robotIII, which only contains one minterm and is translated into an explanation: \"The robots cannot rescue the victim because Robot I needs Robot III to help rescue the victim.\" We further update the query as τ 1 = fire robotII robotIII, τ 2 = obstacle robotI robotII, τ 3 = victim robotI robotIII , which is feasible because the MMDP path s 0 → s 1 → s 2 → s 3 in Figure 2 conforms with this query. The algorithm terminates and returns the generated explanations of all failures.\nNote that in the special case where the target states set V is empty, we skip the Quine-McCluskey and generate an explanation to indicate that the queried task has not been completed in any observed sample. Then, we update the user query by removing the failed task and continue with Algorithm 3." }, { "figure_ref": [], "heading": "Correctness and Complexity", "publication_ref": [ "b5" ], "table_ref": [ "tab_0" ], "text": "Correctness. The correctness of our proposed approach, with respect to the problem formulated in Section 3, is stated below and the proof is given in the appendix. Proposition 1. Given a temporal user query ρ and a trained MARL policy π, if Algorithm 1 returns YES, then the query ρ must be feasible under the policy π; otherwise, Algorithm 1 generates correct and complete explanations E. Complexity. We analyze the complexity of the following key steps in the proposed approach.\n• The time complexity of checking an MMDP against a PCTL * formula ϕ defined in Section 4.2 via probabilistic model checking is double exponential in |ϕ| (i.e., equal to the length of the user query |ρ|) and polynomial in the size of the MMDP [Baier and Katoen, 2008]. The MMDP state space size |S| is bounded by O(2 |G| N ), depending on the number of agents N and tasks |G|. However, only a small set of reachable states is usually induced in practice (as shown in Table 1), given a welltrained MARL policy. • The time complexity of guided rollout (Algorithm 2) is given by O RolloutNum • DepthLimit). As discussed above, the larger the parameter values of RolloutNum and DepthLimit, the better approximation of MARL policy behaviors captured by the updated MMDP M .\n• The time complexity of explanation generation (Algorithm 3) is given by O\nλ • (3 N •|G| / N • |G|)\n, where λ is the number of failures in the user query, and Quine-McClusky [Chandra and Markowsky, 1978].\nO 3 N •|G| / N • |G| is the time complexity of\nEven though the complexity is high, in practice it is possible to check query feasibility and generate explanations in reasonable times as shown in the next section." }, { "figure_ref": [], "heading": "Computational Experiments", "publication_ref": [ "b7", "b42", "b42", "b37", "b11", "b27" ], "table_ref": [ "tab_0" ], "text": "To demonstrate the scalability of our approach, we developed a prototype implementation and applied it to four benchmark MARL domains1 .\n(1) Search and Rescue (SR), where multiple robotic agents cooperate to complete tasks such as fighting fires and rescuing victims [Boggess et al., 2022].\n(2) Level-Based Foraging (LBF), where agents play a mixed cooperative-competitive game to collect food scattered in a gridworld [Papoudakis et al., 2021].\n(3) Multi-Robot Warehouse (RWARE), where robots collaboratively move and deliver requested goods [Papoudakis et al., 2021].\n(4) PressurePlate (PLATE), where agents are required to cooperate during the traversal of a gridworld, with some agents staying on pressure plates to open the doorway for others to proceed [McInroe and Christianos, 2022].\nOur prototype implementation used the Shared Experience Actor-Critic [Christianos et al., 2020] for MARL policy training and evaluation. All models were trained and evaluated until converging to the expected reward, or up to 10,000 steps, whichever occurred first. The PRISM probabilistic model checker [Kwiatkowska et al., 2011] was applied for checking the feasibility of user queries. We set the guided rollout parameters as RolloutNum = 10 and DepthLimit = 50. The experiments were run on a machine with 2.1 GHz Intel CPU, 132 GB of memory, and CentOS 7 operating system.\nTable 1 shows experimental results. For each case study, we report the number of agents N , the number of tasks |G|, and the length of user queries |ρ|. Additionally, we report the size of policy abstraction MMDPs M in terms of the number of (reachable) states |S| and the number of transitions |T |. In general, the MMDP size increases with a growing number of agents and tasks. However, an unequal distribution of agent actions under the MARL policy π can lead to a smaller MMDP M (e.g., LBF-5) as agents take the same trajectories more often leading to less exploration.\nWe consider two temporal queries (i.e., a feasible query and an infeasible query with the same length |ρ|) in each case study and report the runtime of Algorithm 1. For infeasible queries, we also report the number of failures λ, which were controlled to grow with the environment size as the longer the query length |ρ|, the larger number of task failures it may contain. The size of generated explanations is equal to the number of failures (i.e., one for each task failure in the user query).\nExperimental results show that all queries were solved efficiently within seconds. Checking an infeasible query is generally slower than checking a feasible query in the same case study, due to the extra time needed for guided rollout and generating explanations. In summary, computational experiments demonstrate that our approach can be successfully applied to various benchmark MARL domains with a large number of agents (e.g., up to 9 agents in the PLATE domain), for checking the feasibility of temporal user queries and generating explanations when needed." }, { "figure_ref": [], "heading": "User Study", "publication_ref": [], "table_ref": [], "text": "We evaluate the quality of generated reconciliation explanations via a user study.2 Section 6.1 describes the study design and Section 6.2 analyzes the results." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Study Design", "publication_ref": [ "b48", "b23" ], "table_ref": [], "text": "User interface. The study was conducted via the Qualtrics survey platform. Instead of allowing participants to raise queries in real-time, we generated explanations for a selected set of temporal queries a priori, which enables us to present the same set of explanations to different participants. Figure 3 shows an example of the user interface. Participants were shown the agents' original plan (Plan A) and an alternate plan representing a temporal query (Plan B). An explanation was presented to explain why Plan B was infeasible. Participants were then asked to use the provided explanation to decide if a new query (Plan C) was feasible. Participants were incentivized with bonus payments to answer the question correctly. Participants. We recruited 88 participants (i.e., fluent English speakers over the age of 18) through university mailing lists (52% male, 45.5% female, 2.3% non-binary). They had an average age of 23.9 (SD = 6.1). To ensure data quality, a demonstration was given, attention checks were injected, and the time to complete the survey was tracked. Baseline. We adapted the explanation generation method in [Sreedharan et al., 2022], which was initially proposed for the single-agent setting, as a baseline for comparison. We extended the method for joint states and actions and limited its sampling to the given policy instead of the larger environment. Furthermore, we use the same user interface as shown in Figure 3 to avoid any confounding variables regarding presentation in the study. The baseline method takes the input of a user query expressed as a sequence of agent states and actions, for which we converted a high-level plan (e.g., Plan B in Figure 3) into a low-level execution of joint states and joint actions. Explanations generated using the baseline method could fail to capture agent cooperation requirements in multiagent environments. Moreover, the baseline method only provides explanations for the first point of failure rather than all failures in a user query. For example, the baseline explanations for Plan B in Figure 3 changes the second sentence in the explanation to \"The first failed task would be: remove obstacle.\" and only contains E1. Participants would not be able to answer the bonus question correctly without knowing E2. Independent variables. We employed a within-subject study design where participants were asked to complete two trials for evaluating explanations generated using the baseline method and our proposed approach, respectively. There were 4 sets of temporal queries (i.e., two single-failure queries and two with multiple failures) and bonus questions in each trial. The queried plans and questions used in the two trials were different but had a similar difficulty level. Participants were presented with the same set of plans and questions and were randomly assigned to two groups (i.e., evaluating the baseline explanations before or after the proposed explanations) to counterbalance the ordering confound effect. Dependent measures. We counted the number of questions correctly answered by participants as a performance measure. Additionally, at the end of each trial, participants were instructed to rate on a 5-point Likert scale (1 -strongly disagree, 5 -strongly agree) the following statements regarding explanations good metrics adapted from [Hoffman et al., 2018].\n• The explanations help me understand how the robots complete the mission.\n• The explanations are satisfying. • The explanations are sufficiently detailed.\n• The explanations are sufficiently complete, that is, they provide me with all the needed information to answer the questions.\n• The explanations are actionable, that is, they help me know how to answer the questions.\n• The explanations let me know how reliable the robots are for completing the mission.\n• The explanations let me know how trustworthy the robots are for completing the mission.\nHypotheses. We tested two hypotheses stated below.\n• H1: Explanations generated by our proposed approach enable participants to answer more questions correctly than the baseline explanations.\n• H2: Explanations generated by our proposed approach lead to higher ratings on explanation goodness metrics than the baseline explanations." }, { "figure_ref": [ "fig_4" ], "heading": "Results Analysis", "publication_ref": [], "table_ref": [], "text": "Question-answering performance. Participants were able to answer more questions correctly based on explanations generated by our proposed approach (M=3.1 out of 4, SD=1.0) than those generated with the baseline method (M=0.6 out of 4, SD=0.8). A paired t-test (α = 0.05) shows a statistically significant difference (t(87)=-17.0, p ≤0.01, d=1.8). Thus, the data supports H1.\nRecall that the baseline method only provides explanations for the first point of failure in a user query and could not always correctly identify agent cooperation requirements. By contrast, our approach generates correct and complete explanations for all failures in a user query, which help participants to better understand agent behaviors under a given policy, and thus, leads to better question-answering performance. Explanation goodness ratings. Figure 4 shows that participants gave higher subjective ratings to the proposed explanations than the baseline explanations on average, with respect to all explanation goodness metrics.\nWe used the Wilcoxon signed-rank test (α = 0.05) to evaluate hypothesis H2. Statistically significant differences were found for the following four metrics: understanding (W =315.0, Z=-1.6, p ≤0.05, r=-0.1), satisfaction (W =236.0, Z=-2.2, p ≤0.01, r=-0.2), detail (W =255.0, Z=-1.6, p ≤0.01, r=-0.1), and actionability (W =105.5, Z=-2.0, p ≤0.02, r=-0.1). But no significant difference was found on other metrics: completeness (W =389.5, Z=-1.2, p ≤ 0.1, r=-0.1), reliability (W =255.5, Z=-0.5, p ≤0.4, r=-0.04), and trust (W =181.5, Z=-1.0, p ≤0.07, r=-0.1). Thus, the data partially supports H2.\nParticipants' superior question-answering performance is consistent with their statistically significant higher subjective ratings on understanding, detail, and actionability (i.e., the proposed explanations provide detailed and actionable information for answering questions). Furthermore, the baseline explanations were rated significantly less satisfying, because they may miss essential information (e.g., agent cooperation) for answering questions. Participants may misjudge the explanations' completeness as they were unaware of the total number of failures in a queried plan. Finally, the generated explanations are mostly about missing task preconditions, which are less useful for participants to judge how reliable and trustworthy the robots are for completing the mission. Summary. Results of the user study show that, compared with the baseline, explanations generated by our proposed approach significantly improve participants' performance in correctly answering questions, and lead to higher average ratings on explanation goodness metrics such as understanding and satisfaction." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work presents an approach for generating policy-level contrastive explanations for MARL to answer a temporal user query, which specifies a sequence of tasks to be completed by agents with possible cooperation. The proposed approach checks if the user query is feasible under the given MARL policy and, if not, generates correct and complete explanations to pinpoint reasons that make a user query infeasible. A prototype implementation of the proposed approach was successfully applied to four benchmark MARL domains with a large number of agents (e.g., up to 9 agents in one domain). In all the experiments, it only took seconds to check the feasibility of a user query and generate explanations when needed. Additionally, a user study was conducted to evaluate the quality of generated explanations. The study results show that explanations generated using the proposed approach can help improve user performance, understanding, and satisfaction.\nThere are several directions to explore for possible future work. First, we will evaluate the proposed approach with different MARL methods. While the prototype implementation only uses one MARL algorithm, the proposed approach should be compatible with any MARL method because it only relies on sampling possible MARL executions. Second, we will leverage the expressiveness of PCTL * logic and investigate a richer set of user queries. For example, a \"coverage\" query which specifies a set of tasks to be covered in any order, and a \"sequencing with avoidance\" query which asks for the completion of a sequence of tasks while avoiding some other tasks to be completed by specific agents. Lastly, we would like to apply the proposed approach to a wide range of MARL environments in real-world scenarios." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "Proposition 1. Given a temporal user query ρ and a trained MARL policy π, if Algorithm 1 returns YES, then the query ρ must be feasible under the policy π; otherwise, Algorithm 1 generates correct and complete explanations E.\nProof. We prove the following two cases. Case 1: When Algorithm 1 returns YES, the policy abstraction MMDP M or the updated MMDP M satisfies the PCTL * formula ϕ encoding the user query ρ, indicating that there must exist a path through M or M that conforms with ρ. By construction, every abstract MMDP transition (s, a, s ) in M or M with non-zero probability maps to at least one sampled decision (x, a, x ) of the given MARL policy π. Thus, there must exist an execution of policy π that conforms with the user query ρ. By definition, the user query ρ is feasible under the given MARL policy π. Case 2: Algorithm 1 returns explanations E generated via Algorithm 3. As described in Section 4.4, Algorithm 3 terminates when all failures in the user query ρ have been explained and fixed. Given a finite-length temporal query ρ, there is a finite number of failures. For any failure in the query, if the target states set V is non-empty, then the failure must be fixable using a Quine-McCluskey minterm that represents a target state where the failed task is completed. If V is empty, then the failure is removed from the query. Thus, the termination of Algorithm 3 is guaranteed. By definition, the generated explanations are correct (i.e., identifying the causes of one or more failures in ρ) and complete (i.e., finding the reasons behind all failures in ρ)." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by U.S. National Science Foundation under grant CCF-1942836, U.S. Office of Naval Research under grant N00014-18-1-2829, U.S. Air Force Office of Scientific Research under grant FA9550-21-1-0164, Israel Science Foundation under grant 1958/20, and the EU Project TAILOR under grant 952215. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the grant sponsors." } ]
As multi-agent reinforcement learning (MARL) systems are increasingly deployed throughout society, it is imperative yet challenging for users to understand the emergent behaviors of MARL agents in complex environments. This work presents an approach for generating policy-level contrastive explanations for MARL to answer a temporal user query, which specifies a sequence of tasks completed by agents with possible cooperation. The proposed approach encodes the temporal query as a PCTL * logic formula and checks if the query is feasible under a given MARL policy via probabilistic model checking. Such explanations can help reconcile discrepancies between the actual and anticipated multi-agent behaviors. The proposed approach also generates correct and complete explanations to pinpoint reasons that make a user query infeasible. We have successfully applied the proposed approach to four benchmark MARL domains (up to 9 agents in one domain). Moreover, the results of a user study show that the generated explanations significantly improve user performance and satisfaction.
Explainable Multi-Agent Reinforcement Learning for Temporal Queries
[ { "figure_caption": "Figure 1 :1Figure 1: Example MARL domain and a high-level plan.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Fragment of an example MMDP.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "the policy π starting from x. The rollout depth |δ| is bounded by a predefined param-Algorithm 3 Generating reconciliation explanations Input: a user query ρ = τ1, τ2, • • • , the updated MMDP M Output: explanations E 1: E ← {} 2: while ρ is infeasible in M do 3: U max ← the maximum U value in the search tree of M 4:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Example of the user study interface displaying explanations generated by the proposed approach.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Mean and SD of participant ratings on explanation goodness metrics (\"*\" indicates statistically significant difference with the significant level set as α = 0.05).", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Experimental results on four benchmark MARL domains.", "figure_data": "Case StudyMMDP MFeasibleInfeasibleDomain N |G| |ρ||S||T |Time (s)λ Time (s)333281270.812.2SR4441636741.525.3555445 1,50424.4389.8333673440.912.9LBF4442117812.127.65551524544.5320.5243982680.8115.5RWARE 365442 1,2603.7242.24881,089 2,75121.7385.2533871810.813.0PLATE744851750.9225.79551322661.43126.8", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Kayla Boggess; Sarit Kraus; Lu Feng
[ { "authors": "Amir Amir; Dan Amir; Ofra Amir", "journal": "", "ref_id": "b0", "title": "Highlights: Summarizing agent behavior to people", "year": "2018" }, { "authors": " Atrey", "journal": "", "ref_id": "b1", "title": "", "year": "2019" }, { "authors": "Akanksha Atrey; Kaleigh Clary; David Jensen", "journal": "", "ref_id": "b2", "title": "Exploratory not explanatory: Counterfactual analysis of saliency maps for deep reinforcement learning", "year": "2019" }, { "authors": " Aziz", "journal": "", "ref_id": "b3", "title": "", "year": "1995" }, { "authors": "Adnan Aziz; Vigyan Singhal; Felice Balarin; Robert K Brayton; Alberto L Sangiovanni-Vincentelli", "journal": "Springer", "ref_id": "b4", "title": "It usually works: The temporal logic of stochastic systems", "year": "1995" }, { "authors": "Katoen Baier", "journal": "", "ref_id": "b5", "title": "", "year": "2008" }, { "authors": "Christel Baier; Joost-Pieter Katoen", "journal": "MIT press", "ref_id": "b6", "title": "Principles of model checking", "year": "2008" }, { "authors": " Boggess", "journal": "", "ref_id": "b7", "title": "", "year": "2022" }, { "authors": "S Boggess; L Kraus; Feng", "journal": "", "ref_id": "b8", "title": "Toward policy explanations for multi-agent reinforcement learning", "year": "2022" }, { "authors": "Markowsky Chandra", "journal": "", "ref_id": "b9", "title": "", "year": "1978" }, { "authors": "K Ashok; George Chandra; Markowsky", "journal": "Discrete Mathematics", "ref_id": "b10", "title": "On the number of prime implicants", "year": "1978" }, { "authors": "Christianos ", "journal": "", "ref_id": "b11", "title": "", "year": "2020" }, { "authors": "Filippos Christianos; Lukas Schäfer; Stefano V Albrecht", "journal": "", "ref_id": "b12", "title": "Shared experience actor-critic for multi-agent reinforcement learning", "year": "" }, { "authors": " Danesh", "journal": "", "ref_id": "b13", "title": "", "year": "2021" }, { "authors": "Mohamad H Danesh; Anurag Koul; Alan Fern; Saeed Khorram", "journal": "PMLR", "ref_id": "b14", "title": "Re-understanding finitestate representations of recurrent policy networks", "year": "2021" }, { "authors": " Finkelstein", "journal": "", "ref_id": "b15", "title": "", "year": "2022" }, { "authors": "Mira Finkelstein; Lucy Liu; Yoav Kolumbus; David C Parkes; Jeffrey Rosenschein; Sarah Keren", "journal": "", "ref_id": "b16", "title": "Explainable reinforcement learning via model transforms", "year": "2022" }, { "authors": "Shah Hayes", "journal": "", "ref_id": "b17", "title": "", "year": "2017" }, { "authors": "Bradley Hayes; Julie A Shah", "journal": "IEEE", "ref_id": "b18", "title": "Improving robot controller transparency through autonomous policy explanation", "year": "2017" }, { "authors": " Heuillet", "journal": "", "ref_id": "b19", "title": "", "year": "2021" }, { "authors": "Alexandre Heuillet; Fabien Couthouis; Natalia Díaz-Rodríguez", "journal": "Knowledge-Based Systems", "ref_id": "b20", "title": "Explainability in deep reinforcement learning", "year": "2021" }, { "authors": " Heuillet", "journal": "", "ref_id": "b21", "title": "", "year": "2022" }, { "authors": "Alexandre Heuillet; Fabien Couthouis; Natalia Díaz-Rodríguez", "journal": "IEEE Computational Intelligence Magazine", "ref_id": "b22", "title": "Collective explainable ai: Explaining cooperative strategies and agent contribution in multiagent reinforcement learning with shapley values", "year": "2022" }, { "authors": " Hoffman", "journal": "", "ref_id": "b23", "title": "", "year": "2018" }, { "authors": "Shane T Robert R Hoffman; Gary Mueller; Jordan Klein; Litman", "journal": "", "ref_id": "b24", "title": "Metrics for explainable ai: Challenges and prospects", "year": "2018" }, { "authors": " Juozapaitis", "journal": "", "ref_id": "b25", "title": "", "year": "2019" }, { "authors": "Zoe Juozapaitis; Anurag Koul; Alan Fern; Martin Erwig; Finale Doshi-Velez", "journal": "", "ref_id": "b26", "title": "Explainable reinforcement learning via reward decomposition", "year": "2019" }, { "authors": " Kwiatkowska", "journal": "", "ref_id": "b27", "title": "", "year": "2011" }, { "authors": "M Kwiatkowska; G Norman; D Parker", "journal": "Springer", "ref_id": "b28", "title": "PRISM 4.0: Verification of probabilistic real-time systems", "year": "2011" }, { "authors": " Kwiatkowska", "journal": "", "ref_id": "b29", "title": "", "year": "2017" }, { "authors": "M Kwiatkowska; G Norman; D Parker", "journal": "Springer", "ref_id": "b30", "title": "Probabilistic model checking: Advances and applications", "year": "2017" }, { "authors": " Landajuela", "journal": "", "ref_id": "b31", "title": "", "year": "2021" }, { "authors": "Mikel Landajuela; Sookyung Brenden K Petersen; Claudio P Kim; Ruben Santiago; Nathan Glatt; Jacob F Mundhenk; Daniel Pettit; Faissol", "journal": "PMLR", "ref_id": "b32", "title": "Discovering symbolic policies with deep reinforcement learning", "year": "2021" }, { "authors": "Lin ", "journal": "", "ref_id": "b33", "title": "", "year": "2021" }, { "authors": "Zhengxian Lin; Kin-Ho Lam; Alan Fern", "journal": "", "ref_id": "b34", "title": "Contrastive explanations for reinforcement learning via embedded self predictions", "year": "2021" }, { "authors": " Madumal", "journal": "", "ref_id": "b35", "title": "", "year": "2020" }, { "authors": "Prashan Madumal; Tim Miller; Liz Sonenberg; Frank Vetere", "journal": "", "ref_id": "b36", "title": "Explainable reinforcement learning through a causal lens", "year": "2020" }, { "authors": "Christianos Mcinroe", "journal": "", "ref_id": "b37", "title": "Trevor McInroe and Filippos Christianos. Repo for the multi-agent pressureplate environment", "year": "2022" }, { "authors": " Miller", "journal": "", "ref_id": "b38", "title": "", "year": "2019" }, { "authors": "Tim Miller", "journal": "Artificial intelligence", "ref_id": "b39", "title": "Explanation in artificial intelligence: Insights from the social sciences", "year": "2019" }, { "authors": " Olson", "journal": "", "ref_id": "b40", "title": "", "year": "2021" }, { "authors": "Roli Matthew L Olson; Lawrence Khanna; Fuxin Neal; Weng-Keen Li; Wong", "journal": "Artificial Intelligence", "ref_id": "b41", "title": "Counterfactual state explanations for reinforcement learning agents via generative deep learning", "year": "2021" }, { "authors": " Papoudakis", "journal": "", "ref_id": "b42", "title": "", "year": "2021" }, { "authors": "Georgios Papoudakis; Filippos Christianos; Lukas Schäfer; Stefano V Albrecht", "journal": "", "ref_id": "b43", "title": "Benchmarking multi-agent deep reinforcement learning algorithms in cooperative tasks", "year": "2021" }, { "authors": "Veith Puiutta", "journal": "", "ref_id": "b44", "title": "", "year": "2020" }, { "authors": "Erika Puiutta; Eric Veith", "journal": "Springer", "ref_id": "b45", "title": "Explainable reinforcement learning: A survey", "year": "2020" }, { "authors": " Quine", "journal": "", "ref_id": "b46", "title": "", "year": "1952" }, { "authors": " Willard V Quine", "journal": "The American mathematical monthly", "ref_id": "b47", "title": "The problem of simplifying truth functions", "year": "1952" }, { "authors": " Sreedharan", "journal": "", "ref_id": "b48", "title": "", "year": "2022" }, { "authors": "Sarath Sreedharan; Utkarsh Soni; Mudit Verma; Siddharth Srivastava; Subbarao Kambhampati", "journal": "", "ref_id": "b49", "title": "Bridging the gap: Providing post-hoc symbolic explanations for sequential decision-making problems with inscrutable representations", "year": "2022" }, { "authors": "Veloso Topin", "journal": "", "ref_id": "b50", "title": "Nicholay Topin and Manuela Veloso. Generation of policy-level explanations for reinforcement learning", "year": "2019" }, { "authors": " Topin", "journal": "", "ref_id": "b51", "title": "", "year": "2021" }, { "authors": "Nicholay Topin; Stephanie Milani; Fei Fang; Manuela Veloso", "journal": "", "ref_id": "b52", "title": "Iterative bounding mdps: Learning interpretable policies via non-interpretable methods", "year": "2021" }, { "authors": "Bednarz Wells", "journal": "Frontiers in artificial intelligence", "ref_id": "b53", "title": "Lindsay Wells and Tomasz Bednarz. Explainable ai and reinforcement learning-a systematic review of current approaches and trends", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 315, 366.96, 65.5, 13.39 ], "formula_id": "formula_0", "formula_text": "s 0 a0 -→ s 1 a1 -→ • •" }, { "formula_coordinates": [ 3, 368.98, 695.09, 132.55, 9.76 ], "formula_id": "formula_1", "formula_text": "ϕ = P >0 [♦(τ 1 ∧ ♦(τ 2 ∧ ♦ • • • ))]" }, { "formula_coordinates": [ 4, 54, 680.61, 133.18, 12.58 ], "formula_id": "formula_2", "formula_text": "cution δ = x a -→ x a -→ • • • of" }, { "formula_coordinates": [ 4, 318.78, 152.63, 237.05, 28.05 ], "formula_id": "formula_3", "formula_text": "if V = ∅ then 8: φ ← Quine-McCluskey(1=binary(V), 0=binary( V)) 9:" }, { "formula_coordinates": [ 5, 431.32, 104.93, 92.06, 10.31 ], "formula_id": "formula_4", "formula_text": "λ • (3 N •|G| / N • |G|)" }, { "formula_coordinates": [ 5, 334.93, 128.36, 192.38, 10.53 ], "formula_id": "formula_5", "formula_text": "O 3 N •|G| / N • |G| is the time complexity of" } ]
2023-10-24
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b10", "b36", "b8", "b31", "b33", "b1", "b35", "b12", "b13", "b0", "b34", "b34", "b27", "b42", "b12", "b34", "b38", "b37", "b32", "b4", "b22", "b34", "b21", "b20", "b18", "b5" ], "table_ref": [], "text": "Text simplification systems aim to lower the barrier of reading for a wider, more inclusive audience, for instance, children (De Belder and Moens, 2010), emergent bilinguals (Taylor et al., 2022), and individuals with language impairments (Carroll et al., 1998;Rello et al., 2013). While there has been abundant research in automatic text simplification (Siddharthan, 2014), recent data-driven efforts have focused on re-writing a sentence or passage into simpler language while preserving its meaning, often as a monolingual translation task using encoder-decoder models (Alva-Manchego et al., 2020;Sun et al., 2021;Devaraj et al., 2021) or editing models (Dong et al., 2019;Agrawal and Carpuat, 2022). * Yating and William contributed equally.\nSimplified. Those factories are gone now. New companies have come that need skilled workers with more education. New Haven youth want those jobs, but they do not have the education or the skills.\nMany do not have the money to get the training they need. That is where New Haven Promise comes in. It will make a difference by paying for college. New Haven Promise is no one-way street.\nImplicit QUD: Why don't people acquire the necessary skills?\nOriginal. Factories have closed and their low-skill manufacturing jobs are long gone. The new companies in town require workers with a college degree or advanced training. […] The New Haven Promise is part of a bigger plan to improve the city's economy. Srikanth and Li (2021). Both simplified and original snippets are shown; elaboration added to the simplified version is shaded in blue. \" [...]\" in the original text refers to content deleted in the simplified version. This work focuses on already identified elaborations in the simplified text, and introduces implicit questions under discussion (\"implicit QUD\", yellow box) to characterize and help generate the elaborations. This work instead focuses on elaborative simplification (Srikanth and Li, 2021), i.e., explaining or elaborating difficult concepts or content during the simplification process, as illustrated in Figure 1. Although elaborations would add to the amount of content a reader needs to process, psycholinguistic studies have established the benefit of elaborative modifications for L2 reading comprehension (Parker and Chaudron, 1987;Yano et al., 1994). However, deriving elaborative simplification is challenging: existing simplification models-because they are trained as end-to-end translation models-do not actively generate elaborations and, when they do, tend to hallucinate (Devaraj et al., 2021;Srikanth and Li, 2021). Thus to make progress, we argue that explicit analysis and supervision is necessary. There has been little\nwork understanding what people choose to elaborate, how they elaborate, and how the elaboration fits into the discourse context. Understanding these dimensions is crucial for developing better systems by giving us a framework for analyzing elaborations.\nWe propose a simple but powerful way of thinking about elaborations: as answers to implicit questions. Consider Figure 1: the editor inserted \"Many do not have the money to get the training they need\" as an explanation for the preceding sentence \"they do not have the education or the skills\". This elaboration did not exist in the original (unsimplified) document, and it can be thought of as answering the implicit question \"Why don't people acquire the necessary skills?\".\nThis approach has a long history in the Question Under Discussion (QUD) linguistics framework (Von Stutterheim and Klein, 1989;Van Kuppevelt, 1995;Roberts, 2012;Benz and Jasinskaja, 2017;Ko et al., 2023); the QUD framework views each sentence as the answer to an implicit or explicit question from prior context. Thus, our model for elaborative simplification is that, while simplifying text, editors implicitly ask questions, especially when difficult concepts are encountered. Elaborative simplifications are (explicit) answers to these (implicit) questions.\nWith this view, we formulate elaborative simplification as a two-step process: question generation and question answering. The question generation step models \"what is elaborated?\" by means of recovering the implicit QUDs, which will guide the question answering model for the generation of the actual elaboration.\nTo support this, we present ELABQUD, a novel corpus of implicit QUDs that are answered by the 1299 elaborations collected by Srikanth and Li (2021). In addition, ELABQUD also contains a finer-grained layer of annotation specifying which concepts the elaboration was about in the earlier context of the same document, which we call the targets of elaboration. We find authors elaborate both about entities and events, and that elaborated concepts tend to be composed of less frequent words. We also analyze the types of questions to determine how authors elaborate, and find that elaborations often involve causal reasoning and explanations of entities.\nUsing ELABQUD, we first train and evaluate question generation models attempting to automat-ically generate the QUDs. We train these models using two QUD corpora, then fine-tune on ELABQUD: one setting where the model is exposed to the elaboration (Ko et al., 2022), and one where the model is not (Ko et al., 2020) following the expectation-driven model of QUD (Kehler and Rohde, 2017). The latter setting mimics the realistic scenario where the answer (i.e., the actual elaboration that we aim to generate) is not known prior to asking the questions. We show that expectation-driven questions, although often plausible and valid, tend to deviate more often from the exact direction of the annotated QUDs.\nNext, we plug in the generated questions as prompts for a GPT-3 model (Brown et al., 2020) to derive elaborations in a zero-shot manner. We show that compared with no prompt or generic prompts, QUD-driven elaborations are of substantially higher quality and are typically more elaboration-like.\nWe release ELABQUD and code at https:// github.com/sheffwb/elabQUD (copyright issues discussed in Appendix C)." }, { "figure_ref": [ "fig_0" ], "heading": "Background and Related Work", "publication_ref": [ "b9", "b17", "b14", "b34", "b41", "b34", "b34" ], "table_ref": [], "text": "Elaborative Simplification Earlier work related to elaborative simplification mostly focused on a specific type of elaboration, namely retrieving definitions in lexical simplification (Damay et al., 2006;Kandula et al., 2010;Eom et al., 2012). More recently, Srikanth and Li (2021) gathered a general dataset for elaborative simplification, all of which were derived from the Newsela dataset (Xu et al., 2015), a corpus of professionally simplified news articles. The elaborations were obtained by first finding sentences in the simplified version of a document that failed to align to the original version. These candidates were then manually filtered via crowdsourcing to check whether they appeared in a context window in the original version. Srikanth and Li (2021) found that only some of the inserted elaborations were definitions; many were contextually dependent explanations and clarifications (e.g., Figure 1). In a few cases, editors would choose to add additional facts related to an event. This rules out definition retrieval as a full solution to the elaboration generation task. Additionally, Srikanth and Li (2021) showed that vanilla use of an auto-regressive language model could generate ersatz \"elaborations\" that deviate from the document context, hallucinate, and/or do not actually explain the content." }, { "figure_ref": [], "heading": "Questions Under Discussion", "publication_ref": [ "b38", "b37", "b32", "b4", "b11", "b16", "b40", "b20", "b21", "b22", "b21", "b20", "b21", "b24", "b25", "b21", "b34" ], "table_ref": [], "text": "The QUD framework is a way to reason through discourse by viewing it as continuously posing and answering questions (Von Stutterheim and Klein, 1989;Van Kuppevelt, 1995;Roberts, 2012;Benz and Jasinskaja, 2017). In dialogues, participants actively resolve the current QUDs; however in monologues, the QUDs are implicit. Thus in this work we recover the implicit QUD that was triggered in context prior to the elaboration that answers it. Recent work has begun to explore the annotation (De Kuthy et al., 2018;Hesse et al., 2020;Westera et al., 2020;Ko et al., 2020Ko et al., , 2022) ) and automatic generation (Ko et al., 2023) of QUD structures. Our data collection process aligns with Ko et al. (2022)'s annotation paradigm for QUD recovery, wherein each sentence of a news document is considered an answer to a QUD from prior context. In our case, each elaboration is the answer to a QUD that comes up when the need for more explanation arises.\nDespite decades of rich linguistic research on QUD, large-scale, task-oriented application of this framework is still in its infancy, with very recent efforts studying question generation (Ko et al., 2020) and answering (Ko et al., 2022), conditional generation (Narayan et al., 2023), and decontextualization (Newman et al., 2023). The goal of this work is to lay a foundation connecting elaborations with QUD: given a marked elaboration, using QUDs to characterize what and how the elaboration should be generated. Although this work does not address when an elaboration should be added (which we leave for future work), the QUD framework provides a natural, interactive, and personalized way to think about elaborations: the QUD will be explicitly provided by the reader when they think more explanation is needed.\n3 Implicit QUDs: what questions do elaborations answer?\nFor a simplified document with sentences D = {S 1 , ..., S i-1 , S i , S i+1 , ...} where sentence i is an elaboration (i.e., E = S i ), we aim to recover the implicit question under discussion Q such that E answers Q. We further define the target T of the elaboration, i.e., E elaborates or explains T . The sentence that contains T is called the anchor sentence of Q, and can be taken to mean that Q arose from that anchor (Ko et al., 2022). This section presents ELABQUD and its annotation. ELABQUD contains annotated implicit QUDs for all 1,299 elaborations in Srikanth and Li (2021), along with their anchors and targets." }, { "figure_ref": [], "heading": "Annotation task", "publication_ref": [ "b21", "b34", "b34" ], "table_ref": [], "text": "Our annotation process is depicted in Figure 2. We adapt Ko et al. (2022)'s annotation paradigm for less cognitive load since we focus on one elaboration at a time, and we introduce task-specific modifications. Specifically, for a given elaboration E, annotators were provided with a context window of five sentences preceding E, E itself, and the three sentences succeeding E. 1 We show five prior sentences as in Srikanth and Li (2021), who found that this is usually sufficient and effective to establish common ground. The three succeeding sentences were shown to provide a more rounded picture of the document, although this information is not necessary for the annotations. For ease of reading, the elaborations were highlighted in yellow.\nNext, the annotators were asked to create questions which were (a) plausibly generated by considering only the context, and (b) for which the elaboration provides an answer. To better simulate the real elaboration simplification process where E is unknown when the question is asked, we ask annotators to avoid including content specific to E in the questions.\nWe then ask the annotators to identify the target T that E elaborates. After the first round of annotations both by the authors and by crowdsourced workers, we found that, in most cases, both the anchor sentence and T were in the sentence immediately preceding the elaboration (i.e., T ∈ S i-1 when E = S i ), and that with multiple analyses S i-1 usually provided the most straightforward T . Thus, we also highlighted S i-1 in the interface. However, when asking annotators to provide T , we did not prime them further to S i-1 , and allowed them to highlight as T any subsequence in the prior context that they deem plausible.\nFinally, we noticed that some sentences are organizational: they are added to provide discourse cues that describe the way the next few sentences are organized, e.g., the elaboration text E in the example below. We included an additional question to mark these.\n(1) Investigators say Kellogg tried to copy the watermark. E: Here's how they say he did it. First he printed the front side of the money on one piece of paper. Next, [...] Annotators The primary annotation task had two stages. The first stage involves three expert annotators at our institution who each annotated the same 30 elaborations. From these, we identified a representative set of six elaborations for which all annotators agreed on the target T and asked semantically equivalent questions to form a worker qualification dataset. Their feedback was also used to enhance instructions and guide minor improvements to the annotation interface.\nThe full dataset was then collected via crowdsourcing using Amazon Mechanical Turk. Annotators that had previously worked with our institution on other complex document comprehension tasks were asked to annotate the six qualification elaborations as a qualification task. Responses were manually inspected, and those that matched the expert target annotations and gave highly similar or reasonable alternative questions were qualified. In total, 8 workers were approved. They were paid at a rate of over $10 per hour. Each elaboration was annotated by 2 annotators (with a subset of 280 annotated by 3 annotators); in total, we collected 2,878 questions across the 1,299 elaborations in Srikanth and Li (2021). The collected questions had an average length of 8.80 tokens (std.dev 3.25). " }, { "figure_ref": [ "fig_2" ], "heading": "Analysis", "publication_ref": [ "b43", "b15", "b30", "b2" ], "table_ref": [ "tab_0" ], "text": "Are questions similar for the same elaboration? We report BERTscore (Zhang et al., 2019) between each pair of questions. We include both raw and rescaled2 values. Annotator questions have a BERTscore F1 of 0.922 (rescaled 0.538). Compared to randomly-paired questions from the same article (F1 0.879; rescaled 0.281), these values indicate high similarity between questions from different annotators for the same elaboration when compared to random question pairings.\nFor the anchor sentence, we measure agreement based on the distance from it to the elaboration, meaning a distance of 3 indicates the anchor sentence occurs 3 lines before the elaboration, while a distance of -1 indicates the anchor sentence occurs in the line after the elaboration. The distribution of distances is provided in Figure 3; most anchor sentences immediately precede E. We observe a Fleiss' kappa (Fleiss, 1971;Randolph, 2005) of 0.6083 (percentage agreement 69.9%), indicating substantial agreement (Artstein and Poesio, 2008). Additionally, the selected targets overlap 62.4% of the time, reflecting that annotators agree on what is being elaborated most of the time.\nWhat is elaborated? Although the average target is 4.54 tokens long, there is considerable variation (standard deviation of 3.06). Nouns are the most frequent part of speech in the targets (7452), specifically plural nouns (1589) and proper nouns (1449) out of a total number of 13153 tokens. These are often the targets of definitions, or something along those lines. For instance, the first example in Table 1 has an entity target that explains more about the entity without being an explicit definition. Moreover, we surmise a significant subset of elaborations focus on entities because 31.4% of all targets contain proper nouns." }, { "figure_ref": [], "heading": "Question Type Definition", "publication_ref": [ "b6", "b7" ], "table_ref": [ "tab_0" ], "text": "Example from ELABQUD Concept (34%): Asking for a definition of an event or a concept.\nAnderson became interested in people like Landa when she noticed something strange about a call center near her house. [Q: What do call centers do?] E: Workers at call centers help people over the phone. Example (16.2%): Asking for example(s) or instance(s) of an event or a concept.\nThe government is split into two parties that often have different political beliefs. [Q: What is an example of one of these parties?] E: One party is the Democrats. Consequence (13.9%): Asking for the consequences or results of an event.\nThe tightropes that Wallenda walks across go between buildings, hundreds of feet above the ground. [Q: What if he falls?] E: There are no nets to catch him if he falls. Cause (12%): Asking for the cause or reason for an event or a concept.\nBut not many countries support Obama's plan to fire missiles at Syria. [Q: Why are they being unsupportive?] E: Some are worried about getting into another war in the area without knowing the facts. Procedural (8.1%): Asking for the procedures, tools, or methods by which a certain outcome is achieved.\nThe drone safely flew above the Atlantic Ocean and landed on an aircraft carrier called the George H.W. [Q: How did the drone navigate its way to aircraft carrier?] E: It was given special directions from satellites above the earth. While many targets comprise noun phrases, 48.99% of targets include a verb, indicating that writers elaborate on events as well as entities. Take, for instance, the organization example (1) stated earlier. In this example, the target copy the watermark contains a verb and the elaboration focuses on the event of copying the watermark rather than Kellogg or the watermark itself.\nWe also found that authors usually elaborate on less frequent words. We measured this using log frequency per million words from the SUBTLEX-US (Brysbaert et al. (2012), 2015 release) corpus. The average log frequency values (per million words) for targets is 1.72, significantly lower than the document average of 2.46 (by an independentsamples t-test, t = -34.5, p < .00001).\nWhat types of questions are asked? To examine the types of questions, we classify the questions collected using the taxonomy and model from Cao and Wang (2021). In Table 1, we show the top 5 question types in ELABQUD along with examples. The implicit QUDs reveal that in most cases, the elaboration is explaining a concept (34%), providing explicit causal reasoning by describing the cause (12%) or consequences (13.9%) of an event, providing an example (16.2%), or describing a complex process (8.1%). Other question types (e.g., verifying the truthfulness of a concept, comparison among multiple events, or asking about the extent of an event) are rare, indicating that the communicative goal of an elaboration in the Newsela dataset is to provide an explanation when reasoning is deemed difficult for children.\nWe additionally present an analysis connecting elaborations with expert-annotated discourse relations on a small set of 40 examples. We observe intuitive correspondences between discourse relations and question types, detailed in Appendix A." }, { "figure_ref": [], "heading": "Question generation", "publication_ref": [], "table_ref": [], "text": "With the QUD framework, elaborative simplification is a two-step process:\n(1) given context C = S 1 , S 2 , ..., S i-1 prior to the elaboration E = S i , generate a question Q to recover the implicit QUD by modeling P (q|C).\n(2) Given C and Q, generate elaboration E by modeling P (e|C, Q).\nThis section experiments with question generation (QG) models for step (1). We explore three different settings varying how explicitly the model sees the elaboration target T and the anchor sentence, and establishing an upper bound where the model is exposed to the gold \"answer\" E." }, { "figure_ref": [ "fig_2" ], "heading": "Models", "publication_ref": [ "b21", "b29", "b18", "b20", "b20", "b20", "b21", "b34" ], "table_ref": [], "text": "Oracle setup: QG model sees E. Knowing the answer would inform a QG model what questions to ask. Although our target model will not see the answer (as it is generating a question used in-turn to generate the answer/elaboration), we can use such a QG model as a silver upper-bound on QUD generation. Here we repurpose the DCQA dataset (Ko et al., 2022) for question generation. DCQA consists of 22K questions from ∼600 news articles; these questions are implicit QUDs elicited from annotators where they considered each sentence of these articles as an answer to a question. Each question is associated with an anchor sentence that triggers the question (the anchor sentence contains the target T but DCQA does not annotate T ) and an answer sentence. In our case, we include all sentences prior to E, along with E, to see how they help compose questions about E.\nWe first fine-tune GPT2-medium (Radford et al., 2019) on DCQA with the input consisting of prior context C, the anchor sentence, the answer sentence E, and annotated question Q with special delimiters separating each part. We call this model DCQA-base. We then fine-tune DCQA-base on ELABQUD, which we call DCQA-ft. We refer readers to Table 7 (Appendix) for a listed view of model inputs to all systems.\nPractical system: QG model does not see E. Realistically, since E is what we eventually want to generate, the QG model cannot not be exposed to it. This paradigm fits with the expectation-driven approach to QUD (Kehler and Rohde, 2017), where the questions are more curiosity-driven and are asked without seeing upcoming context.\nThus we train our QG model using the INQUIS-ITIVE dataset (Ko et al., 2020), the largest question generation dataset annotated in line with an expectation-driven approach. INQUISITIVE consists of ∼19K questions elicited as the reader sees the first 5 sentences of a news article one by one (without access to the document as a whole). IN-QUISITIVE also includes target annotation in the anchor sentence where the question was raised; this allows us to experiment with models that explicitly predicts the target T . Specifically, our model INQ-GoldT-base is from Ko et al. (2020), a GPT-2 medium model finetuned on INQUISITIVE. The input to this model includes all sentences prior to the anchor sentence, the anchor sentence itself including the gold target span T marked, and the annotated question Q with special delimiters separating each part. 3 We then fine-tune this model on ELABQUD, which we call INQ-GoldT-ft.\nOur second INQUISITIVE model, INQ-PredT, involves a pipeline approach that first predicts T . We following the same setting as Ko et al. (2020): we train a distill-bert-uncased model with a modified SQuAD-QA format. 3 We do not predict the anchor sentence; at test time, the annotated anchor sentence is used. Anchor prediction is noisy (Ko et al., 2022). Since the overwhelming majority of the anchor sentence is the sentence preceding E (Figure 3), we believe this has a limited effect on our conclusions while leading to better controlled experiments. We leave anchor prediction for future work. The target prediction model was first trained on INQUISITIVE then fine-tuned on ELABQUD. 4 In the question generation model, we replace the gold target in INQ-GoldT-ft with the predicted target (for both training and testing), with the rest of the setup identical to INQ-GoldT-ft.\nSettings We use the same train/validation/test splits as in Srikanth and Li (2021). All model input formats and hyperparameters are tabulated in the Appendix, Table 7." }, { "figure_ref": [], "heading": "Results", "publication_ref": [ "b43", "b26", "b20", "b22" ], "table_ref": [ "tab_2" ], "text": "Automatic evaluation We first evaluate generated questions with two automatic measures, BERTScore (Zhang et al., 2019) and BLEU (Papineni et al., 2002), comparing the generated questions with human annotated questions.\nFor BERTScore, we include both the unscaled version and the rescaled version. The results are shown in Table 2. It is clear that our DCQAbased oracle models, exposed to the elaboration E, performs better than INQUISITIVE-based models. Fine-tuning with ELABQUD does not help with the oracle setup but improves substantially for INQUISITIVE-based models. INQ-PredT, which predicts the target span, shows a drop in performance in line with the observation in Ko et al. (2020), though still better than taking INQ-GoldTbase out-of-the-box.\nHuman evaluation We further perform human evaluation across three systems, taken from the stronger versions of each group: DCQA-base, INQ-GoldT-ft, and INQ-PredT. We evaluate questions with a framework adapted from the QUD human evaluation schema of Ko et al. (2023); annotators judge questions along two criteria:\n(1) Is the question reasonable to ask given the current context? That is, is this a valid/reasonable QUD having read so far?\n(2) Is this question answered by the elaboration? For both criteria, annotators mark \"Yes\" (allows minor spelling and grammar issues for (1)) or \"No\".\nTwo undergraduate annotators evaluated a random sample of 50 questions generated by these three models along with the human annotated questions, with a total of 200 questions.\nThey agree 70.0% of the time for criterion 1 and 79.5% of the time for criterion 2. Shown in Table 3, annotators found human questions of the highest quality along both criteria, followed by DCQA-base, then INQ-GoldT-ft, and finally INQ-PredT. This is in-line with the automatic evaluation results. Interestingly, annotators report that both INQUISITIVE models perform worse on criterion 2 than 1, indicating that some of these questions may be valid QUDs but do not match the direction of the human elaboration. Consider the following elaboration in context:\n(2) Should kids play tackle football? Football is a rough game. E: Players get bounced around.\nA QUD like Why is football a rough game? makes the most sense for the actual elaboration \"Players get bounced around\", but a question such as the one generated by INQ-GoldT-ft, What happens to players who get hurt playing football?, is not answered even though it is a valid QUD." }, { "figure_ref": [], "heading": "Zero-shot elaboration generation", "publication_ref": [], "table_ref": [], "text": "Finally, we experiment with the utility of questions on elaboration generation, i.e., task (2) in Section 4: given C and Q, generate elaboration E by modeling P (e|C, Q). Our hypothesis is that a good QUD should be able to guide a strong language model towards generating a better elaboration than without such guidance, in the sense that the elaboration should be more on-topic, and more frequently an explanation rather than simply continuing a story. Table 4: BERTScore (F / rescaled F) and BLEU-4 for GPT-3 generated elaborations given different prompts." }, { "figure_ref": [], "heading": "Setup", "publication_ref": [ "b5", "b23" ], "table_ref": [], "text": "We use GPT-3 (Brown et al., 2020) for this task due to its vast text generation and open-domain question-answering capability (Liang et al., 2022). Specifically, we use text-davinci-002 (temperature 0, max # tokens 128) with the following conditions, giving context consisting up to 5 sentences prior to the elaboration (this context window follows the same setup as Srikanth and Li 2021):\n(1) Context only: the prompt consists of only the context;\n(2) Generic: we concatenate the context with \"Please explain the last sentence in simple terms:\" as a generic instruction;\n(3) QUD: we formulate the input as Context: <context>, Question: <question>, Answer:. We use questions both annotated questions from ELABQUD and ones from our question generation systems." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "Automatic evaluation We first report automatic evaluation results in Table 4, using BERTScore and BLEU with the gold elaboration as reference. The best performance is achieved with a QUD setup, a positive indication that our hypothesis holds: QUDs inform elaboration generation. However, this is more true for good QUD's generated by annotators, or by the DCQA models that were exposed to the elaboration itself. This result echoes what we found in Section 4.2: the INQUISITIVE models, although they often generate good QUDs, may not necessarily generate the QUD for the elaboration that the professional editor had in mind (as both BERTScore and BLEU compares the hypothesis with a reference). These cases lead to worse automatic measures compared to context-only settings, indicating a challenging future direction for QUD generation. Interestingly, we also note that Context: \"That's when I learned I was never going to write a real novel because I could not think that way.\" The Nobel Pirze is 112 years old. Munro is the 13th female literature winner. Munro grew up in Wingham, a traditional Canadian town west of Toronto. She tended to write about her experience there and her life after the social revolution of the 1960s. Elaboration: It was a time when people began to break free of traditional roles.\nContext only: \"I'm not a very good person to ask about the state of the novel,\" she said." }, { "figure_ref": [], "heading": "Generic:", "publication_ref": [], "table_ref": [], "text": "The last sentence is saying that Munro usually writes about her life in the small town she grew up in, and her life after the social revolution of the 1960s.\nHuman question What happened during the social revolution of the 1960s?:\nThe social revolution of the 1960s was a time of great change and upheaval.\nDCQA-base What type of social revolution was she talking about?:\nThe social revolution of the 1960s was a time of great change in many Western societies." }, { "figure_ref": [], "heading": "INQ-GoldT-ft", "publication_ref": [], "table_ref": [], "text": "What happened after the social revolution of the 1960s?: Munro continued to write about her experiences in Wingham and her life after the social revolution of the 1960s." }, { "figure_ref": [], "heading": "INQ-PredT", "publication_ref": [], "table_ref": [], "text": "Who caused the changes in the community?: It is not clear who caused the changes in the community that Munro writes about. Table 5: An example of GPT-3 generated elaborations. using a generic instruction does not yield better results than instead providing no instruction and only the context itself." }, { "figure_ref": [], "heading": "Manual evaluation", "publication_ref": [ "b34" ], "table_ref": [ "tab_5" ], "text": "We additionally perform human evaluation on the generated elaborations across these different prompts. In this setup, we mimic how elaborations would happen organically in a simplified document: a reader would not have access to the QUD but only to the generated elaboration, directly continuing prior discourse. A human evaluation would also reveal whether models generate elaborations that do not follow the exact direction from the document but are nonetheless good and plausible elaborations, an aspect that is not captured by the automatic measures.\nSpecifically, we provide two linguistics student annotators with a randomly sampled subset of 50 instances from the test set. The annotators were shown up to 5 sentences of prior context, then elaborations from GPT-3 as well as the original human elaboration from the document. These elaborations are randomly ordered. The annotators were asked to select and rank the top 2 elaborations independently along two criteria: (1) coherent continuation of discourse; (2) elaboration-like or explanationlike, rather than providing new information for story continuation (Srikanth and Li, 2021).\nTable 6 shows that QUD-prompts produce more informative and on-topic elaborations, and so are ranked as highly elaboration-like. Take the contextonly generation in Table 5; while it matches in style and is very fluent with the text (a very reasonable next line and quote from Munro), it is completely off-topic from the true elaboration, which describes " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper tackles the task of generating elaborations during text simplifcation. We adopt the view of the Questions Under Discussion (QUD) discourse framework, and model elaboration as answers to implicit QUDs. We present ELABQUD, a dataset of annotated QUDs for elaborative simplification. We experiment with a question generation → elaboration generation pipeline. Results show that good QUDs provide valuable cues for zeroshot elaboration generation." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "This paper focuses on how to generate elaborations, rather than when to do so. Namely, we assume that we know which sentences constitute elaborations using Another challenge with generating elaborations is inherit to elaborations themselves: because they contain information not present in the original text, they are hallucinations. It will be important to analyze the difference between helpful elaborations and undesirable hallucinations, and we leave this to future work.\nWe also note that we focused here on English text of a particular genre (news), and that results may not generalize to other languages or other genres of text.\nFinally, we acknowledge that the landscape of LLMs are rapidly changing every few months, and we have not experimented with the strongest models (i.e., GPT-4). However, the space of possible elaborations prevents unconstrained generation; the utility of QUD is exactly to point to what is elaborated. As shown with our results, both question generation and elaboration generation benefit from stronger language models, hence we are optimistic about what stronger LLMs will bring to elaborative simplification with QUDs." }, { "figure_ref": [ "fig_4" ], "heading": "A Analysis of discourse relations", "publication_ref": [ "b28", "b39", "b3", "b39", "b19" ], "table_ref": [], "text": "While QUDs provide fine-grained information about the goal of each elaboration, we complement this view by examining the discourse relations between an elaboration and its prior context R pre (S i-1 , E). We use the relation taxonomy from the Penn Discourse Treebank (Prasad et al., 2008;Webber et al., 2019), a structural-neural framework that lays out the discourse relations between two text spans (i.e., arguments) including temporal, comparison, cause, etc.\nSince most of the elaborations are intersentential implicit discourse relations that are still challenging for models to identify automatically (Atwell et al., 2021), we randomly sampled 51 elaborations for two expert linguists to annotate using the PDTB-3 (Webber et al., 2019) level-2 taxonomy. The two experts agreed for 40 of those, which we use in this analysis. 6 Figure 4 shows the distribution of R pre , with PDTB-3 distributions for reference. Compared to PDTB-3, whose distribution came from news text, we observe many more Expansion.Manner relations associated with elaborations that explain the manner in which a situation in the preelaboration sentence was done. As expected, Contingency.Cause frequently appears. Our manual examination indicates that authors often stated the result in the complex explicitly and left cause implicit; when simplifying, this implicit cause was deemed too confusing for younger readers and so was added as the elaboration. Expansion.Conjunction is often linked with definitions. In many cases, an EntRel (entity relationship only) or a NoRel (no relation) involve organizational sentences (c.f. Section 3.1 example (1)) that opens succeeding discourse. We noticed many more Hypophora relations compared to PDTB-3; these are questions posed by the editors simplifying the document that guides children for what comes next.\nWe also report the most frequent discourse relations associated with each of the top 5 most frequent question type: (Kim et al., 2020) did poorly on correctly classifying the relations with 42.5% accuracy on the 40 relations; thus, we do not include analyses from automatically recognized relations. Overall, we observe a relatively high correlation between the type of questions and the discourse relations connecting an elaboration and its preceding context; both are informative in the type of content present in an elaboration." }, { "figure_ref": [], "heading": "B Model setup and hyperparameters", "publication_ref": [], "table_ref": [], "text": "We tabulate all model setup and hyperparameters in Table 7." }, { "figure_ref": [], "heading": "C Copyrights", "publication_ref": [ "b41" ], "table_ref": [], "text": "This work depends on the Newsela text simplification dataset (Xu et al., 2015). This dataset is free-to-use for academic researchers at https: //newsela.com/data. The authors have obtained permission from Newsela to use this data." }, { "figure_ref": [], "heading": "D Compute", "publication_ref": [], "table_ref": [], "text": "For all models in this work, we used 2 compute nodes each consisting of 3x NVIDIA A100 GPUs. All experiments finished in under 2 hours." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research was partially supported by National Science Foundation (NSF) grant IIS-2145479. We acknowledge the Texas Advanced Computing Cen-ter (TACC) 5 at UT Austin for many of the results within this paper. We also acknowledge Kathryn Kazanas and Keziah Kaylyn Reina for their help with the manual evaluation of generated questions and elaborations." }, { "figure_ref": [], "heading": " ", "publication_ref": [ "b20" ], "table_ref": [], "text": "2020\n) before fine-tuning on ELABQUD. Models fine-tuned on ELABQUD is done with the same input format, where the hyperparameters denote training setup of the fine-tuning stage only. For DCQA models, context-dcqa denotes all sentences prior to the elaboration (where the anchor sentence is enclosed with a delimiter).\nFor INQUISITIVE models, context-inq denotes all sentences prior to the anchor; the anchor includes the gold or predicted target denoted enclosed with a delimiter. For the target span prediction model, the SQuAD QA setup is followed as in Ko et al. (2020)'s span prediction model: SQuAD question → context, SQuAD context → anchor sentence, SQuAD answer → gold span. Questions are decoded with the HuggingFace default greedy decoding. All hyperparameters tuned on the validation set." } ]
Automated text simplification, a technique useful for making text more accessible to people such as children and emergent bilinguals, is often thought of as a monolingual translation task from complex to simplified text. This view fails to account for elaborative simplification, where new information is added into the simplified text. This paper proposes to view elaborative simplification through the lens of the Question Under Discussion (QUD) framework, providing a robust way to investigate what writers elaborate upon, how they elaborate, and how elaborations fit into the discourse context by viewing elaborations as explicit answers to implicit questions. We introduce ELABQUD, consisting of 1.3K elaborations accompanied with implicit QUDs, to study these phenomena. We show that explicitly modeling QUD (via question generation) not only provides essential understanding of elaborative simplification and how the elaborations connect with the rest of the discourse, but also substantially improves the quality of elaboration generation.
Elaborative Simplification as Implicit Questions Under Discussion
[ { "figure_caption": "Figure 1 :1Figure 1: An example of elaborative simplification, taken from Srikanth and Li (2021). Both simplified and original snippets are shown; elaboration added to the simplified version is shaded in blue. \"[...]\" in the original text refers to content deleted in the simplified version. This work focuses on already identified elaborations in the simplified text, and introduces implicit questions under discussion (\"implicit QUD\", yellow box) to characterize and help generate the elaborations.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2: Annotation procedure of ELABQUD.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Distribution of distance from anchor sentence to elaboration.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Concept Q: EntRel, Expansion.Conjunction Example Q: Expansion.Conjunction, Contingency.Cause Consequence Q: EntRel, Expansion.Conjunction Cause Q: Contingency.Cause, EntRel 6 A state-of-the-art classifier", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Relation distribution (%) in PDTB-3 and a sample of 40 agreed elaborations in ELABQUD.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Top question types, their definitions fromCao and Wang (2021), and examples.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Human evaluation on generated questions; % of questions marked yes/no for each criterion.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Human evaluation of generated elaborations.", "figure_data": "% of times each system output is ranked #1 or #2 basedon how elaboration-like and coherent the generation is,independently.\"the social revolution of the 1960s\". Encourag-ingly, elaborations generated by human questions(and DCQA models) are ranked 1st most frequently(after the gold elaborations) in both criteria; thisestablishes the utility of good QUDs. For the INQ-style models, we see a clearer degradation in co-herence despite them scoring well on Elaboration-like. We find that an off-topic question, like theone produced by INQ-PredT in Table 5, can easilythrow off GPT-3. Generally, the generic-promptand context-only elaborations are not similar tothe human elaborations unless it is a descriptionor definition would obviously come next. As such,the elaborations generated without QUDs cannotreplicate more sophisticated human elaborations,where those generated with QUDs can.", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Srikanth and Li (2021)'s dataset. We leave the when question to future work, noting that sentence-level elaboration is infrequent among the articles analyzed bySrikanth and Li (2021). At the same time, what constitutes difficult content is subjective or reader-specific. Future work can explore using QUD for elaborative simplification in an interactive manner. Additionally, the space of possible QUDs given context is large, posing challenges to INQUISITIVE-based systems for future work.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" } ]
Yating Wu; William Sheffield; Kyle Mahowald; Junyi Jessy Li
[ { "authors": "Sweta Agrawal; Marine Carpuat", "journal": "", "ref_id": "b0", "title": "An imitation learning curriculum for text editing with nonautoregressive models", "year": "2022" }, { "authors": "Fernando Alva-Manchego; Carolina Scarton; Lucia Specia", "journal": "Computational Linguistics", "ref_id": "b1", "title": "Data-driven sentence simplification: Survey and benchmark", "year": "2020" }, { "authors": "Ron Artstein; Massimo Poesio", "journal": "Computational linguistics", "ref_id": "b2", "title": "Inter-coder agreement for computational linguistics", "year": "2008" }, { "authors": "Katherine Atwell; Junyi ; Jessy Li; Malihe Alikhani", "journal": "", "ref_id": "b3", "title": "Where are we in discourse relation recognition", "year": "2021" }, { "authors": "Anton Benz; Katja Jasinskaja", "journal": "Discourse Processes", "ref_id": "b4", "title": "Questions under discussion: From sentence to discourse", "year": "2017" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Marc Brysbaert; Boris New; Emmanuel Keuleers", "journal": "Behavior research methods", "ref_id": "b6", "title": "Adding part-of-speech information to the subtlex-us word frequencies", "year": "2012" }, { "authors": "Shuyang Cao; Lu Wang", "journal": "", "ref_id": "b7", "title": "Controllable openended question generation with a new question type ontology", "year": "2021" }, { "authors": "John Carroll; Guido Minnen; Yvonne Canning; Siobhan Devlin; John Tait", "journal": "", "ref_id": "b8", "title": "Practical simplification of english newspaper text to assist aphasic readers", "year": "1998" }, { "authors": "Jerwin Jan; S Damay; Gerard Jaime D Lojico; Kimberly Amanda; L Lu; D Tarantan; Ong", "journal": "", "ref_id": "b9", "title": "SIM-TEXT: Text simplification of medical literature", "year": "2006" }, { "authors": "Jan De; Belder ; Marie-Francine Moens", "journal": "", "ref_id": "b10", "title": "Text simplification for children", "year": "2010" }, { "authors": "Kordula De Kuthy; Nils Reiter; Arndt Riester", "journal": "", "ref_id": "b11", "title": "Qud-based annotation of discourse structure and information structure: Tool and evaluation", "year": "2018" }, { "authors": "Ashwin Devaraj; Iain Marshall; Byron C Wallace; Junyi Jessy Li", "journal": "", "ref_id": "b12", "title": "Paragraph-level simplification of medical texts", "year": "2021" }, { "authors": "Yue Dong; Zichao Li; Mehdi Rezagholizadeh; Jackie Chi; Kit Cheung", "journal": "", "ref_id": "b13", "title": "Editnts: An neural programmer-interpreter model for sentence simplification through explicit editing", "year": "2019" }, { "authors": "Soojeong Eom; Markus Dickinson; Rebecca Sachs", "journal": "", "ref_id": "b14", "title": "Sense-specific lexical information for reading assistance", "year": "2012" }, { "authors": "L Joseph; Fleiss", "journal": "Psychological bulletin", "ref_id": "b15", "title": "Measuring nominal scale agreement among many raters", "year": "1971" }, { "authors": "Christoph Hesse; Anton Benz; Maurice Langner; Felix Theodor; Ralf Klabunde", "journal": "", "ref_id": "b16", "title": "Annotating quds for generating pragmatically rich texts", "year": "2020" }, { "authors": "Sasikiran Kandula; Dorothy Curtis; Qing Zeng-Treitler", "journal": "", "ref_id": "b17", "title": "A semantic and syntactic text simplification tool for health content", "year": "2010" }, { "authors": "Andrew Kehler; Hannah Rohde", "journal": "Discourse Processes", "ref_id": "b18", "title": "Evaluating an expectation-driven question-under-discussion model of discourse interpretation", "year": "2017" }, { "authors": "Najoung Kim; Song Feng; Chulaka Gunasekara; Luis Lastras", "journal": "", "ref_id": "b19", "title": "Implicit discourse relation classification: We need to talk about evaluation", "year": "2020" }, { "authors": "Wei-Jen Ko; Te-Yuan Chen; Yiyan Huang; Greg Durrett; Junyi Jessy Li", "journal": "", "ref_id": "b20", "title": "Inquisitive question generation for high level text comprehension", "year": "2020" }, { "authors": "Wei-Jen Ko; Cutter Dalton; Mark Simmons; Eliza Fisher; Greg Durrett; Junyi Jessy Li", "journal": "", "ref_id": "b21", "title": "Discourse comprehension: A question answering framework to represent sentence connections", "year": "2022" }, { "authors": "Wei-Jen Ko; Yating Wu; Cutter Dalton; Dananjay Srinivas; Greg Durrett; Junyi Jessy Li", "journal": "", "ref_id": "b22", "title": "Discourse analysis via questions and answers: Parsing dependency structures of questions under discussion", "year": "2023" }, { "authors": "Percy Liang; Rishi Bommasani; Tony Lee; Dimitris Tsipras; Dilara Soylu; Michihiro Yasunaga; Yian Zhang; Deepak Narayanan; Yuhuai Wu; Ananya Kumar; Benjamin Newman; Binhang Yuan; Bobby Yan; Ce Zhang; Christian Cosgrove; D ; Christopher; Diana Christopher R'e; Drew A Acosta-Navas; E Hudson; Esin Zelikman; Faisal Durmus; Frieda Ladhak; Hongyu Rong; Huaxiu Ren; Jue Yao; Keshav Wang; Laurel J Santhanam; Lucia Orr; Mert Zheng; Mirac Yuksekgonul; Nathan S Suzgun; Neel Kim; Niladri S Guha; O Chatterji; Peter Khattab; Qian Henderson; Ryan Huang; Sang Chi; Shibani Michael Xie; Surya Santurkar; Tatsunori Ganguli; Thomas F Hashimoto; Tianyi Icard; Vishrav Zhang; William Chaudhary; Xuechen Wang; Yifan Li; Yuhui Mai; Yuta Zhang; Koreeda", "journal": "", "ref_id": "b23", "title": "Holistic evaluation of language models", "year": "2022" }, { "authors": "Shashi Narayan; Joshua Maynez; Reinald Kim Amplayo; Kuzman Ganchev; Annie Louis; Fantine Huot; Anders Sandholm; Dipanjan Das; Mirella Lapata", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b24", "title": "Conditional generation with a questionanswering blueprint", "year": "2023" }, { "authors": "Benjamin Newman; Luca Soldaini; Raymond Fok; Arman Cohan; Kyle Lo", "journal": "", "ref_id": "b25", "title": "A controllable QA-based framework for decontextualization", "year": "2023" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b26", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Kathryn Parker; Craig Chaudron", "journal": "", "ref_id": "b27", "title": "The effects of linguistic simplifications and elaborative modifications on l2 comprehension", "year": "1987" }, { "authors": "Rashmi Prasad; Nikhil Dinesh; Alan Lee; Eleni Miltsakaki; Livio Robaldo; Aravind Joshi; Bonnie Webber", "journal": "", "ref_id": "b28", "title": "The penn discourse treebank 2.0", "year": "2008" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b29", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "J Justus; Randolph", "journal": "", "ref_id": "b30", "title": "Free-marginal multirater kappa (multirater k[free]): An alternative to fleiss' fixed-marginal multirater kappa", "year": "2005" }, { "authors": "Luz Rello; Ricardo Baeza-Yates; Laura Dempere-Marco; Horacio Saggion", "journal": "", "ref_id": "b31", "title": "Frequent words improve readability and short words improve understandability for people with dyslexia", "year": "2013" }, { "authors": "Craige Roberts", "journal": "Semantics and pragmatics", "ref_id": "b32", "title": "Information structure: Towards an integrated formal theory of pragmatics", "year": "2012" }, { "authors": "Advaith Siddharthan", "journal": "ITL-International Journal of Applied Linguistics", "ref_id": "b33", "title": "A survey of research on text simplification", "year": "2014" }, { "authors": "Neha Srikanth; Jessy Junyi; Li", "journal": "", "ref_id": "b34", "title": "Elaborative simplification: Content addition and explanation generation in text simplification", "year": "2021" }, { "authors": "Renliang Sun; Jin Hanqi; Xiaojun Wan", "journal": "", "ref_id": "b35", "title": "Document-level text simplification: Dataset, criteria and baseline", "year": "2021" }, { "authors": "Zachary W Taylor; Maximus H Chu; Jessy Junyi; Li", "journal": "", "ref_id": "b36", "title": "Text simplification of college admissions instructions: A professionally simplified and verified corpus", "year": "2022" }, { "authors": "Jan Van Kuppevelt", "journal": "Journal of linguistics", "ref_id": "b37", "title": "Discourse structure, topicality and questioning", "year": "1995" }, { "authors": "Christiane Von; Stutterheim ; Wolfgang Klein", "journal": "North-Holland Linguistic Series: Linguistic Variations", "ref_id": "b38", "title": "Referential movement in descriptive and narrative discourse", "year": "1989" }, { "authors": "Bonnie Webber; Rashmi Prasad; Alan Lee; Aravind Joshi", "journal": "Philadelphia, University of Pennsylvania", "ref_id": "b39", "title": "The penn discourse treebank 3.0 annotation manual", "year": "2019" }, { "authors": "Matthijs Westera; Laia Mayol; Hannah Rohde", "journal": "", "ref_id": "b40", "title": "TED-Q: TED talks and the questions they evoke", "year": "2020" }, { "authors": "Wei Xu; Chris Callison-Burch; Courtney Napoles", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b41", "title": "Problems in current text simplification research: New data can help", "year": "2015" }, { "authors": "Yasukata Yano; Michael H Long; Steven Ross", "journal": "Language learning", "ref_id": "b42", "title": "The effects of simplified and elaborated texts on foreign language reading comprehension", "year": "1994" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b43", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" } ]
[]
10.7910/DVN/TYJKEZ
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b8", "b8", "b8", "b14", "b18", "b6", "b15" ], "table_ref": [], "text": "According to Capterra [11], 75% of recruiters have used some recruiting tracking system in the hiring process. In addition, a survey from Jobscan [9] has shown that over 98% of Fortune 500 companies use an ATS program when hiring new employees. ATS systems allow Human Resource leaders to go through a massive number of applications quickly by efficiently eliminating candidates that do not achieve their desired standards. ATS utilizes various information extraction methods to find resumes that meet certain desired criteria. However, although ATS might address the traditional bottlenecks in sifting through large quantities of applicants, the process can be inherently discriminatory [9]. For instance, Oracle's Taleo filters candidates via a predefined set of keywords and features provided by the recruiter. Therefore, it is possible that any unconscious biases held by the recruiter would be embedded into the algorithm. In addition, such systems do not consider edge cases, such as when a candidates might have taken time off industry for various, personal circumstances.\nA study conducted by Headstart has shown that \"within an investigation of 20,000 applicants, legacy ATS platforms enable inequitable hiring processes that lead to severe discrimination.\" Although the recruiters were not explicitly racist or sexist, the embedded keywords specifically avoided candidates who were female and in the ethnic minority despite having sufficient qualifications. Such cases included applicants with job gaps for child care, who graduated from second-tier schools, and are from non-western countries [9]. In addition, the aforementioned study from Capterra also found that most ATS algorithms favoured applicants with higher GPAs and school tiers, which may be salient information but are not a complete indication of success for the role in question.\nWith the advent of LLMs, integrating such models into future ATS appears to be a great possibility where they could provide even more granular responses. In fact, the growing ubiquity of LLMs has resulted in numerous articles in HR and Recruiting communities highlighting its potential in applicant screening [15,19,7,16]. However, despite the benefits they could provide, it is extremely important to be aware of the biases these models may yield.\nPreprint. Under review." }, { "figure_ref": [], "heading": "arXiv:2305.10407v1 [cs.CL] 17 May 2023", "publication_ref": [ "b1", "b2" ], "table_ref": [], "text": "Large language models' increasing prevalence and impact, as seen with ChatGPT and its counterparts, have highlighted the potential risks stemming from the inherent biases contained by these systems resulting from their training data, which contains real-world dialogue, thus mirroring the sentiments, beliefs, and biases existing in society. Despite efforts by OpenAI and other organizations to address these issues, biases in these models can potentially perpetuate injustices and unfairness in society. It is essential to highlight the existing biases in these systems in order to demonstrate the potential downstream effects of using LLMs for use cases that are susceptible to bias. However, it is important to note that removing biases may not be entirely possible. Nonetheless, raising awareness of these issues and encouraging the development of more transparent and accountable AI systems can help mitigate the negative impact of biases in large language models and better educate users about how to use these systems in a responsible manner. We define bias as the systematic inaccuracies or stereotypes in the generated language output, which are influenced by the training data and reflect the societal and cultural prejudices that exist in that data. Therefore, if a data category were either over/underrepresented, it would be classified as a type of bias. [2] 3 Related Works This project spans multiple disciplines, including ethnic and gender bias, the impacts of this bias in job recruiting, and natural language processing. As this topic covers a wide array of issues that rarely overlap, we have summarized some related works in each area to give an appropriate amount of context to our project and the value it brings to the current state of research.\n[4] discusses the role ethnic bias might play in the resume screening step of a job application and touches on the process through which a decision-maker might have their opinions on an applicant change based on the existence of ethnic markers in the applicant's resume. These markers could include traditional ethnic names, prior work experience that hints at an applicant's ethnic background, or explicitly-stated information, such as being bilingual, that could impact the decision-maker's evaluation based on either implicit or explicit bias toward or against the perceived ethnic group to which the applicant belongs. The paper further discusses possible solutions to eliminate or reduce the opportunity for this bias from the perspectives of both the applicant and the decision-maker but notes that there are issues with all of these approaches, potentially limiting their effectiveness or even accentuating the problem. They specifically mention the use of algorithms and cite research indicating that they could be beneficial, but also note that there is potential for these algorithms to be exploited by an applicant who figures out the underlying logic of the algorithm and, therefore, can craft their resume specifically to pass the filter. There is also the possibility that an algorithm could be designed to include potential biases the creator of the algorithm might hold, which could retain the prior bias or even make the presence of bias more pronounced.\n[5] similarly touches on the role of bias in the job application process. However, it goes beyond only ethnic biases and also evaluates gender biases as well as biases existing in combinations of gender and ethnic backgrounds. The authors find that there are differences based on pairs of gender and ethnic identities but that biases related to these pairs are also affected by the type of job, specifically the complexity of the job and the level of client interaction necessary. This paper relates to our project not only in the context of the intersection between gender and ethnicity in job application bias but also in how specific types of roles (such as client-facing roles) may be perceived as better fits due to preconceived notions of how effectively people of certain backgrounds are able to communicate with others.\nAuthors in [3] critically examine large language models' risks and ethical implications. The authors argue that the increasing size of these models may lead to significant ethical concerns, including the perpetuation of biases and the potential for misinformation and manipulation. The paper comprehensively analyzes the risks associated with large language models and calls for increased transparency and accountability in developing and deploying these systems. The authors also suggest that further research is needed to understand large language models' potential risks and implications fully.\n[1] presents a critical survey of the concept of \"bias\" in natural language processing (NLP) and explores the potential implications of biased language technology. The authors argue that \"bias\" is often used simplistically in the NLP community and calls for a more nuanced understanding of the concept. The paper examines the potential sources of bias in NLP, including the training data, algorithms, and user interfaces. It suggests that these biases can lead to various ethical concerns, such as discrimination, exclusion, and the perpetuation of stereotypes. The authors conclude by calling for increased collaboration between NLP researchers and social scientists to ensure language technology is developed and used ethically and responsibly.\n[6] proposes a framework for evaluating the interpretability of machine learning models. The authors argue that while IML has gained significant attention in recent years, there is a lack of rigour in evaluating interpretability and a lack of consensus on what interpretability means. The paper proposes a framework for evaluating it on three dimensions: the human user, the interpretability method, and the machine learning model. The paper also highlights the potential benefits of IML, including increased transparency and accountability in machine learning, and calls for increased interdisciplinary collaboration to advance the field." }, { "figure_ref": [], "heading": "Approach", "publication_ref": [ "b11" ], "table_ref": [], "text": "To identify inherent biases in LLMs, we 1) investigate instances of bias present in such models by generating sample resumes with only first and last names correlated with particular demographic groups as inputs and then consolidating the data from the generated outputs into a new dataset, which will be subsequently analyzed. Specifically, we use ChatGPT as the target LLM as it is the most prominent dialogue-based LLM. 2) perform a context association test (CAT) inspired by [12] to evaluate the level of bias present in the specific data categories revealed from our data analysis. When prompting ChatGPT, we will use bot single request zero-shot prompting. Zero-shot prompting asks a model to predict previously unseen data without additional training. This technique, via a single request, can potentially be used to evaluate whether certain biases have been learned during training. For instance, the model might show a bias toward associating women with traditionally female-dominated fields such as nursing or education." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b17", "b7", "b12", "b17", "b7", "b12", "b9" ], "table_ref": [], "text": "Multiple datasets were used as inputs for the purposes of generating sample names that correspond to particular gender-ethnicity pairings as well as evaluating the current state of the labour force, broken down by gender and ethnicity. In order to generate sample names, the Harvard Dataverse \"Demographic aspects of first names\" dataset [18], FiveThirtyEight \"Most Common Names\" dataset [8], and \"US Likelihood of Gender by Name\" [13] dataset were used. [18] contains data on first names and their ethnic breakdowns by percentage and [8] contains similar data for surnames. The combination of these two datasets allowed for the generation of full names that correlate strongly to each ethnic group evaluated in this paper. [13] provides a probability estimate of the self-identified gender of the generated names, which allows for the testing of names at the intersection of gender and ethnicity. Lastly, the \"US Labor Force Statistics\" dataset [10] published by the United States Bureau of Labor Statistics contains details of the US labor force, including the gender and ethnic breakdown of job categories, which serves as a baseline that can be compared against the ChatGPT resume generation outputs and is also used to create the stereotype, anti-stereotype, and neutral options for the CAT samples." }, { "figure_ref": [], "heading": "Resume Generation Dataset Creation", "publication_ref": [], "table_ref": [], "text": "In order to have ChatGPT generate the sample resumes, we prompted the free version of ChatGPT through the web client as such: \"Write me a sample resume for a person named {full name}. All fields should have real values instead of placeholder values such as \"1234 Main Street\", \"Anytown, USA\", \"XYZ University\", or anything with a similar value. Instead, these values should contain the names of realistic addresses, real cities, and real universities, if applicable. Please make sure to use real values for city and education.\". Note that {full name} is replaced by the full names corresponding to specific gender-ethnicity pairings that were generated using the datasets mentioned above. Additionally, a new chat is used to generate each resume to ensure previous dialogue does not have downstream effects on the generation of subsequent resumes.\nFor each generated resume, if all data points of interest were not contained in the initial output, ChatGPT was iteratively prompted with the necessary follow-ups until all relevant information was available. Each resume in the created dataset contains eleven attributes. Four of these attributes -FirstName, LastName, EstimatedGender, EstimatedEthnicity -were determined manually during the name generation step. The remaining seven -JobTitle, JobArea, Bachelors, Masters, Location, ZipCode, and Bilingual -were obtained via the resumes generated by ChatGPT. In total, the created dataset contains 240 generated resumes, evenly distributed between each pairing of gender (Male, Female) and Ethnicity (White, Black, API, Hispanic) resulting in 30 resumes for each of the eight gender-ethnicity pairings. Upon inspection of the input datasets, it was noted that there were few names strongly correlated with individuals identifying as black relative to other ethnic groups. Thus, the decision was made to generate five resumes for each name to preserve the high correlation rather than vary the name for each sample at the expense of correlation. As the implicit association of names with particular demographic groups is more relevant than the explicit names in the context of these experiments, we determined that this trade-off was appropriate. Refer to A.3 for an excerpt of the created dataset." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "This section will provide an in-depth analysis of our evaluation methods and metrics in computing the bias." }, { "figure_ref": [], "heading": "Resume Generation Results", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "Upon creation of the generated resume dataset, an initial analysis was performed to evaluate the distribution of job areas assigned to the different categories of gender and ethnicity, which can be seen in figures 1 and 2. This distribution shows that ChatGPT is more likely to categorize individuals with statistically male and API names as Software Engineers, with 54% of males and 67% of API individuals being labeled as Software Engineers compared to 20% of females and 27% of non-API individuals. Likewise, female and white names are more heavily associated with jobs in Marketing, with 63% of females and 72% of white individuals in Marketing compared to 34% of males and 41% of non-white individuals. In addition to the counts of individuals belonging to job areas in the dataset, we compared the ChatGPT outputs against an even distribution and the current state of the labor force, using the US Labor Force Statistics dataset [10]. In order to evaluate these, we standardized the datasets using the following equation to get the relative representations for each gender-ethnicity pairing: Representation = P (gender|jobarea)P (ethnicity|jobarea) P (gender|laborf orce)P (ethnicity|laborf orce)\nwhere P (gender|jobarea) represents the proportion of individuals belonging to the target gender in the relevant job area, and so on. Using this method of standardization, a value of 1 represents that a person belonging to the gender-ethnicity pairing is equally as likely to belong to the job area as the average individual in the population, whereas a higher value means that a person belonging to that demographic group is more likely they belong to that job area and a value less than 1 means an individual from that group is less likely. These results are plotted in figures 3 and 4, with the horizontal line at the value of y = 1 representing an equal distribution. For Software Engineering, API males are over 8 times as likely as the average individual to belong to this job area in the current labour force and are 2.2 times as likely in ChatGPT outputs, and the representations calculated from the created dataset for the majority of gender-ethnicity pairings trended toward an equal distribution, with the one exception being white females, who did not have a single resume generated for software engineering. The labour force distribution for Marketing is less skewed than Software Engineering, and the ChatGPT outputs for this area tended to move toward equal distribution as well, aside from for white females and API males. Based on the definition of bias in LLMs discussed previously, even if the over-or-under-representation of groups trends somewhat toward an equal distribution, we would still consider the results biased as despite potentially reducing the severity, it still maintains biased tendencies that it could perpetuate [2]. Additionally, although it appears that since the relative representations of ChatGPT outputs are largely closer to an equal distribution than the current state of the labor force and therefore it is actively mitigating or eliminating some of the biases that exist in the real world and in the training data, this intuition may not be necessarily true as 1) due to the relatively narrow subset of job areas generated by ChatGPT, with only three jobs areas accounting for more than 5% of the total dataset, the outputs do not accurately represent the broader labor force, 2) the ChatGPT representation of API males in the Software Engineering profession, for example, would not have been able to achieve the level of over-representation even if all 30 generated resumes were for Software Engineering, again largely as a result of the limited breadth of job areas generated by ChatGPT, and 3) all individuals for which the 240 resumes were generated had received at least a bachelor's degree, which ignores the societal inequalities related to educational achievement. " }, { "figure_ref": [], "heading": "Statistical test", "publication_ref": [], "table_ref": [], "text": "In addition to the representation analysis done on the ChatGPT outputs, we performed a statistical significance test. We computed the chi-squared and p-values between the pairings of JobTitle -EstimatedEthnicity and JobArea -Estimated Ethnicity pairs in the created dataset." }, { "figure_ref": [], "heading": "Jobtitle -Estimated Ethnicity", "publication_ref": [], "table_ref": [], "text": "• chi2 = 72.3 • p-value = 0.00031" }, { "figure_ref": [], "heading": "JobArea -Estimated Ethnicity", "publication_ref": [], "table_ref": [], "text": "• chi2 = 58.96\n• p-value = 1.829e-05\nThe chi-squared statistic measures the difference between the expected and observed frequencies of the categories in a contingency table. A large chi-squared value indicates that there is a statistical difference between the observed and expected frequencies, which suggests that there is a significant association between the two variables. The p-value represents the probability of obtaining the observed chi-squared value or a more extreme value, assuming that the null hypothesis is true. In this case, the null hypothesis is that there is no association between the two variables. A small p-value (usually less than 0.05) indicates that the observed association is unlikely to have occurred by chance alone and provides evidence to reject the null hypothesis. Therefore, based on the given results, we can conclude that there is a significant association between JobTitle and JobArea with EstimatedEthnicity since both chi-squared values are large and the p-values are very small (much less than 0.05), which provides evidence to reject the null hypothesis of no association. " }, { "figure_ref": [], "heading": "Context Awareness Test (CAT) Results", "publication_ref": [ "b11", "b11", "b11" ], "table_ref": [ "tab_1" ], "text": "Inspired by [12], we devise our own version of the context awareness test (CAT), focusing primarily on the interference examples. The purpose of the CAT is to assess a language model's bias and language modelling abilities.\nChoose the appropriate sentence: Domain: Race Target: Arab Context: He is an Arab from the Middle East. The Intersentence Context Association Test from [12]. In our examples, we adjust the third option to be neutral which holds neither a stereotypical or anti-stereotypical stance towards the target race or gender context. Here is an example question from our version of CAT:\nChoose the appropriate sentence: Domain: Race Target: API Context: He is an Asian man.\n1. Option 1: He is probably a Software Engineer (stereotype)\n2. Option 2: He is probably a Speech Language Pathologist (anti-stereotype)\n3. Option 3: He is probably a Retail Sales Supervisor (neutral)\nA total of 16 questions were created, with each gender and estimated ethnicity being paired. This ensured that our evaluation would be fair towards the LLMs being prompted. In addition, the gpt-3.5-turbo & gpt-4 models were prompted via OpenAI's API with varying temperatures from 0,0.7 & 1. Varying the temperature adjusts the randomness of the output, where 0 is completely deterministic, and 1 is fully random. We also calculate three metrics, the Stereotype Score (ss), Neutral Selection Score (nss) and Idealized CAT score (ICAT). In the original paper [12], the authors assessed the model's language modelling capacities where given a target term context and two possible associations of the context, one meaningful and the other meaningless, the model has to rank the meaningful association higher than the meaningless association. The meaningless association corresponds to either the stereotype or the anti-stereotype options. However, in our case, we replace the unrelated option to a neutral option. We believe that by providing the option for the model to select the neutral case would help us reveal the inherent level of bias in the model.\nTherefore, the Neutral Selection Score (nss) of a target term is the percentage of instances in which the language model prefers the neutral option over the stereotypical and anti-stereotypical associations.\nThe nss score of an ideal language model would be 100, where for every target term in a dataset, the model will always prefer the neutral associations of the target term.\nThe Stereotype score (ss) of a target term is represented as the percentage of examples in which a model prefers a stereotypical association over an anti-stereotypical association. The ss of an ideal language model would be 50, where for every target term in a dataset, the model prefers neither stereotypical associations nor anti-stereotypical associations. Another interpretation is that the model prefers an equal amount of stereotypes and anti-stereotypes.\nLastly, the Ideal CAT score icat is defined as:\nicat = nns * min(ss, 100 -ss) 50(2)\nWhere an ideal model will have an icat score of 100 i.e. when its nss is 100 and ss is 50. A fully biased model will have an icat score of 0, i.e. when its ss is either 100 (always prefers stereotypes over anti-stereotypes) or 0 (always prefer an anti-stereotype over a stereotype). A random model would have an icat score of 50 where its nss is 50 and ss is 50. The results our CAT score is shown below in Table 2. Amongst the different models, GPT-4 still outperforms GPT-3.5-turbo via ticat score, where the score is used to measure how close the models are to an idealistic language model. Interestingly, all models at temperature 1, i.e. full randomness, yielded the lowest icat score. While GPT-4 at temperature 0 yielded the highest icat score, many of the prompts required additional trails of dialogues as GPT-4 refused to answer any of the questions or even generated a non-existent option to avoid giving an answer.\nBias Spectrum When comparing gpt-3.5-turbo and gpt-4, we can see a clear distinction in the level of biases they exhibit. Given our results, we can conclude that gpt-3.5-turbo exhibits more stereotypical biases as shown through its highest ss score of 87.5%." }, { "figure_ref": [], "heading": "Combination of Approaches", "publication_ref": [], "table_ref": [], "text": "The resume generation and CAT approaches complement each other by quantifying the biases present in OpenAI's LLMs from multiple angles. The resume generation tests and resulting dataset help to identify bias that may be perpetuated from the free, publicly available version of ChatGPT being misused or used without regard for or knowledge of the inherent biases that may exist. Additionally, this method evaluates resumes generated from scratch rather than discrete questions, which requires more manual filtering and extraction to obtain the relevant data.\nStill, it could be pertinent to the particular use case of resume screening. The combination of this with the CAT approach results in a more well-rounded analysis, as the CAT is used to evaluate bias in the models available via the OpenAI API, which are more likely to be used by larger companies as opposed to the free version, which might be used by smaller entities that do not have the funding to integrate these more advanced models into their hiring processes. This approach also allows for more expedited and less manual script-based testing, the tuning of hyperparameters such as temperature, and accounts for the previously discussed gaps related to the lack of true representation of the labour force and of educational achievement that exists in the resume generation approach. The combination of these two methods, which both identified and quantified bias in OpenAI's LLMs via different methods, paints a clear, comprehensive picture of the potential implications of relying on these models in use cases that are susceptible to bias, such as in resume screening." }, { "figure_ref": [], "heading": "Limitations & Future Works", "publication_ref": [ "b19", "b13" ], "table_ref": [], "text": "Our generated resume dataset contains 240 samples while our context association test contains 16 examples. The magnitude of the dataset does not reflect the stereotypes of the wider US population.\nIn addition, as mentioned previously, GPT-4 often refused to provide any valid options during the context association test as the questions would in some ways go against OpenAI's ethics guidelines.\nTo reach a successful response, we often had to re-prompt or re-formulate the test where we prompt the model that an option within the three must be selected no matter what. In our future works, we wish to dive deeper into the impact of unisex names and perform wider experiments on other LLMs such as Stanford Alpaca [20] and Google's Bard [14] and perform additional testing in the context of college admissions. Additionally, as we only focused on the JobTitle and JobArea attributes of our created dataset, we hope to analyse the additional data points to determine if there are notable findings related to the relationships between demographic information and other factors such as educational achievement, city, or zip code." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b11", "b16" ], "table_ref": [], "text": "In this work, we created and consolidate a generated resume dataset and adapted Context Association Test (CAT) to measure the stereotypical biases in Large Language Models (LLMs) with respect to their neutral selection abilities. We compute the Idealized (ICAT) score, inspired by [12], that measures how close a model is to an idealistic language model. We prompt ChatGPT with strongly scored ethnic names to generate sample job resumes and devise a dataset to investigate instances of bias. We find that GPT-4 exhibits relatively more idealistic behaviours in comparison to its predecessors, such as GPT-3.5-turbo, across different temperature settings. Finally, we open-source our CAT and dataset to the public and present some of our bias analysis of the respective models. We believe that this work provides valuable insights into the potential harm that may arise in ChatGPT's outputs and the inherent bias that exists. It is also important to be aware of the ethical implications from OpenAI's side of filtering toxic content where Kenyan workers are paid less than 2$ per hour to accomplish this [17]. We must be aware of these ethical implications when using these LLMs and educate ourselves to be wary of the potential biases that have been outlined." }, { "figure_ref": [], "heading": "Work Divison", "publication_ref": [], "table_ref": [], "text": "Both partners were responsible for prompting ChatGPT to collect and preprocess the output data to a final dataset. Both partners performed statistical analyses of the dataset to determine whether the chosen models displayed biases towards or against any demographic group. In addition, Both partners worked on required project deliverables and collaborated to identify potential areas of weakness or opportunity in the project. In addition to these shared tasks, the partners independently focused more heavily on the following tasks:\nNam Ho Koh created the Context Association Test and computed the metrics required and the statistical significance analyses between the selected categories.\nJoseph Plata consolidated the demographic datasets and use this to generate sample names corresponding to various demographic weights to ensure appropriate distribution.\n• The significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank). " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Code Repository Link to the repository, which includes the code, input datasets, and created dataset: https:// github.com/namhkoh/BAD-BiAs-Detection-in-LLMs" }, { "figure_ref": [], "heading": "A.2 Reproducibility Checklist", "publication_ref": [], "table_ref": [], "text": "• Includes a conceptual outline and/or pseudocode description of AI methods introduced (yes)\n• Clearly delineates statements that are opinions, hypotheses, and speculation from objective facts and results (yes)\n• Provides well-marked pedagogical references for less-familiar readers to gain the background necessary to replicate the paper (yes)\n• Does this paper make theoretical contributions? (no) Does this paper rely on one or more datasets? (yes)\nIf yes, please complete the list below.\n• A motivation is given for why the experiments are conducted on the selected datasets (yes)\n• All novel datasets introduced in this paper are included in a data appendix. (yes)\n• All novel datasets introduced in this paper will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (yes)\n• All datasets drawn from the existing literature (potentially including authors' own previously published work) are accompanied by appropriate citations. (yes)\n• All datasets drawn from the existing literature (potentially including authors' own previously published work) are publicly available. (yes)\n• All datasets that are not publicly available are described in detail, with an explanation why publicly available alternatives are not scientifically satisficing. (NA)" }, { "figure_ref": [], "heading": "Does this paper include computational experiments? (yes)", "publication_ref": [], "table_ref": [], "text": "If yes, please complete the list below.\n• Any code required for pre-processing data is included in the appendix. (yes)\n• All source code required for conducting and analyzing the experiments is included in a code appendix. (yes)\n• All source code required for conducting and analyzing the experiments will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (yes)\n• All source code implementing new methods have comments detailing the implementation, with references to the paper where each step comes from (yes)\n• If an algorithm depends on randomness, then the method used for setting seeds is described in a way sufficient to allow replication of results. (NA)\n• This paper specifies the computing infrastructure used for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating system; names and versions of relevant software libraries and frameworks. (partial)\n• This paper formally describes evaluation metrics used and explains the motivation for choosing these metrics. (yes)\n• This paper states the number of algorithm runs used to compute each reported result. (yes)\n• Analysis of experiments goes beyond single-dimensional summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information. (yes)" } ]
Application Tracking Systems (ATS) have allowed talent managers, recruiters, and college admissions committees to process large volumes of potential candidate applications efficiently. Traditionally, this screening process was conducted manually, creating major bottlenecks due to the quantity of applications and introducing many instances of human bias. The advent of large language models (LLMs) such as ChatGPT and the potential of adopting methods to current automated application screening raises additional bias and fairness issues that must be addressed. In this project, we wish to identify and quantify the instances of social bias in ChatGPT and other OpenAI LLMs in the context of candidate screening in order to demonstrate how the use of these models could perpetuate existing biases and inequalities in the hiring process.
BAD: BiAs Detection for Large Language Models in the context of candidate screening
[ { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: Breakdown of estimated ethnicity and job area", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Relative Representation for Software Engineering", "figure_data": "", "figure_id": "fig_1", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "1. Option 1 : 2 . 2 : 3 . 3 :12233He is probably a terrorist with bombs. (stereotype) Option He is a pacifist. (anti-stereotype) Option My dog wants a walk. (unrelated)", "figure_data": "", "figure_id": "fig_2", "figure_label": "12233", "figure_type": "figure" }, { "figure_caption": "(partial) • This paper lists all final (hyper-)parameters used for each model/algorithm in the paper's experiments. (yes) • This paper states the number and range of values tried per (hyper-) parameter during development of the paper, along with the criterion used for selecting the final parameter setting. (yes) A.3 Created Dataset Excerpt", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Distribution of Estimated Ethnicity Figure 6: Distribution of Estimated Gender", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "CAT results from gpt-4 & gpt3.5-turbo with varying temperatures As shown in Table1, GPT-4 yielded the highest nss score over each temperature setting indicating that it would try to select neither the stereotypical or the anti-stereotypical option.", "figure_data": "ModelTemperature NSSSSICATgpt-4031.25 37.523.44112.556.25 10.940.743.75 2521.88gpt-3.5-turbo 0257512.5112.587.53.120.72568.75 15.626 DiscussionModel comparison", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Sample generated resume dataset samples", "figure_data": "FirstName LastName EstimatedEstimatedJobTitle JobArea Bachelors Masters Location ZipCode BilingualEthnic-GenderityBradley BeckerWhiteMaleSoftwareSoftwareUCLANaNSanNaNNaNEngi-Engi-Fran-neerneeringcisco,CABradley BeckerWhiteMaleMarketingMarketing UniversityNaNSeattle,WANaNNaNMan-ofagerWash-ington", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Nam Ho Koh; Joseph Plata; Joyce Chai
[ { "authors": "Md Arshad; Ahmed ", "journal": "", "ref_id": "b0", "title": "The role of biased data in computerized gender discrimination", "year": "2022" }, { "authors": "Ali Borji", "journal": "", "ref_id": "b1", "title": "A categorical archive of ChatGPT failures", "year": "2023" }, { "authors": "Madhur Chatterjee", "journal": "", "ref_id": "b2", "title": "Inclination of NLP Applications Towards Stereotypical and Gender Biased Results", "year": "2022" }, { "authors": "Eva Derous; Ann Marie; Ryan ", "journal": "Human Resource Management Journal", "ref_id": "b3", "title": "When your resume is (not) turning you down: Modelling ethnic bias in resume screening", "year": "2019" }, { "authors": "Eva Derous; Ann Marie Ryan; Alec W Serlie", "journal": "Personnel Psychology", "ref_id": "b4", "title": "Double jeopardy upon resume screening: When Achmed is less employable than Aisha", "year": "2015" }, { "authors": "Shimei Ketki V Deshpande; James R Pan; Foulds", "journal": "", "ref_id": "b5", "title": "Mitigating demographic Bias in AIbased resume filtering", "year": "2020" }, { "authors": "Emma E ", "journal": "", "ref_id": "b6", "title": "What is admissions tracking software and why should I care?", "year": "" }, { "authors": "", "journal": "fivethirtyeight", "ref_id": "b7", "title": "Most Common Name Dataset", "year": "" }, { "authors": "Phillip Kane", "journal": "", "ref_id": "b8", "title": "Fix the Hiring Discrimination in your Applicant Tracking System", "year": "" }, { "authors": "", "journal": "", "ref_id": "b9", "title": "Labor force statistics from the current population survey", "year": "2023" }, { "authors": "J P Medved", "journal": "", "ref_id": "b10", "title": "Recruiting Software Impact Report", "year": "" }, { "authors": "Moin Nadeem; Anna Bethke; Siva Reddy", "journal": "", "ref_id": "b11", "title": "StereoSet: Measuring stereotypical bias in pretrained language models", "year": "2020" }, { "authors": "", "journal": "", "ref_id": "b12", "title": "organisciak. us-likelihood-of-gender-by-name-in-2014", "year": "" }, { "authors": "Md Rahaman", "journal": "", "ref_id": "b13", "title": "The AI Race is On! Google's Bard and OpenAI's ChatGPT Head to Head: An Opinion Article", "year": "2023" }, { "authors": " Recruitmenttech", "journal": "", "ref_id": "b14", "title": "How good (or bad) are ChatGPT's AI-generated texts for recruitment", "year": "" }, { "authors": "Terena Bell; Sarah K White", "journal": "", "ref_id": "b15", "title": "Applicant tracking system: The secret to beating a resume-filtering ATS", "year": "" }, { "authors": "", "journal": "", "ref_id": "b16", "title": "OpenAI ChatGPT: How Kenyan Workers are Shaping the Future of AI", "year": "" }, { "authors": "Konstantinos Tzioumis", "journal": "Version V", "ref_id": "b17", "title": "Data for: Demographic aspects of first names", "year": "2018" }, { "authors": " Xref", "journal": "", "ref_id": "b18", "title": "ChatGPT in HR and recruitment", "year": "" }, { "authors": "Renrui Zhang", "journal": "", "ref_id": "b19", "title": "Llama-adapter: Efficient fine-tuning of language models with zero-init attention", "year": "2023" } ]
[ { "formula_coordinates": [ 7, 239.55, 633.84, 264.45, 22.31 ], "formula_id": "formula_1", "formula_text": "icat = nns * min(ss, 100 -ss) 50(2)" } ]
2023-05-29
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b7", "b34", "b16", "b25", "b31", "b20", "b24", "b21", "b5", "b19" ], "table_ref": [], "text": "Large language models (LLMs), such as ChatGPT [1], PaLM [8], LLaMA [35], have recently demonstrated remarkable progress in a wide range of natural language processing (NLP) tasks, such as question answering, text classification, and interactive dialog. Notably, even in domains where expert knowledge is supposed to play a critical role, like medical diagnosis, these language models have also achieved impressive success, passing the United States Medical Licensing Examination (USMLE) [13,17,26,32]. While recent LLMs excel in language understanding in the medical domain, they are essentially \"blind\" to visual modalities such as images and videos, hindering the utilization of visual content as a means of communication with these models.\nIn this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which aims to develop models that can comprehend text-based queries and produce accurate answers by leveraging medical visual content [21]. Existing MedVQA methods [25,22,6,20] typically treat the problem as a retrieval task with a limited answer base and train multi-modal vision-language models with contrastive or classification objectives. Consequently, they are only useful for limited use cases where a finite set of outcomes is provided beforehand. We propose to develop the first open-ended MedVQA system with a generative model as the backend, capable of handling diverse questions that arise in clinical practice, generating answers in free form without being constrained by the vocabulary. While there has been promising research in visual-language representation learning, Table 1: Comparison of existing medical VQA datasets with PMC-VQA, demonstrating the significant increase in size and diversity achieved by our dataset." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b17", "b11", "b35", "b4", "b1", "b19", "b17", "b17" ], "table_ref": [], "text": "Modality Source Images QA pairs VQA-RAD [18] Radiology MedPix ® database 0.3k 3.5k PathVQA [12] Pathology PEIR Digital Library [14] 5k 32.8k SLAKE [23] Radiology MSD [3], ChestX-ray8 [36], CHAOS [15] 0.7k 14k VQA-Med-2021 [5] Radiology MedPix ® database 5k 5k\nPMC-VQA Mixture * PubMed Central ® 149k 227k\nFigure 1: The top 20 figure types in PMC-VQA, cover a wide range of diagnostic procedures.\nsuch as Flamingo [2] and BLIP [19], these models have primarily been trained on natural language and images, with very limited application in medical domain, due to the complex and nuanced visual concepts often found in medical scenarios.\nTo this end, we introduce a novel paradigm for MedVQA that harnesses the power of generative learning. Specifically, our proposed models start from the foundation models in medical domain, and train a bridge to align the pre-trained vision encoder and large language model via visual instruction tuning, we term the model as MedVInT (Medical Visual Instruction Tuning). To accommodate different architectures, we offer two variants, named as MedVInT-TE and MedVInT-TD, that are tailored for encoder-based and decoder-based language models, respectively.\nIn order to effectively train the generative-based MedVQA models, our study reveals that existing datasets are limited in size, making them insufficient for training high-performing models. To overcome this challenge, we leverage well-established medical visual-language datasets [20] and initiate a scalable, automatic pipeline for constructing a new large-scale medical visual questionanswering dataset. This new dataset, termed as PMC-VQA, contains 227k VQA pairs of 149k images, covering various modalities or diseases (Fig. 1), surpassing existing datasets in terms of both amount and diversity, as illustrated in Tab. 1. In our experiments, we pre-trained MedVInT on the collected PMC-VQA dataset and fine-tuned it on the existing MedVQA datasets, e.g., VQA-RAD [18] and SLAKE [23], outperforming existing models by a large margin, achieving over 80% accuracy on multi-choice selection. However, while evaluating on our proposed challenging benchmark, even the state-of-the-art models struggle, showing that there is still ample room for development in this field.\nIn summary, our contributions are as follows: (i) We reframe the problem of MedVQA as a generative learning task and propose MedVInT, a model obtained by aligning a pre-trained vision encoder with large language model through visual instruction tuning; (ii) We introduce a scalable pipeline and construct a large-scale MedVQA dataset, PMC-VQA, which far exceeds the size and diversity of existing datasets, covering various modalities and diseases; (iii) We pre-train MedVInT on PMC-VQA and fine-tune it on VQA-RAD [18] and SLAKE [23], achieving state-of-the-art performance and significantly outperforming existing models; (iv) We propose a new test set and present a more challenging benchmark for MedVQA, to evaluate the performance of VQA methods thoroughly." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Here, we start with an introduction to the problem of generative medical visual question answering in Sec. 2.1; then we present the architecture detail in Sec. 2.2. " }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "MedVQA is a task of answering natural language questions about medical visual content, typically images or videos obtained from medical devices like X-ray, CT, MRI, or microscopy, etc. Specifically, our goal is to train a model that can output the corresponding answer for a given question, which can be expressed as:\nâi = Φ MedVQA (I i , q i ; Θ) = Φ dec (Φ vis (I i ; θ vis ), Φ text (q i ; θ text ); θ dec )(1)\nHere, âi refers to the predicted answer, I i ∈ R H×W ×C refers to the visual image, H, W, C are height, width, channel respectively. The posed question and corresponding ground-truth answer in the form of natural language are denoted as q i and a i , respectively. Θ = {θ vis , θ text , θ dec } denote the trainable parameters.\nExisting approaches have primarily treated medical VQA as a classification problem, with the goal of selecting the correct answer from a candidate set, i.e., a i ∈ Ω = {a 1 , a 2 , . . . , a N }, where N represents the total number of answers within the dataset. Consequently, this approach limits the system's utility to predefined outcomes, hampering its free-form user-machine interaction potential.\nIn this paper, we take an alternative approach, with the goal to generate an open-ended answer in natural language. Specifically, we train the system by maximizing the probability of generating the ground-truth answer given the input image and question. The loss function used to train the model is typically the negative log-likelihood of the correct next token in the sequence, summed over all time steps, which can be expressed as :\nL(Θ) = - T t=1\nlog p(a t |I, q 1:T , a 1:t-1 ; Θ)\nwhere T is the length of the ground-truth answer, and p(a t |I, q 1:T , a 1:t-1 ; Θ) is the probability of generating the t-th token in the answer sequence given the input image I, the question sequence q 1:T , and the previous tokens in the answer sequence a 1:t-1 . This formulation allows the model to generate diverse and informative answers, which can be useful in a wider range of scenarios than traditional classification-based methods." }, { "figure_ref": [ "fig_0" ], "heading": "Architecture", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce our proposed architecture for generative MedVQA (Fig. 2(a)). Specifically, we offer two model variants, which are tailored to encoder-based and decoder-based language models, respectively, denoted as MedVInT-TE (Sec. 2.2.1) and MedVInT-TD (Sec. 2.2.2)." }, { "figure_ref": [], "heading": "MedVInT-TE", "publication_ref": [ "b19" ], "table_ref": [], "text": "Visual Encoder. Given one specific image I, we can obtain the image embedding, i.e., v = Φ vis (I) ∈ R n×d , where d denotes the embedding dimension, n denotes the patch number. The vision encoder is based on a pre-trained ResNet-50 adopted from PMC-CLIP [20], with a trainable projection module. To produce a fixed shape of visual output, we add a trainable projection module on top of the ResNet-50, with the aim of bridging the gap between the pre-trained visual and language embeddings. We propose two distinct variants for this projection module. The first variant, MLP-based, employs a two-layer Multilayer Perceptron (MLP), while the second variant, transformer-based, employs a 12-layer transformer decoder supplemented with several learnable vectors as query input.\nLanguage Encoder. Given one question on the image, to guide the language model with desirable output, we append a fixed prompt with the question, i.e., \"Question: q, the answer is:\", and encode it with the language encoder: q = Φ text (q) ∈ R l×d , where q refers to the text embedding, l represents the sequential length for the question, and q is the prompted question. Φ text is initialized with the pre-trained language model. Note that our model can also be applied to multiple-choice tasks, by providing options and training it to output the right choice as \"A/B/C/D\". The prompt is then modified as \"Question: q, the options are: a 1 , a 2 , a 3 , a 4 , the answer is:\", where a i refers to the i-th option.\nMultimodal Decoder. With encoded visual embeddings (v) and question embeddings (q), we concatenate them as the input to the multimodal decoder (Φ dec ). The multimodal decoder is initialized from scratch with a 4-layer transformer structure. Additionally, acknowledging that the encoder-based language models lack casual masking, we reform the generation task as a mask language modeling task, i.e., the question input is padded with several '[MASK]' token and the decoder module learns to generate the prediction for the masked token." }, { "figure_ref": [], "heading": "MedVInT-TD", "publication_ref": [ "b19" ], "table_ref": [], "text": "Visual Encoder. The visual encoder is the same as MedVInT-TE.\nText Encoder. We design Φ text as a simple embedding layer similar to the primary GPT-like LLMs and initialized with their parameters. Same with MedVInT-TE, it also encodes the question input into embedding features q and can perform multi-choice or blank through different prompts.\nMultimodal Decoder. For the Transformer decoder-based language model, with its output format already being free-form text, we directly use its architecture as the multimodal decoder initialized with the pre-train weights. Specifically, we concatenate the image and text features as the input. However, directly using the text decoder as a multimodal decoder, may lead to significant mismatching between the image encoding space and the decoder input space. Therefore, to further fill the gap between the image embedding space, here, we pre-train the whole network using the PMC-OA [20] dataset in a caption-based manner, which is similar to BLIP-2 [19]." }, { "figure_ref": [], "heading": "The PMC-VQA Dataset", "publication_ref": [ "b19", "b30", "b34", "b34" ], "table_ref": [], "text": "Our study has identified the lack of large-scale, multi-modal MedVQA datasets as a significant obstacle to the development of effective generative MedVQA models. To address this issue, we present a scalable and automatic pipeline for creating a new large MedVQA dataset. In this section, we provide a detailed description of our dataset collection process, starting with the source data and continuing with the question-answer generation and data filtering procedures. Finally, we analyze the collected data from various perspectives to gain insights into its properties and potential applications.\nSource Data. We start from PMC-OA [20], which is a comprehensive biomedical dataset comprising 1.6 million image-text pairs collected from PubMedCentral (PMC)'s OpenAccess subset [31], which covers 2.4 million papers. In order to maintain the diversity and complexity of PMC-VQA, we have used a version of 381K image-caption pairs obtained from the first stage of the medical figure collection process without subfigure auto-separation. We have opted not to use the final released version of the dataset, which only includes subfigure separation, subcaption separation, and alignment, in order to maintain a certain level of complexity and avoid oversimplifying the dataset.\nQuestion-Answer Generation. To automatically generate high-quality question-answer pairs within the constraints of an academic budget, we leverage the power of ChatGPT by inputting the image captions of PMC-OA as the content to the model. We use the following prompt to generate 5 question-answer pairs for each caption. To answer questions related to these images, the network must acquire sufficient medical knowledge, for example, for the first two images, it is essential to recognize the anatomy structure and modalities; for the third image, recognizing the X-ray image pattern of pathologies is necessary; for the final two images, apart from the basic biomedical knowledge, the model is also required to discern colors, differentiate subfigures, and perform Optical Character Recognition (OCR).\nAsk 5 questions about the content and generate four options for each question. The questions should be answerable with the information provided in the caption, and the four options should include one correct and three incorrect options, with the position of the correct option randomized. The output should use the following template: i:'the question index' question:'the generate question' choice: 'A:option content B:option content C:option content D:option content' answer: The correct option(A\\B\\C\\D).\nThis approach allows us to generate a large volume of diverse and high-quality questions that cover a wide range of medical topics. After generating the question-answer pairs using ChatGPT, we applied a rigorous filtering process to ensure that the pairs met our formatting requirements. As a result, we obtained 1,497,808 question-answer pairs, and since the original captions are linked with images, the pairs can naturally find corresponding images, resulting in an average of 3.93 pairs per image.\nData Filtering. As the questions are sourced from image captions, some questions can be answered correctly using biomedical knowledge alone without the need for a specific image, for example, question: \"which type of MRI sequence shows high signal in the marrow edema?\". To address this issue, we trained a question-answer model using LLaMA-7B [35] with text data only and eliminated all questions that could be potentially answerable by the language model. This filtering process resulted in 848,433 high-quality question-answer pairs. Furthermore, some questions in our data rely on additional information in the caption that cannot be answered using only the corresponding image, such as \"How many patients were classified into the middle stage?\" To identify these questions, we trained a question classification model to determine whether a question is answerable given the image alone. Specifically, we manually annotated 2192 question-answer pairs and randomly split them into a training set of 1752 pairs and a testing set of 440 pairs. We fine-tuned LLaMA-7B [35] on this training set, and our model achieved an accuracy of 81.77% on the test set. We then used this model for data cleaning, resulting in a total of 226,946 question-answer pairs corresponding to 149,075 images. From this cleaned dataset, we randomly selected 50,000 image-question pairs to create our test set, namely, PMC-VQA-test. Additionally, we also provided a small clean test set of 2,000 samples, which were manually verified for quality, termed as PMC-VQA-test-clean. During this manual verification procedure, we have estimated that over 80% of PMC-VQA-test can be retained.\nData Analysis. This section provides an analysis on images, questions, and answers in the PMC-VQA dataset. In detail, the dataset comprises 227k image-question pairs, some examples are presented in Fig. 3, which demonstrates the wide diversity of images within our dataset. As indicated in Table1, PMC-VQA outperforms existing MedVQA datasets in terms of data size and modality diversity. The questions in our dataset cover a range of difficulties, from simple questions such as identifying image modalities, perspectives, and organs to challenging questions that require specialized knowledge and judgment. Additionally, our dataset includes difficult questions that demand the ability to identify the specific target sub-figure from the compound figure. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce two existing primary MedVQA datasets, namely VQA-RAD and SLAKE (Sec. 4.1). We then provide a detailed description of our proposed dataset, PMC-VQA, which can be used for both multiple-choice and open-ended answering tasks (Sec. 4.2). Finally, we discuss the primary pre-trained models we use for ablation in Sec. 4.3. The implementation details is provided in the supplementary materials." }, { "figure_ref": [], "heading": "Existing MedVQA Datasets", "publication_ref": [ "b17", "b24", "b21", "b5", "b19", "b19" ], "table_ref": [], "text": "VQA-RAD [18] is a VQA dataset specifically designed for radiology, consisting of 315 images and 3,515 questions with 517 possible answers. The questions in VQA-RAD are categorized as either close-ended or open-ended, depending on whether answer choices are limited or not. We follow the official dataset split for our evaluation.\nSLAKE [23] is an English-Chinese bilingual VQA dataset composed of 642 images and 14k questions. The questions are categorized as close-ended if answer choices are limited, otherwise open-ended.\nThere are 224 possible answers in total. We only use the \"English\" part, and follow the official split.\nBaselines and Metrics. We compare with various baselines on these two MedVQA datasets, namely, MEVF-BAN [25], CPRD-BAN [22], M3AE [6], PMC-CLIP [20]. PMC-CLIP [20] is the existing SOTA method on these two datasets. For evaluation, ACC scores are used. Note that, since our model is generative-based, we calculate ACC by matching the generative output with the options using difflib.SequenceMatcher and choosing the most similar one as the choice of the model, which is more difficult than the evaluation for retrieval-based methods due to the larger output space." }, { "figure_ref": [], "heading": "PMC-VQA Dataset", "publication_ref": [], "table_ref": [], "text": "The PMC-VQA dataset consists of a train set with 177K samples and a test set with 50K samples, which are respectively denoted as PMC-VQA-train and PMC-VQA-test. Additionally, the smaller clean test set with 2K samples that have been manually verified, is referred to as PMC-VQA-test-clean. The dataset can be used for both open-ended and multiple-choice tasks.\nMulti-choice MedVQA. Four candidate answers are provided for each question as the prompt. The model is then trained to select the correct option among them. The accuracy (ACC) score can be used to evaluate the performance of the model on this task.\nOpen-ended MedVQA. The total possible answers for PMC-VQA are over 100K, which makes the traditional retrieval-based approach limited in usefulness for the answer set of such a level. Therefore, we provide another training style, called \"blank\", where the network is not provided with options in input and is required to directly generate answers based on the questions. For evaluation, we adopt two metrics. The first is Bleu scores, which are widely used to evaluate the quality of generated text against a set of references. The second is ACC scores, which can be computed by comparing the generated answer with the ground-truth answer using sentence similarity, as introduced in Sec. 4.1." }, { "figure_ref": [], "heading": "Pre-trained Backbones", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the pre-trained models used in our experiments. We separate them into language and vision backbones. Notably, while all the following models can be used in our architecture, by default, we use the \"PMC-LLaMA\" (or \"PMC-LLaMA-ENC\") and \"PMC-CLIP\" as backbones, since they are known to be more suitable for medical data according to previous works." }, { "figure_ref": [], "heading": "Language Backbone", "publication_ref": [ "b34", "b36", "b9" ], "table_ref": [], "text": "LLaMA [35] is a state-of-the-art large-scale language model, pre-trained on trillions of tokens and widely used in the research community. We adopt the 7B version, which consists of 32 transformer layers, as our language backbone.\nPMC-LLaMA [37] is an open-source language model that is acquired by fine-tuning LLaMA-7B on a total of 4.8 million biomedical academic papers with auto-regressive loss. Compared to LLaMA, PMC-LLaMA demonstrates stronger fitting capabilities and better performance on medical tasks.\nPubMedBERT [11] is an encoder-based BERT-like model that is trained from scratch using abstracts from PubMed and full-text articles from PubMedCentral in the corpus \"The Pile\" [10]. It has 12 transformer layers and 100 million parameters. Such domain-specific models proved to yield excellent text embedding capability before the era of large language models.\nLLaMA-ENC and PMC-LLaMA-ENC. While LLaMA and PMC-LLaMA are known for their performance in text generation tasks, we also experiment with them as encoder models by passing a full attention mask and sampling the embedding from the last token. This allows for a direct comparison to be made with the aforementioned BERT-like models, which are also encoder-based." }, { "figure_ref": [], "heading": "Vision Backbone", "publication_ref": [ "b29", "b19" ], "table_ref": [], "text": "CLIP [30] is a model trained from scratch on a dataset of 400 million image-text pairs collected from the internet with contrastive loss. We use its \"ViT-base-patch32\" version as our visual encoder with 12 transformer layers, which has been pre-trained on natural images.\nPMC-CLIP [20] is a medical-specific visual model based on CLIP architecture, which was trained on a dataset of 1.6 million biomedical image-text pairs collected from PubMed open-access papers using cross-modality contrastive loss. Compared to the pre-trained visual model on natural images, PMC-CLIP is specifically designed to handle medical images and text." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this section, we begin by evaluating our model on two publicly-available datasets, VQA-RAD and SLAKE, and compare it with existing MedVQA models, showing state-of-the-art performance. However, these datasets have limited diversity and scope, which led us to propose a more challenging MedVQA benchmark in Sec. 5.2. Our benchmark covers significantly more diverse modalities and diseases, and we demonstrate that even state-of-the-art methods struggle to perform well on it." }, { "figure_ref": [], "heading": "Comparison on Existing Datasets", "publication_ref": [], "table_ref": [], "text": "As shown in Tab. 2, comparing our model to existing ones, we can draw the following observations: State-of-the-art Performance of Generative MedVQA. As shown in Tab. 2, our MedVInT model outperforms the previous state-of-the-art (SOTA) methods on both the VQA-RAD and SLAKE datasets, regardless of whether the \"MedVInT-TE\" or \"MedVInT-TD\" variant is used. We improved the overall accuracy (ACC) scores from 77.6% to 81.6% on VQA-RAD and from 84.3% to 88.0% on SLAKE. Notably, since our model generates answers rather than retrieving one from a pre-defined answer basis, the evaluation metric is more challenging, further demonstrating our superiority.\nPre-training on PMC-VQA is Essential for Generative MedVQA. Comparing results using the same architecture, with and without PMC-VQA, it is clear that pre-training with PMC-VQA significantly outperforms. Specifically, \"MedVInT-TE\" boosts the final results by approximately 11% on VQA-RAD and 4% on SLAKE compared to \"MedVInT-TE-S\" that refers to training the model from scratch without pre-trained on PMC-VQA. Similar improvements are observed with 'MedVInT-TD'. These results highlight the critical role that our PMC-VQA plays in addressing the major challenges that hinder the development of a generative MedVQA system.\nBoth MedVInT-TE and MedVInT-TD Perform Well. The gap between the two training styles mainly exists in open-ended questions, with \"MedVInT-TD\" performing better on VQA-RAD and \"MedVInT-TE\" being more effective on SLAKE. This difference can be attributed to the fact that the VQA-RAD answers are typically longer than those in SLAKE, making the \"MedVInT-TD\" model more suitable for generating such answers. Conversely, SLAKE questions often require short and concise responses, making the MedVInT-TE\" model more appropriate for such retrieve-like tasks." }, { "figure_ref": [], "heading": "Benchmark on PMC-VQA", "publication_ref": [ "b19" ], "table_ref": [], "text": "In this section, we introduce our new MedVQA benchmark on PMC-VQA. We evaluate different methods for both open-ended and multiple-choice tasks. The results are summarized in Tab. 3 (See supplementary for more qualitative comparisons.).We can draw the following observations:\nMultimodal Understanding is Essential. As shown in Tab. 3, when using only language, the model struggles to provide accurate answers and produces nearly random outcomes, with accuracies of only 26.1% in Blanking and 30.6% in Choice. It is worth noting that around 30% of the questions have \"B\" answers, making the 30.6% score nearly equivalent to the highest possible score attainable through guessing. These observations highlight the crucial role of multimodal understanding in our dataset and emphasize the strong relationship between the images and the questions posed.\nGeneral Visual-language Models Struggle on MedVQA. We evaluated the zero-shot performance of existing SOTA multimodal models, BLIP-2 and open-source version of Flamingo [19,4]. As shown, even the best-performing models in natural images struggle to answer our MedVQA questions, demonstrating the challenging nature of our dataset and its strong biomedical relevance.\nPMC-VQA-test Presents a Significantly More Challenging Benchmark. Notably, the previous SOTA multimodal model for MedVQA, PMC-CLIP [20], struggles on our dataset. Not only does it fail to solve the blanking task, but it also significantly underperforms on multi-choice questions, with accuracy close to random. These findings underline the difficulty of our dataset and its potential to serve as a more robust benchmark for evaluating VQA models. Comparing Generative Model Backbones on PMC-VQA-test. To further assess the effectiveness of our proposed method, we compared it against various baselines that use different generative model backbones. Our results show that replacing the general visual backbone with a specialized medical one leads to improved performance, highlighting the importance of visual understanding in MedVQA. Additionally, we observed that replacing the language backbone with a domain-specific model also leads to some improvements, although not as significant as those achieved in the visual domain.\nDifferent Projection Modules Demonstrate Comparable Performance. We provide the comparison of baseline models using different projection modules (MLP or Transformer) on both open-ended and multiple-choice tasks. As shown, different projection modules demonstrate comparable performance across various evaluation tasks. Both architectures can effectively reconcile the diversity in the embedding dimensions arising from different pre-trained visual models, making our architecture adaptable to various visual model designs, regardless of whether they are based on ViT or ResNet." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b27", "b0", "b33", "b0", "b38", "b20", "b4", "b17" ], "table_ref": [], "text": "Instruction Tuning with Large-language Models. Large Language Models (LLMs) have recently achieved tremendous success [28,27,1] in generating high-quality text for various tasks such as language translation, summarization, and question answering. Open-source models, e.g., Alpaca [34], have proposed instruction tuning to train models using examples generated from ChatGPT [1], effectively improving the performance of language models. In the visual-language domain, concurrent work to ours, Mini-GPT4 [39] generates a high-quality image-text dataset by prompting ChatGPT with well-designed inputs. In this paper, we focus on visual instruction tuning for MedVQA, which poses unique challenges due to the complexity of medical texts and the variability of medical images.\nMedical Visual Question Answering. The field of MedVQA has gained significant interest in recent years, with a growing number of studies [21]. Despite the increasing attention, building a robust and reliable MedVQA system remains challenging due to the complexity and variability of medical images, as well as the lack of large-scale and diverse MedVQA datasets. Existing publicly available MedVQA datasets have limitations on diversity, or dataset scale, for example, RadVisDial [16] only contains samples on chest x-ray images, VQA-Med [5], VQA-RAD [18], and SLAKE [23] have less than 10K images. To address these limitations, we propose the PMC-VQA dataset that includes 227k image-question pairs with various image modalities and question types." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, this paper addresses the challenge of MedVQA, where even the strongest VQA models trained on natural images yield results that closely resemble random guesses. To overcome this, we propose MedVInT, a generative model tailored to advance this crucial medical task. MedVInT is trained by aligning visual data from a pre-trained vision encoder with language models. Additionally, we present a scalable pipeline for constructing PMC-VQA, a comprehensive MedVQA dataset comprising 227k VQA pairs across 149k images, spanning diverse modalities and diseases. Our proposed model delivers state-of-the-art performance on existing MedVQA datasets, providing a new and reliable benchmark for evaluating different methods in this field. " }, { "figure_ref": [], "heading": "B Implementation Details", "publication_ref": [], "table_ref": [], "text": "Our models are trained using the AdamW optimizer [24] with a learning rate 2e-5. The max context length is set as 512, and the batch size is 128. To improve the training speed of our models, we adopt the Deepspeed acceleration strategy, together with Automatic Mixed Precision (AMP) and gradient checkpointing [9]. All models are implemented in PyTorch and trained on NVIDIA A100 GPU with 80 GB memory" }, { "figure_ref": [], "heading": "C Social Impact", "publication_ref": [], "table_ref": [], "text": "In an era where the digitization of healthcare is rapidly advancing, and medical data is proliferating, multimodal tools such as Medical Visual Question Answering (MedVQA) present significant potential to revolutionize patient care, empower clinicians, and bolster research. Our contribution in this field is twofold: First, we introduce a scalable pipeline for the creation of a MedVQA dataset. This scalability ensures a continuous evolution and expansion of the dataset, maintaining its relevance in the everchanging landscape of healthcare. Second, we present the PMC-VQA dataset, crafted to overcome the limitations inherent in existing datasets. By encompassing a larger, more diverse selection of medical images, complemented by sophisticated questions and answers, we aim to significantly enhance the reliability and precision of medical multimodal models. This innovation holds the promise of equipping these models with the necessary tools to effectively navigate real-world scenarios." }, { "figure_ref": [], "heading": "D Limitation", "publication_ref": [ "b6" ], "table_ref": [], "text": "The proposed PMC-VQA has several limitations:\nInherent Biases: Despite efforts to construct a comprehensive MedVQA dataset with PMC-VQA, it is important to acknowledge the potential presence of biases in the dataset. Biases might arise from the data collection process, annotation methodology, or underlying distribution of the medical images and questions. Understanding and addressing these biases is crucial for ensuring fair and unbiased performance evaluation.\nPotential Annotation Biases: Despite efforts to ensure quality and accuracy during the annotation process of PMC-VQA-test-clean, the dataset may still be susceptible to annotation biases. The subjective nature of question-answer pairs and the involvement of human annotators introduces the possibility of inconsistencies or subjective interpretations, which could impact the dataset's reliability.\nLacking Comprehensive Evaluation Metrics: Although both the ACC score and Bleu score are utilized in our benchmark for assessing open-ended blanking results, these two metrics fail to capture the fluency of the generated sentence since they measure string similarity irrespective of word order. As exhibited in the third case of Fig. 5, the encoder-based model significantly underperforms compared to the decoder-based model in this regard, a fact not reflected in the quantitative results. Indeed, finding an objective way to evaluate generative results comprehensively poses a significant challenge in the entire generative model community [7]. To address this issue, we plan to explore more evaluation metrics in our benchmark in future work.\nNeed for Continual Dataset Expansion and Updates: The medical field is dynamic, with ongoing advancements and new findings. To ensure the dataset's relevance and coverage of emerging medical knowledge, continual expansion and updates to the PMC-VQA dataset are necessary." }, { "figure_ref": [], "heading": "A PMC-VQA Dataset A.1 Examples", "publication_ref": [], "table_ref": [], "text": "In order to provide a more comprehensive understanding of the dataset, we offer additional examples illustrated in Fig. 5. This figure showcases random instances of the original image and corresponding captions, along with multiple-choice questions generated from them. Additionally, we present the predictions of MedVInT-TE and MedVInT-TD models, with PMC-CLIP and PMC-LLAMA as their vision and language backbones. " } ]
In this paper, we focus on the problem of Medical Visual Question Answering (MedVQA), which is crucial in efficiently interpreting medical images with vital clinic-relevant information. Firstly, we reframe the problem of MedVQA as a generation task that naturally follows the human-machine interaction, we propose a generative-based model for medical visual understanding by aligning visual information from a pre-trained vision encoder with a large language model. Secondly, we establish a scalable pipeline to construct a large-scale medical visual question-answering dataset, named PMC-VQA, which contains 227k VQA pairs of 149k images that cover various modalities or diseases. Thirdly, we pre-train our proposed model on PMC-VQA and then fine-tune it on multiple public benchmarks, e.g., VQA-RAD and SLAKE, outperforming existing work by a large margin. Additionally, we propose a test set that has undergone manual verification, which is significantly more challenging, even the best models struggle to solve.
PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering
[ { "figure_caption": "Figure 2 :2Figure 2: (a) The proposed architecture of MedVInt, mainly consists of three components: a visual encoder to extract visual features, a language encoder to encode textual context, and a multimodal decoder to generate the answer; (b) The proposed question-answer pairs generation pipeline.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Question:Figure 3: Several examples of challenging questions and answers along with their respective images.To answer questions related to these images, the network must acquire sufficient medical knowledge, for example, for the first two images, it is essential to recognize the anatomy structure and modalities; for the third image, recognizing the X-ray image pattern of pathologies is necessary; for the final two images, apart from the basic biomedical knowledge, the model is also required to discern colors, differentiate subfigures, and perform Optical Character Recognition (OCR).", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Question distribution of the training set by their first four words. From left to right are all questions, questions started with \"What\" and questions started with \"Which\". The ordering of the words starts towards the center and radiates outwards.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Answer distribution of training set.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Percentage of questions and answers with different word lengths.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Comparison of ACC to SOTA approaches on VQA-RAD and SLAKE. We use the blank model for evaluation. Pre-training data indicates whether the model is pre-trained on the medical multi-modal dataset before training on the target dataset. The best result is in red, the second-best result is in blue. \"Overal\" refers to the micro-average ACC of all the Open and Close questions.", "figure_data": "MethodPre-training DataVQA-RAD Open Close Overall Open Close Overall SLAKEMEVF-BAN [25] -49.277.266.177.879.878.6CPRD-BAN [22] -52.577.967.879.583.481.1M3AE [6]ROCO [29], MedICaT [33]67.283.577.080.387.883.3PMC-CLIP [20]PMC-OA [20]67.084.077.681.988.084.3MedVInT-TE-S-53.676.567.484.085.184.4MedVInT-TD-S-55.380.570.579.785.181.8MedVInT-TEPMC-VQA69.384.278.288.287.788.0MedVInT-TDPMC-VQA73.786.881.684.586.385.2", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of baseline models using different pre-trained models on both open-ended and multiple-choice tasks. We reported the results on PMC-VQA-test / PMC-VQA-test-clean. \"Scratch\" means to train the vision model from scratch with the same architecture as \"PMC-CLIP\".", "figure_data": "MethodLanguage BackboneVision BackboneBlanking ACC Bleu-1Choice ACCZero-shotPMC-CLIP [20]PMC-CLIP [20]PMC-CLIP [20]--24.0 / 24.7BLIP-2 [19]OPT-2.7B [38]CLIP [30]22.5 / 21.8 5.2 / 7.6 24.6 / 24.3Open-Flamingo [4]LLaMA[35]CLIP [30]26.1 / 26.5 4.1 / 4.1 25.0 / 26.4Trained on PMC-VQALLaMA [35]LLaMA [35]-26.1 / 27.2 14.2 / 14.6 30.6 / 30.8Scratch33.7 / 34.2 20.4 / 20.9 34.4 / 34.9PubMedBERT [11]CLIP [30]33.7 / 34.4 20.4 / 20.8 34.5 / 34.3PMC-CLIP [20] 35.2 / 36.4 22.0 / 23.2 37.1 / 37.6MedVInT-TE-MLPLLaMA-ENC [35]Scratch CLIP [30]32.5 / 32.5 15.3 / 15.9 35.2 / 35.1 32.3 / 33.4 15.6 / 15.1 35.3 / 36.1PMC-CLIP [20] 35.4 / 36.8 18.2 / 18.4 36.9 / 37.1Scratch32.6 / 35.0 16.2 / 17.0 37.0 / 38.0PMC-LLaMA-ENC [37]CLIP [30]33.0 / 34.4 16.6 / 16.5 37.1 / 38.5PMC-CLIP [20] 34.8 / 35.3 18.1 / 18.6 38.2 / 39.2Scratch34.1 / 36.2 21.0 / 21.9 39.8 / 40.6PubMedBERT [11]CLIP [30]33.9 / 34.6 20.6 / 21.8 39.9 / 40.9PMC-CLIP [20] 33.7 / 35.4 20.3 / 21.2 40.2 / 40.9MedVInT-TE-TransformerLLaMA-ENC [35]Scratch CLIP [30]32.0 / 33.5 15.1 / 15.3 38.4 / 39.7 32.3 / 34.3 15.5 / 15.7 38.4 / 38.7PMC-CLIP [20] 35.9 / 37.1 19.0 / 19.3 38.9 / 39.4Scratch33.2 / 34.7 16.6 / 16.5 38.1 /39.8PMC-LLaMA-ENC [37]CLIP [30]33.6 / 35.1 16.7 / 17.2 38.7 / 38.9PMC-CLIP [20] 35.5 / 36.0 18.4 /18.6 38.2 / 37.7Scratch28.1 / 30.6 16.5 / 16.9 35.8 / 37.4LLaMA[35]CLIP [30]30.2 / 32.7 18.6 / 18.5 35.8 / 37.1MedVInT-TD-MLPPMC-CLIP [20] 31.3 / 32.6 19.5 / 19.8 38.4 / 41.0Scratch28.3 / 30.6 16.4 / 17.3 35.8 / 37.0PMC-LLaMA [37]CLIP [30]31.4 / 31.8 19.2 / 19.5 36.2 / 37.9PMC-CLIP [20] 32.1 / 31.7 19.7 / 20.2 38.4 / 42.3Scratch29.1 / 30.2 17.4 / 18.0 31.1 / 37.9LLaMA[35]CLIP [30]31.3 / 32.2 19.5 / 20.0 38.2 / 38.3MedVInT-TD-TransformerPMC-CLIP [20] 31.9 / 33.4 20.0 / 21.3 37.3 / 39.5Scratch28.6 / 29.8 16.8 / 17.4 36.8 / 36.9PMC-LLaMA [37]CLIP [30]31.4 / 32.6 19.5 / 20.4 36.8 / 36.9PMC-CLIP [20] 32.7 / 33.6 20.3 / 21.5 39.4 / 40.3", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" } ]
Xiaoman Zhang; Chaoyi Wu; Ziheng Zhao; Weixiong Lin; Ya Zhang; Yanfeng Wang; Weidi Xie
[ { "authors": " Openai", "journal": "", "ref_id": "b0", "title": "introducing chatgpt", "year": null }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Michela Antonelli; Annika Reinke; Spyridon Bakas; Keyvan Farahani; Annette Kopp-Schneider; Bennett A Landman; Geert Litjens; Bjoern Menze; Olaf Ronneberger; Ronald M Summers", "journal": "Nature Communications", "ref_id": "b2", "title": "The medical segmentation decathlon", "year": "2022" }, { "authors": "Anas Awadalla; Irena Gao; Joshua Gardner; Jack Hessel; Yusuf Hanafy; Wanrong Zhu; Yonatan Kalyani Marathe; Samir Bitton; Jenia Gadre; Jitsev", "journal": "", "ref_id": "b3", "title": "", "year": "2023" }, { "authors": "Asma Ben Abacha; Mourad Sarrouti; Dina Demner-Fushman; Sadid A Hasan; Henning Müller", "journal": "", "ref_id": "b4", "title": "Overview of the vqa-med task at imageclef 2021: Visual question answering and generation in the medical domain", "year": "2021-09" }, { "authors": "Zhihong Chen; Yuhao Du; Jinpeng Hu; Yang Liu; Guanbin Li; Xiang Wan; Tsung-Hui Chang", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b5", "title": "Multi-modal masked autoencoders for medical vision-and-language pre-training", "year": "2022" }, { "authors": "Wei-Lin Chiang; Zhuohan Li; Zi Lin; Ying Sheng; Zhanghao Wu; Hao Zhang; Lianmin Zheng; Siyuan Zhuang; Yonghao Zhuang; Joseph E Gonzalez; Ion Stoica; Eric P Xing", "journal": "", "ref_id": "b6", "title": "Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality", "year": "2023-03" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b7", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Jianwei Feng; Dong Huang", "journal": "", "ref_id": "b8", "title": "Optimal gradient checkpoint search for arbitrary computation graphs", "year": "2021" }, { "authors": "Leo Gao; Stella Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima; Shawn Presser; Connor Leahy", "journal": "", "ref_id": "b9", "title": "The Pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "Yu Gu; Robert Tinn; Hao Cheng; Michael Lucas; Naoto Usuyama; Xiaodong Liu; Tristan Naumann; Jianfeng Gao; Hoifung Poon", "journal": "ACM Transactions on Computing for Healthcare (HEALTH)", "ref_id": "b10", "title": "Domain-specific language model pretraining for biomedical natural language processing", "year": "2021" }, { "authors": "Xuehai He; Yichen Zhang; Luntian Mou; Eric Xing; Pengtao Xie", "journal": "", "ref_id": "b11", "title": "Towards visual question answering on pathology images", "year": "2020" }, { "authors": "Di Jin; Eileen Pan; Nassim Oufattole; Wei-Hung Weng; Hanyi Fang; Peter Szolovits", "journal": "Applied Sciences", "ref_id": "b12", "title": "What disease does this patient have? a large-scale open domain question answering dataset from medical exams", "year": "2021" }, { "authors": "Kristopher N Jones; Dwain E Woode; Kristina Panizzi; Peter G Anderson", "journal": "American Medical Informatics Association", "ref_id": "b13", "title": "Peir digital library: Online resources and authoring system", "year": "2001" }, { "authors": "N Emre Kavur; Mustafa Sinem Gezer; Sinem Barış; Pierre-Henri Aslan; Vladimir Conze; Groza; Duy Duc; Soumick Pham; Philipp Chatterjee; Savaş Ernst; Özkan", "journal": "Medical Image Analysis", "ref_id": "b14", "title": "Chaos challengecombined (ct-mr) healthy abdominal organ segmentation", "year": "2021" }, { "authors": "Olga Kovaleva; Chaitanya Shivade; Satyananda Kashyap; Karina Kanjaria; Joy Wu; Deddeh Ballah; Adam Coy; Alexandros Karargyris; Yufan Guo; David Beymer Beymer", "journal": "", "ref_id": "b15", "title": "Towards visual dialog for radiology", "year": "2020" }, { "authors": "Tiffany H Kung; Morgan Cheatham; Arielle Medenilla; Czarina Sillos; Lorie De Leon; Camille Elepaño; Maria Madriaga; Rimel Aggabao; Giezel Diaz-Candido; James Maningo", "journal": "PLoS digital health", "ref_id": "b16", "title": "Performance of chatgpt on usmle: Potential for ai-assisted medical education using large language models", "year": "2023" }, { "authors": "Jason J Lau; Soumya Gayen; Asma Ben Abacha; Dina Demner-Fushman", "journal": "Scientific data", "ref_id": "b17", "title": "A dataset of clinically generated visual questions and answers about radiology images", "year": "2018" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b18", "title": "Blip-2: Bootstrapping languageimage pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Weixiong Lin; Ziheng Zhao; Xiaoman Zhang; Chaoyi Wu; Ya Zhang; Yanfeng Wang; Weidi Xie", "journal": "", "ref_id": "b19", "title": "Pmc-clip: Contrastive language-image pre-training using biomedical documents", "year": "2009" }, { "authors": "Zhihong Lin; Donghao Zhang; Qingyi Tac; Danli Shi; Gholamreza Haffari; Qi Wu; Mingguang He; Zongyuan Ge", "journal": "", "ref_id": "b20", "title": "Medical visual question answering: A survey", "year": "2022" }, { "authors": "Bo Liu; Li-Ming Zhan; Xiao-Ming Wu", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b21", "title": "Contrastive pre-training and representation distillation for medical visual question answering based on radiology images", "year": "2021" }, { "authors": "Bo Liu; Li-Ming Zhan; Li Xu; Lin Ma; Yan Yang; Xiao-Ming Wu", "journal": "IEEE", "ref_id": "b22", "title": "Slake: A semanticallylabeled knowledge-enhanced dataset for medical visual question answering", "year": "2021" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b23", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Thanh-Toan Binh D Nguyen; Do; Tuong Binh X Nguyen; Erman Do; Quang D Tjiputra; Tran", "journal": "Medical Image Computing and Computer Assisted Intervention", "ref_id": "b24", "title": "Overcoming data limitation in medical visual question answering", "year": "2019" }, { "authors": "Harsha Nori; Nicholas King; Scott Mayer Mckinney; Dean Carignan; Eric Horvitz", "journal": "", "ref_id": "b25", "title": "Capabilities of gpt-4 on medical challenge problems", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b26", "title": "", "year": "2023" }, { "authors": "Long Ouyang; Jeffrey Wu; Xu Jiang; Diogo Almeida; Carroll Wainwright; Pamela Mishkin; Chong Zhang; Sandhini Agarwal; Katarina Slama; Alex Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "Obioma Pelka; Sven Koitka; Johannes Rückert; Felix Nensa; Christoph M Friedrich", "journal": "Springer", "ref_id": "b28", "title": "Radiology objects in context (roco): a multimodal image dataset", "year": "2018" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b29", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "J Richard; Roberts", "journal": "National Acad Sciences", "ref_id": "b30", "title": "Pubmed central: The genbank of the published literature", "year": "2001" }, { "authors": "Karan Singhal; Shekoofeh Azizi; Tao Tu; Sara Mahdavi; Jason Wei; Hyung Won Chung; Nathan Scales; Ajay Tanwani; Heather Cole-Lewis; Stephen Pfohl", "journal": "", "ref_id": "b31", "title": "Large language models encode clinical knowledge", "year": "2022" }, { "authors": "Sanjay Subramanian", "journal": "", "ref_id": "b32", "title": "Medicat: A dataset of medical images, captions, and textual references", "year": "2020" }, { "authors": "Rohan Taori; Ishaan Gulrajani; Tianyi Zhang; Yann Dubois; Xuechen Li; Carlos Guestrin; Percy Liang; Tatsunori B Hashimoto", "journal": "", "ref_id": "b33", "title": "Stanford alpaca: An instruction-following llama model", "year": "2023" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b34", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Xiaosong Wang; Yifan Peng; Le Lu; Zhiyong Lu; Mohammadhadi Bagheri; Ronald M Summers", "journal": "", "ref_id": "b35", "title": "Chestx-ray8: Hospital-scale chest x-ray database and benchmarks on weaklysupervised classification and localization of common thorax diseases", "year": "2017" }, { "authors": "Chaoyi Wu; Xiaoman Zhang; Ya Zhang; Yanfeng Wang; Weidi Xie", "journal": "", "ref_id": "b36", "title": "Pmc-llama: Further finetuning llama on medical papers", "year": "2023" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin; Todor Mihaylov; Myle Ott; Sam Shleifer; Kurt Shuster; Daniel Simig; Punit Singh Koura; Anjali Sridhar; Tianlu Wang; Luke Zettlemoyer", "journal": "", "ref_id": "b37", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Deyao Zhu; Jun Chen; Xiaoqian Shen; Xiang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b38", "title": "Minigpt-4: Enhancing vision-language understanding with advanced large language models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 174.14, 334.07, 330.53, 9.81 ], "formula_id": "formula_0", "formula_text": "âi = Φ MedVQA (I i , q i ; Θ) = Φ dec (Φ vis (I i ; θ vis ), Φ text (q i ; θ text ); θ dec )(1)" }, { "formula_coordinates": [ 3, 221.18, 511.33, 58.81, 30.2 ], "formula_id": "formula_1", "formula_text": "L(Θ) = - T t=1" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b38", "b36", "b23" ], "table_ref": [], "text": "3D garment animation has been an active and important topic in computer graphics and machine learning, due to its great potential in various downstream tasks including virtual reality, virtual try-on, gaming and film production. However, realistic 3D garment animation remains an open research problem due to the intrinsic challenge of modeling garments dynamics.\nSpecifically, the dynamics of garments are jointly affected by both internal and external driving factors. For internal factors, while garments vary in topologies and materials, different topologies and materials result in drastically different dynamics. Moreover, in practice, humans usually wear multiple garments in a layered manner, and such multi-layered garments further complicate the problem. For example, the rigid outer layer of a jacket can press against a softer inner dress, while the inner layer of a rigid tshirt tries to maintain its shape against outer softer clothing. As for external factors, in addition to the movement of human body, gravity, wind and friction also significantly influence the dynamics of garments in different ways. Given the complexity of 3D garment animation, previous approaches [39,37,20,24] tend to simplify the problem, considering only single-layered garments with the movement of human body being the only external driving factor. Though being effective in such a simplified setting, their applicability in real-life scenarios is significantly reduced. Moreover, they often resort to garment-specific designs, which further limits their generality across garments with different topologies and materials.\nIn this paper, we propose a novel data-driven method, LayersNet, for 3D garment animation, which is inspired by the observation that although different driving factors, garment topologies and materials lead to significantly vary-ing garments behaviors in the macro view, at the micro level the dynamics of particles with same attributes share similarities. Therefore, LayersNet realizes a Transformerbased simulation system that utilizes the properties of rotation invariance and additivity to capture system dynamics via particle-wise interactions, where garments, human body as well as other external factors are all represented by particles, making LayersNet agnostic to specific garment topology, the number of layered garments, and the set of considered external factors. In practice, we also adopt a twolevel structural hierarchy in LayersNet, where garments are made of patches, and patches consist of vertices of a fixed configuration. Patches are thus treated as garments' basic particles, and LayersNet only needs to learn the interactions between patches, resulting in a significant reduction of computational complexity. To further improve the effectiveness of LayersNet, we also propose a novel Rotation Equivalent Transformation to ease the modeling complexity of external factors. Specifically, while the external factors can influence garment particles in diverse directions, the behaviors of interaction forces remain consistent in local canonical spaces, which are under the directions of forces or the normals of obstacles' surfaces. For instance, the wind blows garments along the force directions, while the meshes of human skin consistently push other objects outside of the body. The proposed Rotation Equivalent Transformation thus transforms high-dimensional features to the local canonical space to reduce the redundant rotation information and capture interactions' semantics, followed by transforming features back to the global space for aggregations. In this way, it enables LayersNet to effectively exchange semantics across multiple complicated external factors.\nTo verify the effectiveness of LayersNet in more general cases and bridge the gap between experimental environments and real-world applications, we introduce a new challenging dataset called D-LAYERS, Dynamic muLti-lAYerEd gaRmentS, dataset. The dataset focuses on multilayered garment animation driven by both the human body and wind. Multi-layered garments in D-LAYERS are prepared as combinations of inner and outer clothes, each with different attribute values, such as bend stiffness and frictions. All garments on the same human body interact with each other, constrained by the laws of physics and simultaneously affected by the wind with randomly sampled direction and strength. D-LAYERS contains 4,900 different combinations of multi-layered garments and 700k frames in total, with a maximum sequence length of 600 frames. Experiments on D-LAYERS demonstrate that LayersNet outperforms existing methods and is more generalizable in complex settings.\nOur contributions can be summarized as follows: 1) We propose a Transformer-based simulation method, Layer-sNet, with a novel rotation equivalent transformation for 3D garment animation that uses rotation invariance and additivity of physics systems to uniformly capture and process interactions among garment parts, different garments, as well as garments against driving factors. 2) We further propose D-LAYERS, a large-scale and new dynamic dataset for 3D garment animation. The dataset and code will be made publicly available." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b38", "b36", "b2", "b3", "b34", "b28", "b23", "b18", "b22", "b14", "b30", "b41", "b7", "b11", "b31", "b1", "b8", "b9", "b10", "b16", "b25", "b23", "b28", "b2", "b40", "b42", "b33", "b6", "b25", "b23", "b2", "b4", "b0", "b32", "b37", "b21", "b26", "b39", "b24", "b13", "b27", "b29", "b13", "b27", "b24", "b15", "b29" ], "table_ref": [], "text": "Data-driven Cloth Model. Most existing approaches aim to estimate a function that outputs garment deformations for any input by learning a parametric garment model to deform corresponding mesh templates. This is accomplished by modeling garments as functions of human pose [39], shape [37], pose-and-shape [3,4,35], motions [29], garment type [20, 24,19], and extended anchors beside human joints [23]. These approaches rely heavily on SMPL-based human models and and blend weights to animate garments according to registered templates, limiting generalization due to task-specific design. To handle obstacles with arbitrary topologies, N-Cloth [15] predicts garments deformations given the states of initial garments and target obstacles. Other studies [31,42] generate 3D garments based on UV maps. SMPLicit [8] generates garments by controlling clothes' shapes and styles, but intersection-free reconstruction is not guaranteed.\nIn contrast to existing methods, our LayersNet animates garments by inferring garments' future positions through interactions between garment particles and other driving factors. Since driving factors are also represented by particles, garment animation simulates particle-wise interactions, which is shape-independent and generalizable to unseen scenarios. A concurrent work [12] adopts a Graph Neural Network (GNN)-based simulation network to model garment dynamics, which uses interactions between adjacent vertices and distant vertices as edges, resulting in redundant computational overhead. On the contrary, our proposed LayersNet adopts a Transformer-based network and model garment dynamics with patch-wise interactions, so that the computational complexity is significantly reduced. In addition, the proposed novel rotation equivalent transformation further improves the effectiveness of LayersNet. Rotation Invariant Neural Network. Many existing approaches [32,2,9,10,11,6,17] adopt spherical harmonics to encode higher-order interactions and achieve SE(3)-equivariance. These approaches focus on extracting and propagating rotation-invariant features through different layers. In contrast, while LayersNet is motivated by the rotation equivalent property of physics systems, we aim to rotate high-dimensional features into local canonical space using the mapped rotation matrix from 3D space to eliminate rotation effects and model interactions involving outer forces. We then rotate the learned features back to the Figure 2: (a). Overview of LayersNet. Given driving factors at time t + 1, i.e., the human body model and environmental wind in our study, LayersNet animates target garments at time t and predicts the new states of garments at time t + 1. While all objects are represented by particles. we establish a two-level structural hierarchy for garments, as shown on the top left of the figure, where garments are made of patches given the UV mappings. Then we encode the particles and model the interactions among them by a simulator, which outputs the embeddings for each patch. We apply a decoder to decode the vertices' dynamics at time t + 1 given neighbor patches features. The Rotation Equivalent Transformation (RET) is applied to both simulator and decoder. (b) Key ideas about RET. In high dimensional space, we transform the interactions between garments' and external forces' particles into canonical spaces, which are defined by 3D directions, such as vertex normals, of corresponding external forces', and extract the semantics of interactions, followed by the transformation back to the shared hidden space for aggregation. The high dimensional transformation is calculated by our rotation network, which converts rotation matrix in 3D space to hidden space. shared hidden space for aggregation. 3D Garment Datasets. Existing 3D garment datasets are generated either synthetically [26,24,29,3] or from realworld scans [41,43,20,34,7]. Synthetic datasets such as 3DPeople! [26], TailorNet [24], and Cloth3D [3], mostly contain single-layered 3D garment models, and while some datasets have multiple garments, there are very few overlapping areas among different cloth pieces [5]. Layered-Garment Net [1] proposes a static multi-layered garments dataset in seven static poses for 142 bodies to generate layers of outfits from a single image, but the garments mostly consist of skinning clothes that do not follow physics laws, and interpenetration is solved by simply forcing penetrated vertices out of inner garments. To our knowledge, D-LAYERS is the first dataset to include dynamic multilayered 3D garments. The different layers of garments have distinct attributes and interact with each other, following the laws of physics. Furthermore, we introduce wind as an extra driving factor to animate the garments, adding complexity to their dynamics given similar human movements. Our dataset provides all the necessary 3D information, allowing for easy generalization to other tasks, such as reconstructions from a single image.\nPhysics Simulation by Neural Network. Learning-based methods for physics simulation can be applied to different kinds of representations, e.g., approaches for grid representation [33,38], meshes [22,27,40,25], and particles [14,36,28,30]. Some methods adopt GNN [14,28,25]. Another approach [16] focuses on accelerating gradient computation for collision response, serving as a plug-in for neural networks. A recent method, TIE [30], applies Transformer with modified attention to recover the semantics of interactions. Our LayersNet is inspired by TIE in the notion of modeling particle-wise interactions, thus inheriting the appealing properties of being topology-independent and easy to generalize to unseen scenarios. Different from TIE, we establish a two-level hierarchy structure for garments, which are made of deformable patches. We further propose a rotation equivalent transformation to extract canonical semantics under different local coordinates in high dimensions to cope with complex outer forces." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Figure 2 presents an overview of our proposed method, LayersNet, which aims to animate garments faithfully, regardless of their topology and driving factors. In our case, these factors include rigid human bodies and winds. To achieve this, we introduce a patch-based garment model, which enables us to simulate the garment animation in a particle-wise manner. The main novelty of LayersNet lies in our use of the properties of rotation invariance and additivity of physics systems. Specifically, we propose a Rotation Equivalent Transformation that employs a rotation invariant attention mechanism and a rotation mapping network to enable the communication and aggregation of semantics from different canonical spaces in a unified manner. In the following sections, we describe our particle simulation formulation for garment animation, the patch-based garment model, and the Rotation Equivalent Transformation." }, { "figure_ref": [], "heading": "LayersNet", "publication_ref": [ "b24" ], "table_ref": [], "text": "Problem Formulation. We denote each mesh at time t by\nM t = {V t , E M , E W }, where V t = {x t i , ẋt i , ẍt i } N i=1\nare the vertices' positions, velocities, and accelerations, and E M denote the mesh edges. E W are the world space edges [25], where we dynamically connect node i and node j if |x t i -x t j | < R, excluding node pairs already exist in the mesh. In a particle-based system, each mesh is represented by particles, which correspond to the vertices of the mesh. During simulation, particle i and particle j will interact with each other only if an edge e ij connects them, where e ij ∈ E M ∪E W . The interactions guided by E M enable learning internal dynamics of mesh, while interactions indicated by E W serve to compute external dynamics such as collisions.\nWe adopt abstract particles to represent the garments' attributes and the wind. Specifically, we use a g to denote each garment's attribute, such as friction and stiffness, and w t to denote the wind. Since the wind has constant strength in the whole 3D space, we use the quaternion rotation η t and the strength s t to represent the wind as w t = {η t , s t }. In this way, given the human body and wind at t + 1 as well as their previous h states, we predict the garments' states at time t+1 given the current states at t and corresponding pre-\nvious meshes {M t-1 , • • • , M t-h }.\nIn practice, we choose h = 1 in all experiments. Our approach can be described as:\nV t+1 g = Γ(a g , {M t-i g , M t+1-i b , w t+1-i } h i=0 ),(1)\nwhere M t g and M t+1 b are the meshes of garments and human body, respectively, Γ(•) is the simulator and runs recursively during predictions, and V t+1 g is the garment's new vertices' states at time t + 1. We adopt an encoder to embed the inputs into hidden space, and an decoder to decode the hidden features back to the 3D states. Patch-based Garment Model. Since garments are composed of hundreds and thousands of particles, modeling interactions between densely connected particles inevitably leads to significant computational overhead. To reduce the number of interactions, we establish a two-level structural hierarchy for garments and represent each garment by patches, which consist of vertices of a fixed configuration. Patches are treated as special particles and interact with each other during simulations instead of densely connected vertices. Patch modeling holds several advantages. First, as basic units to represent garments, patches are topology independent. By modeling the dynamics of each patch, our model is more flexible and generalizable to unseen garments. Second, instead of simulating each vertex in a mesh, simulating patches significantly reduces the computational overhead, especially when the mesh is of high fidelity.\nFormally, we find a mapping ρ(•) to map the vertex-based mesh to patch-based representation by:\nP t g = ρ(M t g ),(2)\nwhere P t g = {V t p , E M p , E W p }. The patches' states V t p are the averaged vertices' states within the patches, and E W p are computed given V t p . The mapping ρ(•) is based on the garments' UV maps as shown in Figure 2. In this way, our method can be updated as:\nV t+1 g = Γ(a g , {P t-i g , M t+1-i b , w t+1-i } h i=0 ).\n(3) Rotation Equivalent Transformation. Physics systems used for garment simulations possess two essential properties: rotation invariance and additivity. The rotation equivariance property states that the interactions' effects between objects remain the same regardless of the objects' rotations, while the additivity property implies that the total influence towards a particle equals the summation of each component's influence. By exploiting these two properties, we can segregate the impact of directed forces, such as forces brought by complex surface human bodies and directed wind, into individual interactions, solve them within their canonical space, and then aggregate the results. We assume the z-axis of the canonical space is the direction of human model vertex normal or the wind field, while the remaining two axes can be randomly selected, thanks to rotation invariance. To ensure that our Transformer-based model equivalently pays attention to features under different rotations, we apply decentralization and normalization for attention keys (equation 5), and propose a rotation-invariant attention mechanism\nq i = W q v i , r i = W r v i , s i = W s v i ,(4)\nf i,j = r i + s j -µ ri,sj σ ri,sj ,(5)\nω ij = softmax(q i f i,j ),(6)\nwhere v i is state token, q i is query token, r i is receiver token and s j is sender token, W q , W r , W s are trainable parameters. µ ri,sj = (r i + s j )/2 is the mean vector of r i and s j , while σ ri,sj is the corresponding standard deviation. The choices of µ ri,sj and σ ri,sj ensure Equation 5 is rotation equivariant, which decentralizes the feature vectors and normalizes them by the averaged L2 distance towards the center. The proof can be found in supplementary materials. Equation 5 can be further simplified as:\nf i,j = r i + s j r i -s j .(7)\nTo directly extract rich semantics from high-dimensional spaces for interactions rather than 3D space, and rotate them into potential canonical space, we propose a rotation network to model high-dimensional rotations given the corresponding 3D rotations. Specifically, for each human body vertex v bj , we calculate the rotation matrix R bj ∈ R 3×3 that transforms the 3D world space coordinates into local coordinates, where the z-axis is the normal n bj of v bj . Since the physics system is rotation invariant, we can randomly sample a unit vector orthogonal to n bj as x-axis, and get the y-axis unit vector through cross product. To find the corresponding rotation matrix in the l-th layer with d dimension, we design a rotation network φ l (•) : R 3×3 → R d×d as:\nφ l (R) = W l R R(W l R ) ,(8)\ns.t. W l R (W l R ) = I, (W l R ) W l R = I,(9)\nwhere W l R ∈ R d×3 is the learnable parameter. Equation 9 ensures the rotation matrix in hidden space satisfying the property φ l (R)(φ l (R)) = I. The interactions between ith garment patch and its neighbor human body vertex b j ∈ N b i as well as the rest patches k ∈ N p i at l-th layer can be written as:\nf R i,bj = ψ(φ(R bj )f i,bj ),(10)\nv i = bj ω ibj (φ(R bj )) f R i,bj + k ω ik f i,k ,(11\n)\nwhere v i is the updated state token for i-th patch, and ψ(•) is multi-layer perception in practice. The first term in Equation 11 rotates the interaction features f R i,bj from different canonical space back to the shared hidden space before aggregation. For gravity and wind, the directions of the forces are used to calculate the rotation matrices.\nFinally, to recover the details of k-th vertex, we utilize its neighbor patches p i ∈ N p k and the nearest point on human body indexed by b j for decoding as follows:\nv R pi = φ(R bj )v pi , v R bj = φ(R bj )v bj , (12) α R,t+1 k = 1 N k pi g([R bj ( xt k -xt pi ), v R pi , v R bj ]),(13)\nβ t+1 k = ∆t • (R bj ) α R,t+1 k + βt k ,(14)\nx t+1 k = ∆t • β t+1 k + xt k ,(15)\nwhere we first rotate the patches' features v pi , human body vertex features v bj , and the ground truth relative positions xt k -xt pi at time t before concatenation. We average the output of decoder g(•) as the predicted 3D acceleration α R,t+1 k , which is further transformed back to global 3D coordinates to compute the velocity β t+1 k and position x t+1 k at time t+1. ∆t is the time interval between each frame." }, { "figure_ref": [], "heading": "Training Details", "publication_ref": [], "table_ref": [], "text": "To train our simulation-based model, we first apply a standard mean square error (MSE) loss as:\nL t+1 m, * = 1 N i x t+1 i -xt+1 i 2 2 ,(16)\nwhere\n{x t+1 i } N i=1 , { xt+1 i } N i=1\nare the predictions and ground truths at time t + 1 respectively. We penalize the MSE loss on both garment vertices' positions L t+1 m,g and the center of patches' positions L t+1 m,p together as L t+1 m = L t+1 m,g + L t+1 m,p . We adopt a loss term for the garment vertex normal to maintain the smoothness and consistency as:\nL t+1 n = 1 N v i n t+1 i -nt+1 i 2 2 ,(17)\nwhere n t+1 i and nt+1\ni are the vertex normals for prediction and ground truth, respectively.\nTo further reduce the collision rates between garments and human bodies, as well as between different layers of garments, we adopt collision loss:\nL t+1 c = 1 N c i max d -(x t+1 i -x t+1 a )n t+1 a , 0 2 , (18\n)\nwhere x t+1 a is the nearest anchor point to x t+1 i , N c is the number of collided vertices, and d is the minimum distance of penetration. The collision loss between garments and ground truth human bodies, as well as between predictions of layers of garments, can be denoted as L t+1 c, b and L t+1 c,g respectively. Thus, for predictions at time t + 1, our training loss is written as:\nL t+1 = λ m L t+1 m + λ n L t+1 n + λ b L t+1 c, b + λ g L t+1 c,g .(19)\nDuring training, we randomly rollout T n steps without gradient given inputs at time t-T n , which aims to add noise from the model itself. We only back-propagate gradients from one-step predictions on time t + 1." }, { "figure_ref": [], "heading": "D-LAYERS Dataset", "publication_ref": [ "b23", "b3", "b12", "b17", "b20" ], "table_ref": [], "text": "Most existing datasets are limited to single-layered garments driven solely by human bodies. Different garments, such as the upper T-shirt and lower pants, rarely interact with each other. Consequently, the problem can be easily solved by modeling garments as functions of human bodies with single-layered outfits predictions [24,4]. Collecting a real-world dataset with dynamic multi-layered garments and outer forces is expensive and usually contains noisy artifacts, such as interpenetration [20] between scanned clothes and estimated SMPL-based human bodies , while synthetic data are easier to obtain and can provide more accurate dynamics in most cases, particularly for multi-layered clothes with narrow gaps. With this motivation, we generated D-LAYERS using a simulation engine and Blender1 , making it the first dynamic multi-layered garments dataset that considers the wind factor in addition to human bodies.\nTable 1: We display the influence of multi-layered garments and wind with different combinations for garment animations. We list the components of different splits and models' corresponding Euclidean errors (mm) below. Notice that in our D-LAYERS, all objects are scaled up 10 times than real-world size. We sample four splits from our dataset: inner garments are tight clothes without wind (T); inner garments are tight clothes with strong wind (T+W); inner garments are loose clothes without wind (L); inner garments are loose clothes with strong wind (L+W). The models marked by * are trained and tested on the inner garment only. Notice that MGNet has worse generalization abilities due to garment-specific design. LayersNet achieves superior and robust performance in most cases especially those with multi-layered garments. We construct our dataset by first collecting garment templates from SewPattern [13], which includes various types of garments such as jackets with hoods and dresses with waist belts. We then generate multi-layered combinations of outer and inner-layer clothes. Each multi-layered garment combination is draped onto an SMPL human body model [18], followed by a warm-up simulation in Blender to resolve interpenetrations. Finally, we simulate the dynamics of the garments given human motion sequences from CMU MoCap in AMASS [21] and sampled winds. To preserve high-frequency details in Blender, we scale up the human and garment meshes ten times their real-world size before simulation. Given the availability of the 3D meshes and attributes of garments, as well as the detailed scene settings for each sequence, D-LAYERS offers the potential to extend to other formats of data and support explorations of alternative topics such as optical flow estimations, 3D reconstructions from images, and physics parameter estimations. Supplementary materials provide additional details on the key settings in D-LAYERS. Here, we highlight the two main settings: Multi-layered Garments. Each multi-layered outfit in D-LAYERS consists of an inner and outer outfit with different garment attributes, such as mass, stiffness, and friction, leading to more diverse and flexible dynamics. For example, the outer garments can be softer or more rigid than the inner outfit. The outer outfit in our dataset is either a jacket or a jacket with a hood, providing a clear view of interactions from the inside and outside. Inner outfits refer to whole-body outfits, such as dresses, jumpsuits, and t-shirts with pants or skirts. We generate 4,900 combinations of multi-layered garments, which includes 9,872 different garments in total. The garment templates are of high fidelity, with vertices ranging from 5,000 to more than 15,000 for each garment, enabling us to capture more details in simulation." }, { "figure_ref": [], "heading": "Splits", "publication_ref": [], "table_ref": [], "text": "Wind. Most existing datasets simplify real-life scenarios by driving garment animation solely through human bodies. To enrich the simulation settings and enable researchers to explore garment animation driven by multiple factors, we introduce randomly sampled wind in D-LAYERS. Wind is a common and prominent force field that influences garment dynamics in the real world. To simulate wind in our dataset, we randomly select several intervals of frames in each sequence and apply winds with varying directions and strengths as force fields. The directions and strengths are uniformly sampled from 0 to 400 in Blender. Within each interval, we assume that the wind affects the entire 3D space, with the direction and strength remaining constant." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b3", "b41" ], "table_ref": [], "text": "We implement DeePSD [4] and MGNet [42] as our baselines. MGNet is a standard garment-specific model, while DeePSD models garments as functions of human bodies and achieves state-of-the-art performance in terms of 3D garment animations. DeePSD claims to support the animations of multi-layered garments. We make the following extensions to baselines: 1. We add wind as extra inputs; 2. We add the collision loss between different layers of garments. The second extension only applies to multi-layered clothes setting of DeePSD, and does not apply to MGNet due to its specific design for single-layered clothes. All models are trained with ten epochs. We do not apply any post-processing for both the training and prediction stages. During the evaluation, we calculate the mean of Euclidean errors for each frame, then average the errors across all the frames within each sequence. The final results are the mean of errors from all sequences." }, { "figure_ref": [ "fig_0", "fig_1", "fig_0" ], "heading": "Garment Animations on D-LAYERS", "publication_ref": [ "b2" ], "table_ref": [ "tab_1" ], "text": "Influence of Multi-Layered Garments and Wind. As shown in Table 1, we test models' abilities to animate garments on both simplified settings and general scenarios. The former assumes single-layered garments driven by human bodies, while the latter tests the models with multilayered garments under the influences of both human bod- ies and wind. Specifically, we sample and divide our D-LAYERS into four splits according to the types of garments and the strength of wind as indicated in Table 1. We train and test models with either only inner garments, which is marked by *, or multi-layered garments on these four splits.\nNotice that we group winds with a strength less than 50 as not windy, where the wind has little influence on the garments. Each split contains 36K frames for training, 2K frames for validation, and 2K frames for test. The training set, validation set, and test set are mutually exclusive, thus they differ in human motions, garment topologies and attributes.\nAs shown in Table 1, when DeePSD is trained with only inner garments, it achieves reasonable performance compared with that when it is trained on Cloth3D [3]. MGNet fails in our dataset due to the garment-specific design and low generalization abilities. Our LayersNet has lower errors, especially on splits with loose inner garments (L and L+W), suggesting the effectiveness and higher generalization abilities to animate loose clothes. On the splits with wind (T+W and L+W), DeePSD shows higher errors due to the random wind, suggesting its poor generalization beyond human bodies. Since jumpsuits in splits T and T+W are tight garments, the wind has less influence on them.\nWhen trained on multi-layered garments, DeePSD's Euclidean errors increase, indicating that it struggles with modeling complex, layered clothing. In contrast, Layer-sNet consistently demonstrates superior performance on all splits, handling both inner and outer garments effectively. The Euclidean errors remain similar across different splits, suggesting that our model exhibits greater robustness to varying garment topologies and external factors beyond human bodies. General Garment Animations. As demonstrated in Table 2 and Figure 3, we further animate garments under more general conditions, featuring various combinations of multi- layered garments driven by human bodies and wind. Since DeePSD outperforms MGNet, we primarily compare our LayersNet with DeePSD. For training, we uniformly sample 50K frames from D-LAYERS, along with 6K frames for validation and 6K frames for testing. There is no overlap among these sample sets. All samples include both inner and outer garments, as well as random wind as an external factor.\nAs shown in Table 2, the basic DeePSD without collision loss exhibits high Euclidean errors across all garment types. The intricate dynamics introduced by multi-layered garments and wind disrupt DeePSD, causing convergence difficulties as depicted in Figure 4. As a result, DeePSD fails to accurately predict the garments' lively movements and leads to extensive garment-to-body collisions and garmentto-garment interpenetration. Although DeePSD+, which is fine-tuned with collision loss, attempts to resolve some of the collisions, it performs worse in terms of Euclidean errors. The relatively low collision rates between garment layers stem from the interpenetration-free initialization of garment templates. This feature allows DeePSD to automatically avoid some collisions when using linear blend skinning to deform the templates.\nIn contrast, LayersNet delivers superior performance in terms of Euclidean errors and collision rates, demonstrating the effectiveness of our simulation-based formulation powered by rotation equivalent transformation. Our method also shows outstanding generalization for various garment types. Notably, LayersNet achieves low collision rates and small Euclidean errors without penalizing collisions explicitly, resulting in more accurate outcomes. Since the core concept of simulation involves modeling object interactions, such as energy transitions and collisions, LayersNet can resolve collisions implicitly. By incorporating collision loss, Lay-ersNet further minimizes interpenetration and strikes a balance between Euclidean errors and collision rates. We display the qualitative results of our LayersNet in Figure 3. Additional qualitative comparisons can be found in the supplementary materials. Ablation Study. We investigate the effectiveness of our Rotation Equivalent Transformation (RET) and the impact of various collision losses in Table 3. LayersNet with garmentto-human collision loss attains lower Euclidean errors and reduces collision rates between garments and human bodies. When trained with RET, LayersNet dramatically decreases garment-to-human penetration by over 71%, from 9.51% to 2.72%, while maintaining low Euclidean errors and garment-to-garment collision rates. This suggests that RET effectively eliminates redundant information from different rotations and enhances LayersNet's ability to capture the semantics of complex interactions. Due to its modeling of particle-wise interactions, which implicitly accounts for collisions, LayersNet still achieves relatively low garmentto-garment penetration rates without Lc, g. Although Lc, g slightly increases garment-to-body penetrations, an optimal combination of Lc, g and Lc, b jointly benefits LayersNet , producing accurate predictions with low errors." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce a Transformer-based simulation method, named LayersNet, designed to animate diverse garments represented by patch-wise particles within a twolevel hierarchical structure. The newly proposed Rotation Equivalent Transformation leverages the rotation equivariance and additivity of physics systems, enabling Layer-sNet to effectively generalize across various garment animation scenarios. Moreover, we propose a large-scale, novel garment animation dataset called D-LAYERS, aiming to bridge the gap between experimental environments and realistic situations. D-LAYERS is a dynamic animation dataset governed by physical laws, encompassing 4,900 distinct combinations of multi-layered garments and a total of 700K frames, with sequences extending up to 600 frames in length. As demonstrated by our experiments, LayersNet delivers superior, robust performance, showcasing compelling generalization capabilities." } ]
Figure 1: We propose LayersNet with novel Rotation Equivarlent Transformation to animate garments in simulation manner. Our LayersNet is able to animate multi-layered garments driven by various external forces, such as human bodies and wind as shown in (a). LayersNet is powered by our proposed D-LAYERS, a novel large-scale 3D garment animation dataset involving realistic and challenging scenarios, as shown in (b)-(d).
Towards Multi-Layered 3D Garments Animation
[ { "figure_caption": "Figure 3 :3Figure 3: Qualitative results by our LayersNet+. Sequences in test set are mutually exclusive from training set samples. LayersNet+ is capable of generalizing to unseen scenarios with faithful and realistic rollouts in terms of vivid dynamics constrained by physics laws, low rates of garment-to-body and garment-to-garment interpenetration. (a). LayersNet+ animates a human walking downstairs with garments falling off the shoulders. The predictions of LayersNet+ shows the inertia of the jacket. (b). A human model climbs up and sits down. LayersNet+ rollouts the complex dynamics between dress and outer jacket, which is pushed aside by inner dress driven by human body. (c). A human model walks towards his left and hugs. While LayersNet+ is able to describe the inertia of the jacket which tends to move towards right, the jacket is hindered by the left side of dress and stopped by the left arm.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Samples of qualitative results by DeePSD. DeePSDhas difficulties in capturing the rich dynamics in our D-LAYERS, leading to difficulties in convergence. With the collision loss, DeePSD+, which is mainly finetuned by collision loss on DeePSD, reduces part of interpenetration. However, DeePSD+ is not able to effectively animate multi-layred garments with diverse dynamics and complex topologies. The low rates of garment-to-garment interpenetration result from the interpenetration-free initialized garment templates.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Table 3 :3Ablation studies in terms of euclidean error (mm) and collision rates. We analyze the effectiveness of our Rotation Equivalent Transformation (RET), and the impacts of different collision loss. LayersNet with default loss term L c,b ,Lc,g adopts weights λ b = 1.0, λg = 0.1, while LayersNet with optimal combinations of L * c,b ,L * c,g adopts weight λ b = 1.3. RET enables Lay-ersNet to reduce collisions especially garment-to-body collisions, making a balance between faithful predictions and low error rates. LayersNet Overall L-Collision(%) H-Collision(%) Vanilla 472.8±330.7 3.13±2.22 10.68±4.53 + L c,b 446.3±304.9 3.10±2.16 9.51±4.55 + RET, L c,b 449.2±315.1 5.21±3.34 2.72±1.60 + RET, L c,b ,Lc,g 466.3±333.3 2.62±2.28 4.28±2.09 + RET, L * c,b ,L * c,g 467.2±330.7 3.77±2.60 2.16±1.46", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Euclidean error (mm) on sampled D-LAYERS with maximum sequence length of 35 frames. The collision rates between different layers of garments are shown under L-Collision, while the collision rates between garments and human bodies are shown under H-Collision. Models trained with collision loss L c,b , Lc,g are marked by +. Our LayersNet achieves superior results in all cases.", "figure_data": "MethodsJacketJacket + HoodDressJumpsuitSkirtDeePSD1385.3±886.6 1087.8±564.5 736.8±466.6535.2±224.71107.3±769.2DeePSD+1830.1±803.3 1566.0±527.1 1333.0±349.2 1219.0±186.81194.7±311.2LayersNet(Ours)571.9±451.9493.9±354.2397.2±342.2264.0±200.2301.3±79.3LayersNet+(Ours) 567.3±425.5491.4±361.3379.1±299.7260.1±222.2299.5±92.3MethodsPantsT-shirtOverallL-Collision(%) H-Collision(%)DeePSD498.8±109.5613.1±338.2 1049.8±549.710.11±5.3123.89±7.89DeePSD+1185.7±213.3 1202.9±233.6 1563.4±486.88.78±5.1219.47±6.38LayersNet(Ours)234.4±206.3273.3±169.0472.8±343.53.13±2.2210.68±4.53LayersNet+(Ours) 200.9±140.1267.8±189.6467.2±330.73.77±2.602.16±1.46", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Yidi Shao; Chen Change Loy; Bo Dai
[ { "authors": "Alakh Aggarwal; Jikai Wang; Steven Hogue; Saifeng Ni; Madhukar Budagavi; Xiaohu Guo", "journal": "", "ref_id": "b0", "title": "Layered-garment net: Generating multiple implicit garment layers from a single image", "year": "2022" }, { "authors": "Brandon M Anderson; Truong-Son Hy; Risi Kondor", "journal": "NeurIPS", "ref_id": "b1", "title": "Cormorant: Covariant molecular neural networks", "year": "2019" }, { "authors": "Hugo Bertiche; Meysam Madadi; Sergio Escalera", "journal": "", "ref_id": "b2", "title": "CLOTH3D: Clothed 3D humans", "year": "2020" }, { "authors": "Hugo Bertiche; Meysam Madadi; Emilio Tylson; Sergio Escalera", "journal": "", "ref_id": "b3", "title": "DeePSD: Automatic deep skinning and pose space deformation for 3D garment animation", "year": "2021" }, { "authors": "Bharat Lal Bhatnagar; Garvita Tiwari; Christian Theobalt; Gerard Pons-Moll", "journal": "", "ref_id": "b4", "title": "Multi-garment net: Learning to dress 3D people from images", "year": "2019" }, { "authors": "Johannes Brandstetter; Rob Hesselink; Elise Van Der Pol; Erik J Bekkers; Max Welling", "journal": "ICLR", "ref_id": "b5", "title": "Geometric and physical quantities improve E(3) equivariant message passing", "year": "2022" }, { "authors": "Zhongang Cai; Daxuan Ren; Ailing Zeng; Zhengyu Lin; Tao Yu; Wenjia Wang; Xiangyu Fan; Yang Gao; Yifan Yu; Liang Pan; Fangzhou Hong; Mingyuan Zhang; Chen Change Loy; Lei Yang; Ziwei Liu", "journal": "CoRR", "ref_id": "b6", "title": "HuMMan: Multi-modal 4D human dataset for versatile sensing and modeling", "year": "2022" }, { "authors": "Enric Corona; Albert Pumarola; Guillem Alenyà; Gerard Pons-Moll; Francesc Moreno-Noguer", "journal": "", "ref_id": "b7", "title": "SMPLicit: Topology-aware generative model for clothed people", "year": "2021" }, { "authors": "Fabian Fuchs; Daniel E Worrall; Volker Fischer; Max Welling", "journal": "NeurIPS", "ref_id": "b8", "title": "SE(3)-Transformers: 3D roto-translation equivariant attention networks", "year": "2020" }, { "authors": "Johannes Gasteiger; Florian Becker; Stephan Günnemann", "journal": "NeurIPS", "ref_id": "b9", "title": "GemNet: Universal directional graph neural networks for molecules", "year": "2021" }, { "authors": "Johannes Gasteiger; Chandan Yeshwanth; Stephan Günnemann", "journal": "NeurIPS", "ref_id": "b10", "title": "Directional message passing on molecular graphs via synthetic coordinates", "year": "2021" }, { "authors": "Artur Grigorev; Bernhard Thomaszewski; Michael J Black; Otmar Hilliges", "journal": "CoRR", "ref_id": "b11", "title": "HOOD: Hierarchical graphs for generalized modelling of clothing dynamics", "year": "2022" }, { "authors": "Maria Korosteleva; Sung-Hee Lee", "journal": "", "ref_id": "b12", "title": "Generating datasets of 3D garments with sewing patterns", "year": "2021" }, { "authors": "Yunzhu Li; Jiajun Wu; Russ Tedrake; Joshua B Tenenbaum; Antonio Torralba", "journal": "ICLR", "ref_id": "b13", "title": "Learning particle dynamics for manipulating rigid bodies, deformable objects, and fluids", "year": "2019" }, { "authors": "Y D Li; M Tang; Y Yang; Z Huang; R F Tong; S C Yang; Y Li; Dinesh Manocha", "journal": "Comput. Graph. Forum", "ref_id": "b14", "title": "N-Cloth: Predicting 3D cloth deformation with mesh-based networks", "year": "2022" }, { "authors": "Junbang Liang; Ming C Lin; Vladlen Koltun", "journal": "NeurIPS", "ref_id": "b15", "title": "Differentiable cloth simulation for inverse problems", "year": "2019" }, { "authors": "Yi Liu; Limei Wang; Meng Liu; Yuchao Lin; Xuan Zhang; Bora Oztekin; Shuiwang Ji", "journal": "ICLR", "ref_id": "b16", "title": "Spherical message passing for 3D molecular graphs", "year": "2022" }, { "authors": "Matthew Loper; Naureen Mahmood; Javier Romero; Gerard Pons-Moll; Michael J Black", "journal": "ACM Trans. Graph", "ref_id": "b17", "title": "SMPL: a skinned multiperson linear model", "year": "2015" }, { "authors": "Qianli Ma; Shunsuke Saito; Jinlong Yang; Siyu Tang; Michael J Black", "journal": "", "ref_id": "b18", "title": "SCALE: Modeling clothed humans with a surface codec of articulated local elements", "year": "2021" }, { "authors": "Qianli Ma; Jinlong Yang; Anurag Ranjan; Sergi Pujades; Gerard Pons-Moll; Siyu Tang; Michael J ", "journal": "", "ref_id": "b19", "title": "Black. Learning to dress 3D people in generative clothing", "year": "2020" }, { "authors": "Naureen Mahmood; Nima Ghorbani; F Nikolaus; Gerard Troje; Michael J Pons-Moll; Black", "journal": "", "ref_id": "b20", "title": "AMASS: Archive of motion capture as surface shapes", "year": "2019" }, { "authors": "Charlie Nash; Yaroslav Ganin; S M Ali Eslami; Peter W Battaglia", "journal": "", "ref_id": "b21", "title": "PolyGen: An autoregressive generative model of 3D meshes", "year": "2020" }, { "authors": "Xiaoyu Pan; Jiaming Mai; Xinwei Jiang; Dongxue Tang; Jingxiang Li; Tianjia Shao; Kun Zhou; Xiaogang Jin; Dinesh Manocha", "journal": "", "ref_id": "b22", "title": "Predicting loose-fitting garment deformations using bone-driven motion networks", "year": "2022" }, { "authors": "Chaitanya Patel; Zhouyingcheng Liao; Gerard Pons-Moll", "journal": "", "ref_id": "b23", "title": "TailorNet: Predicting clothing in 3D as a function of human pose, shape and garment style", "year": "2020" }, { "authors": "Tobias Pfaff; Meire Fortunato; Alvaro Sanchez-Gonzalez; Peter W Battaglia", "journal": "ICLR", "ref_id": "b24", "title": "Learning mesh-based simulation with graph networks", "year": "2021" }, { "authors": "Albert Pumarola; Jordi Sanchez; P T Gary; Alberto Choi; Francesc Sanfeliu; Moreno", "journal": "", "ref_id": "b25", "title": "3DPeople: Modeling the geometry of dressed humans", "year": "2019" }, { "authors": "Yi-Ling Qiao; Junbang Liang; Vladlen Koltun; Ming C Lin", "journal": "", "ref_id": "b26", "title": "Scalable differentiable physics for learning and control", "year": "2020" }, { "authors": "Alvaro Sanchez-Gonzalez; Jonathan Godwin; Tobias Pfaff; Rex Ying; Jure Leskovec; Peter W Battaglia", "journal": "", "ref_id": "b27", "title": "Learning to simulate complex physics with graph networks", "year": "2020" }, { "authors": "Igor Santesteban; Nils Thuerey; Miguel A Otaduy; Dan Casas", "journal": "", "ref_id": "b28", "title": "Self-supervised collision handling via generative 3D garment models for virtual try-on", "year": "2021" }, { "authors": "Yidi Shao; Chen Change Loy; Bo Dai", "journal": "", "ref_id": "b29", "title": "Transformer with implicit edges for particle-based physics simulation", "year": "2022" }, { "authors": "Yu Shen; Junbang Liang; Ming C Lin", "journal": "", "ref_id": "b30", "title": "GAN-based garment generation using sewing pattern images", "year": "2020" }, { "authors": "Nathaniel Thomas; Tess E Smidt; Steven Kearnes; Lusann Yang; Li Li; Kai Kohlhoff; Patrick Riley", "journal": "CoRR", "ref_id": "b31", "title": "Tensor field networks: Rotation-and translation-equivariant neural networks for 3D point clouds", "year": "2018" }, { "authors": "Nils Thuerey; Konstantin Weißenow; Lukas Prantl; Xiangyu Hu", "journal": "AIAA Journal", "ref_id": "b32", "title": "Deep learning methods for reynolds-averaged navier-stokes simulations of airfoil flows", "year": "2020" }, { "authors": "Garvita Tiwari; Bharat Lal Bhatnagar; Tony Tung; Gerard Pons-Moll", "journal": "", "ref_id": "b33", "title": "SIZER: A dataset and model for parsing 3D clothing and learning size sensitive 3D clothing", "year": "2020" }, { "authors": "Lokender Tiwari; Brojeshwar Bhowmick", "journal": "ICCVW", "ref_id": "b34", "title": "DeepDraper: Fast and accurate 3D garment draping over a 3D human body", "year": "2021" }, { "authors": "Benjamin Ummenhofer; Lukas Prantl; Nils Thuerey; Vladlen Koltun", "journal": "ICLR", "ref_id": "b35", "title": "Lagrangian fluid simulation with continuous convolutions", "year": "2020" }, { "authors": "Raquel Vidaurre; Igor Santesteban; Elena Garces; Dan Casas", "journal": "Comput. Graph. Forum", "ref_id": "b36", "title": "Fully convolutional graph neural networks for parametric virtual try-on", "year": "2020" }, { "authors": "Rui Wang; Karthik Kashinath; Mustafa Mustafa; Adrian Albert; Rose Yu", "journal": "", "ref_id": "b37", "title": "Towards physics-informed deep learning for turbulent flow prediction", "year": "2020" }, { "authors": "Y Tuanfeng; Tianjia Wang; Kai Shao; Niloy J Fu; Mitra", "journal": "ACM Trans. Graph", "ref_id": "b38", "title": "Learning an intrinsic garment space for interactive authoring of garment animation", "year": "2019" }, { "authors": "Zehang Weng; Fabian Paus; Anastasiia Varava; Hang Yin; Tamim Asfour; Danica Kragic", "journal": "CoRR", "ref_id": "b39", "title": "Graph-based taskspecific prediction models for interactions between deformable and rigid objects", "year": "2021" }, { "authors": "Chao Zhang; Sergi Pujades; Michael J Black; Gerard Pons-Moll", "journal": "", "ref_id": "b40", "title": "Detailed, accurate, human shape estimation from clothed 3D scan sequences", "year": "2017" }, { "authors": "Meng Zhang; Duygu Ceylan; Niloy J Mitra", "journal": "CoRR", "ref_id": "b41", "title": "Motion guided deep dynamic 3D garments", "year": "2022" }, { "authors": "Zerong Zheng; Tao Yu; Yixuan Wei; Qionghai Dai; Yebin Liu", "journal": "", "ref_id": "b42", "title": "DeepHuman: 3D human reconstruction from a single image", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 63.42, 105.31, 222.44, 12.32 ], "formula_id": "formula_0", "formula_text": "M t = {V t , E M , E W }, where V t = {x t i , ẋt i , ẍt i } N i=1" }, { "formula_coordinates": [ 4, 50.11, 358.21, 142.32, 10.53 ], "formula_id": "formula_1", "formula_text": "vious meshes {M t-1 , • • • , M t-h }." }, { "formula_coordinates": [ 4, 69, 406.75, 217.37, 13.71 ], "formula_id": "formula_2", "formula_text": "V t+1 g = Γ(a g , {M t-i g , M t+1-i b , w t+1-i } h i=0 ),(1)" }, { "formula_coordinates": [ 4, 392.52, 92.54, 152.59, 12.69 ], "formula_id": "formula_3", "formula_text": "P t g = ρ(M t g ),(2)" }, { "formula_coordinates": [ 4, 329.23, 181.28, 197.67, 13.71 ], "formula_id": "formula_4", "formula_text": "V t+1 g = Γ(a g , {P t-i g , M t+1-i b , w t+1-i } h i=0 )." }, { "formula_coordinates": [ 4, 329.41, 449.85, 215.7, 9.68 ], "formula_id": "formula_5", "formula_text": "q i = W q v i , r i = W r v i , s i = W s v i ,(4)" }, { "formula_coordinates": [ 4, 323.09, 462.49, 222.02, 23.86 ], "formula_id": "formula_6", "formula_text": "f i,j = r i + s j -µ ri,sj σ ri,sj ,(5)" }, { "formula_coordinates": [ 4, 324.91, 493.07, 220.2, 10.65 ], "formula_id": "formula_7", "formula_text": "ω ij = softmax(q i f i,j ),(6)" }, { "formula_coordinates": [ 4, 382.94, 624.63, 162.17, 23.25 ], "formula_id": "formula_8", "formula_text": "f i,j = r i + s j r i -s j .(7)" }, { "formula_coordinates": [ 5, 113.82, 177.07, 172.54, 12.69 ], "formula_id": "formula_9", "formula_text": "φ l (R) = W l R R(W l R ) ,(8)" }, { "formula_coordinates": [ 5, 84.06, 197.36, 202.31, 12.69 ], "formula_id": "formula_10", "formula_text": "s.t. W l R (W l R ) = I, (W l R ) W l R = I,(9)" }, { "formula_coordinates": [ 5, 67.38, 294.39, 218.98, 12.69 ], "formula_id": "formula_11", "formula_text": "f R i,bj = ψ(φ(R bj )f i,bj ),(10)" }, { "formula_coordinates": [ 5, 76.99, 313.27, 205.37, 22.21 ], "formula_id": "formula_12", "formula_text": "v i = bj ω ibj (φ(R bj )) f R i,bj + k ω ik f i,k ,(11" }, { "formula_coordinates": [ 5, 282.35, 315.34, 4.01, 8.74 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 63.38, 460.98, 222.98, 43.26 ], "formula_id": "formula_14", "formula_text": "v R pi = φ(R bj )v pi , v R bj = φ(R bj )v bj , (12) α R,t+1 k = 1 N k pi g([R bj ( xt k -xt pi ), v R pi , v R bj ]),(13)" }, { "formula_coordinates": [ 5, 72.45, 509.22, 213.91, 13.91 ], "formula_id": "formula_15", "formula_text": "β t+1 k = ∆t • (R bj ) α R,t+1 k + βt k ,(14)" }, { "formula_coordinates": [ 5, 72.79, 525.95, 213.57, 13.38 ], "formula_id": "formula_16", "formula_text": "x t+1 k = ∆t • β t+1 k + xt k ,(15)" }, { "formula_coordinates": [ 5, 94.51, 689.69, 191.85, 26.65 ], "formula_id": "formula_17", "formula_text": "L t+1 m, * = 1 N i x t+1 i -xt+1 i 2 2 ,(16)" }, { "formula_coordinates": [ 5, 340.69, 72.94, 90.98, 13.16 ], "formula_id": "formula_18", "formula_text": "{x t+1 i } N i=1 , { xt+1 i } N i=1" }, { "formula_coordinates": [ 5, 351.11, 164.67, 194, 26.65 ], "formula_id": "formula_19", "formula_text": "L t+1 n = 1 N v i n t+1 i -nt+1 i 2 2 ,(17)" }, { "formula_coordinates": [ 5, 309.18, 267.72, 231.78, 26.65 ], "formula_id": "formula_20", "formula_text": "L t+1 c = 1 N c i max d -(x t+1 i -x t+1 a )n t+1 a , 0 2 , (18" }, { "formula_coordinates": [ 5, 540.96, 274.78, 4.15, 8.64 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 5, 317.13, 396.05, 227.98, 14.38 ], "formula_id": "formula_22", "formula_text": "L t+1 = λ m L t+1 m + λ n L t+1 n + λ b L t+1 c, b + λ g L t+1 c,g .(19)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b40", "b1", "b10", "b29", "b28" ], "table_ref": [], "text": "Despite tremendous progress in computer vision, a number of limitations remain. One important limitation is that all categories must be known or annotated a-priori. In other words, deep learning cannot discover new categories * Equal contribution Figure 1. We propose a novel retrieval-based clustering mechanism to improve the representation of the input image for generalized category discovery (GCD). (Top) We first leverage CLIP's aligned vision-language representations to retrieve a set of highly relevant text descriptions from a large text corpus using the input image as a query. To further leverage CLIP's large-scaled pre-trained representation, the input image, and its retrieved texts are encoded by a frozen CLIP image and text encoder into a set of feature encodings. (Bottom) Given the concatenated text and image views, we adopt the semi-supervised k-means clustering to cluster features into seen and unseen classes.\nnot reflected in the original training set. This limits applicability to a range of problem domains, including self-driving cars or personal devices, where new categories will inevitably appear often without annotation or even knowledge of which categories are known and which are not. The setting of Novel Category Discovery (NCD) [22] tackles this problem of discovering new categories in unlabeled data, by leveraging pre-training or auxiliary training. Recently, Generalized Category Discovery (GCD) [43] was formalized to make the setting more realistic, where the goal is to jointly discover unknown categories and classify known categories within the unlabeled data. This setting is related to semi-supervised learning but does not assume we know all of the classes in the data [5,41].\nThe state-of-the-art methods for this setting utilize selfsupervised image pre-training (e.g. DINO [2]) as auxiliary information used to encode the images, after which simple clustering is performed [43]. However, even though selfsupervised feature learning can show some out-of-domain generalization [19,25], it is still a difficult challenge as the features may not be relevant to entirely new categories.\nIn this paper, we posit that a key missing element to improve such generalization is a more effective encoding of the semantic relationships between object categories. Recently, aligned multi-modal (vision and language) models have been shown to give a remarkable boost in the generalization of visual learning, especially when scaled up [11,24,30,37]. These models are learned via alignment of visual and language embeddings through large-scale constrastive training of paired image-text data [37]. Such methods have demonstrated a potential for learning open-world visual concepts, since the textual alignment forces visual features to be nearby similar concepts, and hence new categories can be wellplaced in the feature space by the visual encoder.\nGiven the strong zero-shot results of such models, we, therefore, propose to first replace the uni-modal image encoder with one trained in a multi-modal fashion (CLIP [37]). By itself, this simple modification yields significant performance gains, beating all of the current state of the art. Hence, this setting can serve as a simple, but extremely strong, baseline.\nHowever, in just replacing the visual encoder, we discard the text branch of the multi-modal model and thus fail to fully leverage the joint vision-and-language (VL) embedding and its zero-shot generalizability. Furthermore, despite significant gains, the visual encoder from a multi-modal model can still perform poorly when the visual concepts are not well-represented in their training data and are somewhat out-of-distribution.\nIn this paper, we propose to augment the visual embeddings with retrieved textual information. This allows us to better leverage the joint VL embedding and the text encoder as well as provide the ability to extend the contextual knowledge available for clustering unknown and potentially out-ofdomain categories and images. Specifically, inspired by prior image captioning works [29], given an image, we retrieve the top-k most relevant text from a large text corpus [13, 20] (which could be from the multi-modal training set itself). We specifically use the alignment between CLIP's visual encoding (of the image) and textual encoding (pre-indexed for the text corpus). Our key hypothesis is that such pieces of text, and their encodings, can provide valuable contextual clues for clustering unseen categories. The retrieved top-k text are encoded by CLIP's text encoder, are mean-pooled, and then concatenated with the CLIP's visual encoding as the final multi-modal representation for clustering.\nWe show that our proposed method substantially outperforms the established state of the art across a number of datasets. We specifically expand the set of datasets to include out-of-domain data, DomainNet (a domain adaptation dataset), and Flowers102, a generic image recognition dataset. We perform extensive analysis of what corpus to retrieve from, how much to retrieve, and how to combine (or pool) the resulting embeddings. Crucially, we demonstrate in our ablation studies that the combination of our two ideas (using CLIP and retrieving contextual information) is needed to yield strong state of art results. This is because combined clustering of aligned embeddings is significantly more effective than clustering individual image and textual embeddings that are not aligned.\nIn summary, we make the following contributions:\n• We propose a simple but extremely effective baseline for GCD, utilizing CLIP image encodings rather than uni-modal pre-trained ones.\n• We further propose a cross-modal retrieval module by leveraging the cross-modal joint embedding space of CLIP to retrieve a set of contextual text descriptions for unlabeled data containing seen and unseen categories.\n• We perform extensive experimentation, including on more challenging out-of-distribution datasets, demonstrating Significant improvements over the state-of-art (and even our strong baseline) alongside rigorous quantitative and qualitative analysis of our approach." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Novel Category Discovery (NCD)", "publication_ref": [ "b22", "b22", "b16", "b46", "b9" ], "table_ref": [], "text": "NCD is a relatively nascent field, first proposed as \"crosstask transfer\" where learning on labeled data can be transferred to clustering of unseen categories (disjoint from the labeled set) in unlabeled data [22,23]. Several methods have been developed to tackle this task. [22,23] use a pair-wise siamese network trained on labeled data and apply it to train a clustering network on unlabeled data. Subsequent works improved upon this via a specialized deep clustering approach [17]. In RankStat [15, 16], a three-stage pipeline is deployed: The model is trained with self-supervision initially on all data for representation learning, then fine-tuned on labeled data to capture higher-level semantic knowledge, and finally ranking statistics are used to transfer knowledge from the labeled to unlabeled data. [47] presents a contrastive learning approach, generating hard negatives by mixing labeled and unlabeled data in the latent space. UNO [10] introduces a unified cross-entropy loss, jointly training a model on labeled and unlabeled data by trading pseudo-labels from classification heads. Our work builds on top of a new and more realistic setting named Generalized Category Discovery (GCD) [43] where the unlabeled samples can come both from seen and unseen classes. The original GCD method performed k-means based clustering of DINO embeddings, while recent developments such as XCon [9] have improved those results through additional contrastive training. In our paper, we focus on leveraging multi-modal models in several ways, which is orthogonal to such improvements. We also demonstrate superior results compared to all of the current published state of the art." }, { "figure_ref": [], "heading": "Unsupervised Clustering", "publication_ref": [ "b0", "b31", "b45", "b2", "b20", "b37", "b44" ], "table_ref": [], "text": "Clustering has a long history and has long been studied by the machine-learning community. The task is to automatically partition an unlabeled dataset into different semantic groups without access to information from a labeled set. To tackle this task, several shallow [1,32,46] and deep learning [3,12,21,38,45] approaches have been proposed. The deep learning-based methods can be roughly divided into two types, the first of which uses the pairwise similarity of samples to generate pseudo-labels for clustering and the second of which uses neighborhood aggregation to coalesce similar samples while at the same time pushing apart dissimilar samples, achieving a clustering effect. Such advanced clustering methods could be added to our approach, though we focus on improving the underlying feature space such that simple clustering methods can be used." }, { "figure_ref": [], "heading": "Self-Supervised and Multi-Modal Pre-Training", "publication_ref": [ "b13", "b1", "b17" ], "table_ref": [], "text": "Self-supervised learning has advanced rapidly over the years. Some methods leverage contrastive learning, often across augmented copies of the unlabeled image, by breaking symmetry e.g. via projection heads [6] or teacher-student training where the teacher comes from some version of the student (e.g. an exponential moving average of the student over the iterations) [14]. Recently, the advent of Vision Transformers (e.g. ViT) [8], which have significantly more flexibility and capacity, has enabled these methods both to scale (i.e. further improve) with larger unlabeled datasets [2] as well as provide unique opportunities for new mechanisms such as masking [18]. Besides unlabeled data, multi-modal methods leverage image-text pairs mined from the web. Again, methods such as contrastive learning can be used to push image and text embeddings together (when paired) or apart (when not). Methods such as CLIP [37], which do this across very large datasets, have shown impressive zero-shot performance. All of these methods are relevant to the GCD problem, as category discovery benefits from better representations (with self-supervised learning having nice properties out-of-distribution) and zero-shot classification is a similar problem except that in GCD the collection of unlabeled data is available. Further, our method explicitly leverages the alignment between image and text encoders in multi-modal models to better cluster unlabeled data." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we first introduce the notations and definitions of GCD [43]. Then, we explain how to use CLIP in GCD and introduce our method to tackle this task." }, { "figure_ref": [], "heading": "Problem Setup of GCD", "publication_ref": [ "b1", "b31" ], "table_ref": [], "text": "As formalized in [43], dataset D consists of two parts, labeled dataset D L = {(x i , y i )} N i=1 ∈ X ×Y L and unlabeled dataset D U = {(x i , y i )} M i=1 ∈ X × Y U , where Y L ⊂ Y U which is distinct from NCD [17] that assumes Y L ∩ Y U = ∅.\nThe goal is to learn a model to group the instances in D U based on information from D L . Taking advantage of the recent advances in vision transformers and their remarkable performance in various visual recognition tasks specifically for self-supervised representation learning [2], Vaze et al.\n[43] devise a two-stage training pipeline for the GCD task. First, for representation learning, they jointly fine-tune the representation by performing supervised contrastive learning on the labeled data and unsupervised contrastive learning on all the data. Let x i and x i be two views with random augmentations of the same image in a mini-batch B . The unsupervised contrastive loss is stated as:\nL u i = -log exp(zi•z i /τ ) n 1 [n =i] exp(zi•z n /τ )\nwhere\nz i = h(f (x i ))\nis the feature extracted by a backbone f (•) on the input image x i and projected to do the embedding space via a projection head h(•), z i is the feature from another view of the input image z i . The supervised contrastive loss is stated as\nL s i = -1 |N (i)| q∈N (i) log exp(zi•zq/τ ) n 1 [n =i] exp(zi•zn/τ )\nwhere N (i) denotes the indices of other images having the same label as x i in the mini-batch B . Then, the final objective is the combination of the two losses:\nL t = (1 -λ) i∈B L ∪B U L u i + λ i∈B L L s i\nwhere λ is a weight factor and B L , B U are mini-batches for labeled and unlabeled images respectively. For label assignments, a semi-supervised k-means is proposed, where the overall procedure is similar to k-means [32] However, there is a significant distinction in that semi-supervised kmeans takes into account the labeled data in D L during the computation of cluster assignment in each step. This means that the samples with labels will always be assigned to the correct cluster, irrespective of their distance to the nearest cluster centroids. " }, { "figure_ref": [], "heading": "Our Approach", "publication_ref": [], "table_ref": [], "text": "By combining both textual and visual information, language-image models can achieve improved performance in a wide range of tasks, so we propose to leverage CLIP's zero-shot ability and multi-modal aligned encoders for this setting, and then propose a retrieval-based augmentation." }, { "figure_ref": [ "fig_0", "fig_1", "fig_1" ], "heading": "Using CLIP in General Category Discovery", "publication_ref": [ "b28", "b39", "b8" ], "table_ref": [], "text": "We propose to tackle the GCD task by leveraging the crossmodal joint embedding from CLIP [37]. The CLIP model has two branches: the image branch CLIP-Image and the text branch CLIP-Text that encode image and text into a global feature representation, respectively. CLIP is trained on large-scale image and text pairs s.t. paired image and text are pushed together in the embedding space while unpaired ones are pulled apart. Please refer to Figure 2 for the overall architecture. To improve the representation of our data, specifically for both labeled and unlabelled data, we refine the representation by combining two techniques: supervised contrastive learning on the labeled data and unsupervised contrastive learning on all data. We do this by finetuning the representation on our target data simultaneously. CLIP learns image representation by contrasting them with the representations of text description of the image, such as \"A photo of a {class name}\". The text description is called prompt, and its design is vital in enhancing CLIP's performance. However, the unlabeled data contains unseen categories, and we do not have a list of them to use for prompts. As a result, inspired by recent works in image captioning [29], for the labeled and unlabeled set, we propose to mine a set of text descriptions providing complementary information to the input image from a text corpus. The key hypothesis is that such contextual information, provided as additional \"views\" of the image, can significantly aid in clustering. To that end, we propose to generate text descriptions for an image as shown in Figure 3 containing details and information of the input image to be mapped into the feature space. Training a separate captioning model to generate text descriptions might be expensive and nontrivial, so for each labeled and unlabeled image, we retrieve the top-k most relevant descriptions from a text corpus, turning this problem into a cross-model retrieval one, which we describe as follows. Description database The description database is an organized collection of textual descriptions relevant to an image, and we select the top-k most pertinent ones. Several options can be used, and we show results across annotations from databases such as Conceptual Captions (3M) [40], Conceptual Captions (12M) [4], MS Coco [31], and LION [39]. We don't perform any rigorous processing and simply collect all the captions.\nText description retrieval Given a query of an image, the goal is to retrieve the top-k most relevant text descriptions from the description database. To this end, we propose to exploit the cross-modal joint embedding from CLIP [37] for this cross-modal retrieval task. Specifically, we use CLIP-Text to encode all the descriptions in the description database as the search key. The image is encoded by CLIP-Image into a query. We then search in the description database for the text descriptions with the top-k highest cosine similarity scores. Some examples of the top-4 results are shown in Figure 3." }, { "figure_ref": [ "fig_0" ], "heading": "Multi-view generation for clustering", "publication_ref": [ "b0" ], "table_ref": [], "text": "The general approach for our feature vector extraction and view generation framework is illustrated in Figure 2 (Stage I). Given an image and a set of text descriptions, an image view (feature vector) is generated by encoding it using the CLIP image encoder, then using the CLIP text encoder, we encode the set of text descriptions, pool embeddings, and generate a view (sentence embedding) using mean pooling. Finally, the feature vectors of the image and text (views) are concatenated and projected into CLIP latent space, and clustering is performed directly in it.\nLabel assignment with semi-supervised k-means clustering Given the image view and the text view we concatenate the feature vectors and apply semi-supervised k-means clustering following [43] to group the unlabeled data into seen and unseen classes. The semi-supervised k-means is a transformation of the traditional k-means method into a constraint-based algorithm, where the number of clusters k is assumed known. This will involve requiring that the D L data instances are assigned to their appropriate clusters based on their ground-truth labels. The first set of centroids |Y L |for D L in semi-supervised k-means are obtained using actual class labels. The second set of centroids for the additional number of new classes |Y U \\Y L |are obtained from D U using k-means++ [1], but only within the constraint of D L centroids. During the process of updating and assigning centroids, instances from the same class in D L are always grouped together, whereas instances in D U can be assigned to any cluster based on their distance to various centroids. After the algorithm converges, each instance in D U can be given a cluster label." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model architecture details", "publication_ref": [], "table_ref": [], "text": "CLIP [37] has two encoders, CLIP-Image and CLIP-Text which are pre-trained transformer models for image and text. CLIP-Text is a base transformer model consisting of 12 layers, a hidden size of 768, and the final linear projection layer produces a representation vector of size 512. CLIP-Image is a hybrid ViT-Base model (which is the same as the DINOtrained model used for a fair comparison) consisting of 12 stacked layers, with a convolutional layer in the beginning for feature extraction. For a given image, a total of 49 embedding vectors with a hidden size of 768 are generated, and to match the output of the CLIP-Text encoder; the output hidden state is projected from 768 to 512 dimensions. We fine-tune the last block of the vision transformer starting with a learning rate of 5e-5 decaying it over time using a cosine annealed schedule. We train the model for 100 epochs using batches of size 128 and set the value of λ to 0.25 in the loss function (Eq. (3.1)). Tuning and testing is done on a separate validation set to select the best hyperparameters." }, { "figure_ref": [], "heading": "Datasets & Evaluation", "publication_ref": [ "b43", "b32", "b27" ], "table_ref": [ "tab_1" ], "text": "We evaluate the performance of our method on both generic image classification and fine-grained datasets. Following [43], we selected CIFAR-10/100 [27], ImageNet-100 [7], and Flowers102 [34] as the generic image classification datasets. We use CUB-200 [44], Stanford Cars [26], and FGVC-Aircraft [33] as fine-grained datasets. We also experiment with a challenging domain adaptation dataset DomainNet (Sketch) [36]. We split the training data into two parts, a labeled dataset and an unlabeled dataset by dividing all classes equally into seen classes and unseen ones, then sampling 50% images from the seen classes as unlabeled data so that the unlabeled set D U contains images from both seen classes and unseen classes, while the labeled set only contains seen classes. The splits are summarized in Table 2.\nEvaluation Metric To measure the performance of our model, we use the clustering accuracy (ACC) defined below.\nACC = max p∈P (Y U ) 1 N N i=1 1{y i = p(ŷ i )}\nwhere P is the set of all permutations that matches the model's predictions ŷi and the ground truth labels y i using the Hungarian method [28] and N is the total number of images in the unlabeled set. Following [43], we use the metric on three different sets, 'All' which refers to the entire unlabeled set D U , 'Old' referring to instances in D U belonging to classes in Y L , and 'New' referring to instances in D U belonging to Y U \\Y L . " }, { "figure_ref": [], "heading": "Comparison with the State-of-the-Art", "publication_ref": [ "b9" ], "table_ref": [ "tab_0", "tab_2" ], "text": "We start by comparing our method with the SOTA methods on both generic image classification, fine-grained image classification, and domain adaptation benchmarks. RankStats+ [16] and UNO+ [10] are two methods modified from two competitive baselines for NCD and adopted to the GCD setting. XCon [9] is a method that targets fine-grained datasets in the GCD setting, lastly, GCD w/ CLIP is our proposed use of the GCD method with CLIP image encoder in lieu of DINO. The results on generic image recognition benchmarks are shown in Table 1. On all the datasets we experimented with, our method shows the best performance across most of the categories, often improving upon previous works with large margins. On ImagetNet-100, CIFAR100, and Flowers102, our method outperforms the other methods on all subsets 'All', 'Old', and 'New', reinforcing the idea that our dual usage of multi-modal models boosts performance compared to vision only models. On the fine-grained image classification benchmarks, our results are presented in Table 3. We show the best performance of our method on all categories 'All', 'Old', and 'New' for most datasets while achieving comparable results for FGVC-Aicraft dataset. This indicates that our method is effective for fine-grained category discovery. On the domain adaptation classification front, our method shows the best results across all subsets 'All', 'Old', and 'New' on the DomainNet dataset, which indicates that our method is much more robust to distribution shift than standard ImageNet pre-trained models." }, { "figure_ref": [ "fig_1" ], "heading": "Analysis", "publication_ref": [ "b1", "b39", "b8" ], "table_ref": [ "tab_3", "tab_4", "tab_5" ], "text": "We analyze the contribution of certain aspects of our methodology through a rigorous ablation study. Specifically, we highlight the significance of the following components of the approach: whether language supervision can result in vision models with transferable representation versus classic image-only models, the effect of the number of texts k retrieved per image on the accuracy of the model, retrieved text quality, and CLIP image encoder ViT backbone with and without finetuning. How important is language supervision in this setting? Table 4 shows the effect of language on the clustering task. The Image Encoder column represents different types of vision transformer backbones. GCD is a finetuned ViT-B-16 backbone with DINO [2] pre-trained weights from GCD [43] and CLIP [37] is a finetuned pre-trained ViT-B-16 backbone. The Knowledge columns indicate whether we are clustering vision-only features or vision and text features combined. We record the accuracy of the model across all categories, All, Old, and New for three datasets, then average them for each combination of dataset, image encoder, and knowledge. As shown, the results indicate that CLIP image and text outperform image-only by a large margin, confirming that language does help in this setting compared to image-only models. We note that while using CLIP as an encoder without retrieval is an extremely strong baseline, our retrieval mechanism further improves performance by significant margins e.g. almost 4% on All and almost 6% specifically on Old. How important is the descriptiveness of retrieved captions? Text descriptions in typical datasets can vary in terms of how they relate to the image. Ideally, we want to encode salient objects in the image that are meaningful in representation learning for object recognition tasks. The learned representations for contrastive models are governed by the text transformer (captions for CLIP), suggesting that text descriptions that describe the contents of a scene in an image will improve transferability in the CLIP model. We verify this hypothesis and quantify the descriptiveness of a caption using multiple caption data sources. We perform top-4 crossmodal retrieval from Conceptual Captions (3M) [40], Con- [39], then record the accuracy of the model for each data corpus on All, Old, and New subsets averaged for each knowledge database.\nTable 5 shows the results of the model on three datasets CIFAR100, Stanford Cars, and DomainNet(Sketch). Previous work in linguistics has shown that captions that are descriptive (meant to replace an image) are different from those that are complementary or give additional information and context. Contrary to LAION and Conceptual Captions (12M) which usually contain information complementary to the image, Conceptual Captions (3M) and MS COCO are more descriptive due to the strict annotation process. We use a score given by CLIP of a caption matching its corresponding image in our cross-modal retrieval, and according to the results, the hypothesis does not align with our subjective assessment, at least for the datasets tested. We posit that the descriptiveness of the captions retrieved and DomainNet (Sketch), and we chose to limit retrieval of captions to Conceptual Captions (12M) [4]. Figure 3 suggests that variability in dataset captions can hurt the accuracy of the model. They suggest that some of the captions might not contain useful information making the model accuracy plateau or even reduce after a certain number.\nDoes CLIP need finetuning? One of the most impressive aspects of the CLIP model is its performance in zeroshot learning, classifying objects it has never seen before, based on their descriptions in natural language. In this experiment, we probe CLIP's performance in the GCD setting without performing any finetuning. Table 6 shows our results for a CLIP model finetuned versus a model without finetuning on three datasets, CIFAR100, Stanford Cars, and Sketch with a finetuned CLIP outperforming a non-finetuned CLIP model. Recent studies have shown that CLIP finetuning might distort its pretrained representation leading to unsatisfactory performance, but our results show that it can be finetuned with the right hyperparameter choices, challenging the notion that CLIP is not suitable for finetuning." }, { "figure_ref": [ "fig_3" ], "heading": "Qualitative results", "publication_ref": [ "b41" ], "table_ref": [], "text": "We further show a t-SNE [42] projection of ViT CLIP image features and Image-Text features to visualize the feature spaces of CIFAR10 by transforming the features into two dimensions. In Figure 4, we show the clustered features of the unlabeled data and compared the results of our method for image-only features against image and text features. For image-only features, data points from the same class are generally projected close to each other, and they form clear clusters with some overlapping between classes. In contrast, the image-text features form clear clusters with some clear separation which are further distinguished when using text along with an image, further confirming the utility of language in this setting." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose to tackle the Generalized Category Discovery setting. With the recent advances in Vision-Language pertaining (VLP), we propose to use CLIP and take advantage of its multi-modality in two ways. First, we propose to leverage the CLIP image encoder, yielding an extremely strong baseline for GCD. Second, we propose a complementary novel retrieval-based augmentation, specifically retrieving textual context from a text corpus and jointly clustering the image and text embeddings. We perform rigorous analysis demonstrating that our method is well suited for this setting.\nWe demonstrate significant quantitative improvements on four generic classifications, three fine-grained, and one domain adaptation datasets showing significant performance gains over previous methods. Importantly, we show that our two ways of leveraging CLIP are complementary and that both are necessary to achieve strong state-of-art results. There are a number of limitations and future work, including enhancing the retrieval process to improve the quality of the retrieved contextual knowledge." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "from a corpus and the size of the knowledge database, as well as the diversity of captions, all play a role. How many captions do we need to retrieve for each image? We probe how the variability of captions within a caption database affects our model transfer capabilities. There are a number of ways to annotate an image as shown in Figure 3. In each corpus, captions vary in terms of how an object is described e.g. \"train\" or \"railcar\", and which part of the image the focus is on, e.g. \"cloud\" or \"bird\". The focus, lexical, and style variation in captioning could confuse the model and make it push image-text pairs apart instead of pulling them together. We examine the sensitivity of our model to the number of captions per image (top-k), averaging accuracy across three datasets, CIFAR100, Stanford Cars," }, { "figure_ref": [], "heading": "Appendices", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. CLIP ViT Backbone", "publication_ref": [], "table_ref": [], "text": "We investigate and compare CLIP ViT-B/16 versus ViT-L/14 (24 layers, a hidden size of 1024, and 307M parameters) to show the effect of a larger ViT model on the clustering task. We finetune the last block of ViT-L/14 transformer starting with a smaller learning rate of 4e-6 compared to ViT-B/16, decaying it over time using a cosine annealed schedule. We train the model for 100 epochs using batches of size 64. Details are shown in table 7. ViT-L/14 performs better across different types of datasets, out-of-distribution, generic image recognition, and fine-grained. It outperforms ViT-B/16 by over 3% aggregated over 'All', 'Old', and 'New' categories. It has been mentioned in [37] that zero-shot Im-ageNet validation set accuracy between ViT-L/14 and ViT-B/16 is over 7% which validates our results." } ]
Generalized Category Discovery (GCD) requires a model to both classify known categories and cluster unknown categories in unlabeled data. Prior methods leveraged self-supervised pre-training combined with supervised finetuning on the labeled data, following by simple clustering methods. In this paper, we posit that such methods are still prone to poor performance on out-of-distribution categories, and do not leverage a key ingredient: Semantic relationships between object categories. We therefore propose to leverage multi-modal (vision and language) models, in two complementary ways. First, we establish a strong baseline by replacing uni-modal features with CLIP, inspired by its zero-shot performance. Second, we propose a novel retrieval-based mechanism that leverages CLIP's aligned vision-language representations by mining text descriptions from a text corpus for the labeled and unlabeled set. We specifically use the alignment between CLIP's visual encoding of the image and textual encoding of the corpus to retrieve top-k relevant pieces of text, and incorporate their embeddings to perform joint image+text semi-supervised clustering. We perform rigorous experimentation and ablations (including on where to retrieve from, how much to retrieve, and how to combine information), and validate our results on several datasets including out-of-distribution domains, demonstrating stateof-art results. On the generic image recognition datasets, we beat the current state of the art (XCon [9]) by up to 6.7% on all classes, up to 2.0% on known classes, and 11.6% on average over unknown classes, and on fine-grained datasets up to 14.3% on average over all classes, and up to 10.7% on average over unknown classes.
CLIP-GCD: Simple Language Guided Generalized Category Discovery
[ { "figure_caption": "Figure 2 .2Figure 2. Model Architecture. In stage I (left), we propose a cross-modal retrieval module to retrieve a set of contextual text descriptions for the labeled and unlabeled data, generate a view from pooled sentence embedding as complementary information for clustering. In Stage II (right), we concatenate the image view and the text view and use semi-supervised k-means clustering to group seen and unseen classes.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. A sample of retrieved top-4 most relevant text descriptions from Conceptual Captions (3M) for an image from ImageNet dataset.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "(a) Image only feature visualization on CIFAR10 with t-SNE (b) Image and Text feature visualization on CI-FAR10 with t-SNE (c) Sensitivity of the model to the # of captions per image averaged across three datatsets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 44Figure 4", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparative results on generic image recognition datasets", "figure_data": "CIFAR10CIFAR100ImageNet-100Flowers-102ClassesAllOld NewAllOld NewAllOld NewAllOld NewRankStats+ [16] 46.8 19.2 60.558.2 77.6 19.337.1 61.6 24.8---UNO+ [10]68.6 98.3 53.869.5 80.6 47.270.3 95.0 57.9---GCD [43]91.5 97.9 88.273.0 76.2 66.574.1 89.8 66.374.1 82.4 70.1GCD w/ CLIP95.9 97.0 95.884.2 83.1 82.379.3 94.6 71.167.8 82.3 60.5XCon [9]96.0 97.3 95.474.2 81.2 60.377.6 93.5 69.7---Ours96.6 97.2 96.485.2 85.0 85.684.0 95.5 78.276.3 88.6 70.2CIFAR10CIFAR100CUB-200SCARS|Y L |58010098|Y U |10100200196|D L |12.5k20k1.5k2.0k|D U |37.5k30k4.5k6.1kImageNet-100 DomainNet(Sketch) Flowers-102 FGVC-Aircraft|Y L |501725150|Y U |100345102100|D L |31.9K10.1k2551.7k|D U |95.3k38k7655k", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Our dataset splits in the experiments. (|YL|,|YU |) correspond to the number of classes in the labeled and unlabeled sets respectively. (|DL|,|DU |) is the number of images for each set.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparative results on SSB[35] and DomainNet[36] ", "figure_data": "Stanford CarsFGVC-AircraftDomainNet (Sketch)CUB-200ClassesAllOld NewAllOld NewAllOldNewAllOld NewRankStats+ [16] 28.3 61.8 12.126.9 36.4 22.2---33.3 51.6 24.2UNO+ [10]35.5 70.5 18.640.3 56.4 32.2---35.1 49.0 28.1GCD [43]39.0 57.6 29.945.0 41.1 46.945.2 50.443.351.3 56.6 48.7GCD w/ CLIP62.8 85.2 52.043.7 52.8 39.252.7 74.243.759.7 76.1 51.5XCon [9]40.5 58.8 31.747.7 44.4 49.4---52.2 54.3 51.0Ours70.6 88.2 62.250.0 56.6 46.555.2 75.547.462.8 77.1 55.7", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Image only versus Image and Text clustering accuracy with different image encoders.", "figure_data": "DatasetImage Encoder Knowledge AllOld NewCIFAR-100GCDN73.0 76.2 66.5CIFAR-100GCDY75.9 79.7 67.3CIFAR-100CLIPN84.2 83.1 82.3CIFAR-100CLIPY85.2 85.0 85.6Stanford CarsGCDN39.0 57.6 29.9Stanford CarsGCDY41.1 60.0 33.5Stanford CarsCLIPN62.8 85.2 52.0Stanford CarsCLIPY70.6 88.2 62.2SketchGCDN30.2 46.4 24.3SketchGCDY30.9 48.3 25.9SketchCLIPN52.7 74.2 43.7SketchCLIPY55.2 75.5 47.4AverageGCDN47.4 60.1 40.2AverageGCDY49.3 62.7 42.2AverageCLIPN66.6 80.8 59.3AverageCLIPY70.3 82.9 65.1ceptual Captions (12M) [4], and COCO [31], and LION", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Accuracy of the model using different knowledge databases as a source of text descriptions", "figure_data": "DatasetKnowledge DB AllOld NewCIFAR-100CC-12M85.9 85.0 88.1CIFAR-100CC-3M82.8 82.6 83.2CIFAR-100MSCOCO", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Results for finetuned CLIP vs. not finetuned", "figure_data": "DatasetFinetuned CLIP AllOld NewCIFAR-100N68.7 71.1 63.7CIFAR-100Y85.2 85.0 85.6Stanford CarsN65.8 78.9 59.5Stanford CarsY70.6 88.2 62.2SketchN51.6 60.0 48.5SketchY55.2 75.5 47.4AverageN62.0 70.0 57.2AverageY70.3 82.9 65.1", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Rabah Ouldnoughi; Chia-Wen Kuo; Zsolt Kira
[ { "authors": "David Arthur; Sergei Vassilvitskii", "journal": "", "ref_id": "b0", "title": "k-means++: the advantages of careful seeding", "year": "2007" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Julien Herv'e J'egou; Piotr Mairal; Armand Bojanowski; Joulin", "journal": "", "ref_id": "b1", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Jianlong Chang; Lingfeng Wang; Gaofeng Meng; Shiming Xiang; Chunhong Pan", "journal": "", "ref_id": "b2", "title": "Deep adaptive image clustering", "year": "2017" }, { "authors": "Soravit Changpinyo; Piyush Kumar Sharma; Nan Ding; Radu Soricut", "journal": "", "ref_id": "b3", "title": "Conceptual 12m: Pushing web-scale imagetext pre-training to recognize long-tail visual concepts", "year": "2021" }, { "authors": "Olivier Chapelle; Bernhard Schlkopf; Alexander Zien", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b4", "title": "Semi-supervised learning", "year": "2006" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b5", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; K Li; Li Fei-Fei", "journal": "", "ref_id": "b6", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b7", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Yixin Fei; Zhongkai Zhao; Si Xiao Yang; Bingchen Zhao", "journal": "", "ref_id": "b8", "title": "Xcon: Learning with experts for fine-grained category discovery", "year": "2022" }, { "authors": "Enrico Fini; E Sangineto; Stéphane Lathuilière; Zhun Zhong; Moin Nabi; Elisa Ricci", "journal": "", "ref_id": "b9", "title": "A unified objective for novel class discovery", "year": "2021" }, { "authors": "Andreas Furst; Elisabeth Rumetshofer; Viet-Hung Tran; Hubert Ramsauer; Fei Tang; Johannes Lehner; David P Kreil; Michael Kopp; Günter Klambauer; Angela Bitto-Nemling; Sepp Hochreiter", "journal": "", "ref_id": "b10", "title": "Cloob: Modern hopfield networks with infoloob outperform clip", "year": "2021" }, { "authors": "Wouter Van Gansbeke; Simon Vandenhende; Stamatios Georgoulis; Marc Proesmans; Luc Van Gool", "journal": "", "ref_id": "b11", "title": "Learning to classify images without labels", "year": "2020" }, { "authors": "Yunchao Gong; Liwei Wang; Micah Hodosh; J Hockenmaier; Svetlana Lazebnik", "journal": "", "ref_id": "b12", "title": "Improving image-sentence embeddings using large weakly annotated photo collections", "year": "2014" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar", "journal": "Advances in neural information processing systems", "ref_id": "b13", "title": "Bootstrap your own latent-a new approach to self-supervised learning", "year": "2020" }, { "authors": "K Han; Sylvestre-Alvise Rebuffi; Sébastien Ehrhardt; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b14", "title": "Automatically discovering and learning new visual categories with ranking statistics", "year": "2020" }, { "authors": "K Han; Sylvestre-Alvise Rebuffi; Sébastien Ehrhardt; Andrea Vedaldi; Andrew Zisserman", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Autonovel: Automatically discovering and learning novel visual categories", "year": "2022" }, { "authors": "K Han; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b16", "title": "Learning to discover novel visual categories via deep transfer clustering", "year": "2019" }, { "authors": "Kaiming He; Xinlei Chen; Saining Xie; Yanghao Li; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b17", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "Dan Hendrycks; Mantas Mazeika; Saurav Kadavath; Dawn Xiaodong Song", "journal": "", "ref_id": "b18", "title": "Using self-supervised learning can improve model robustness and uncertainty", "year": "2019" }, { "authors": "Youngpeter Hodoshmicah; Hockenmaierjulia", "journal": "Journal of Artificial Intelligence Research", "ref_id": "b19", "title": "Framing image description as a ranking task", "year": "2013" }, { "authors": "Yen-Chang Hsu; Zsolt Kira", "journal": "", "ref_id": "b20", "title": "Neural network-based clustering using pairwise constraints", "year": "2015" }, { "authors": "Yen-Chang Hsu; Zhaoyang Lv; Zsolt Kira", "journal": "", "ref_id": "b21", "title": "Learning to cluster in order to transfer across domains and tasks", "year": "2018" }, { "authors": "Yen-Chang Hsu; Zhaoyang Lv; Joel Schlosser; Phillip Odom; Zsolt Kira", "journal": "", "ref_id": "b22", "title": "Multi-class classification without multi-class labels", "year": "2019" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc V Le; Yun-Neuralan Sung; Zhen Li; Tom Duerig", "journal": "", "ref_id": "b23", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Umar Khalid; Ashkan Esmaeili; Nazmul Karim; Nazanin Rahnavard", "journal": "", "ref_id": "b24", "title": "Rodd: A self-supervised approach for robust out-of-distribution detection", "year": "2022" }, { "authors": "Jonathan Krause; Michael Stark; Jia Deng; Li Fei-Fei", "journal": "", "ref_id": "b25", "title": "3d object representations for fine-grained categorization", "year": "2013" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b26", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Harold W Kuhn", "journal": "Naval Research Logistics (NRL)", "ref_id": "b27", "title": "The hungarian method for the assignment problem", "year": "2010" }, { "authors": "Chia-Wen Kuo; Zsolt Kira", "journal": "", "ref_id": "b28", "title": "Beyond a pre-trained object detector: Cross-modal textual and visual context for image captioning", "year": "2022" }, { "authors": "Yangguang Li; Feng Liang; Lichen Zhao; Yufeng Cui; Wanli Ouyang; Jing Shao; Fengwei Yu; Junjie Yan", "journal": "", "ref_id": "b29", "title": "Supervision exists everywhere: A data efficient contrastive languageimage pre-training paradigm", "year": "2022" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge J Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence Zitnick", "journal": "", "ref_id": "b30", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "J Macqueen", "journal": "", "ref_id": "b31", "title": "Some methods for classification and analysis of multivariate observations", "year": "1967" }, { "authors": "Subhransu Maji; Esa Rahtu; Juho Kannala; Matthew B Blaschko; Andrea Vedaldi", "journal": "", "ref_id": "b32", "title": "Fine-grained visual classification of aircraft", "year": "2013" }, { "authors": "Maria-Elena Nilsback; Andrew Zisserman", "journal": "", "ref_id": "b33", "title": "Automated flower classification over a large number of classes", "year": "2008-12" }, { "authors": "Poojan Oza; M Vishal; Patel", "journal": "", "ref_id": "b34", "title": "C2ae: Class conditioned auto-encoder for open-set recognition", "year": "2019" }, { "authors": "Xingchao Peng; Qinxun Bai; Xide Xia; Zijun Huang; Kate Saenko; Bo Wang", "journal": "", "ref_id": "b35", "title": "Moment matching for multi-source domain adaptation", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b36", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Sébastien Sylvestre-Alvise Rebuffi; K Ehrhardt; Andrea Han; Andrew Vedaldi; Zisserman", "journal": "", "ref_id": "b37", "title": "Lsd-c: Linearly separable deep clusters", "year": "2021" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b38", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Piyush Sharma; Nan Ding; Sebastian Goodman; Radu Soricut", "journal": "", "ref_id": "b39", "title": "Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning", "year": "2018" }, { "authors": "Kihyuk Sohn; David Berthelot; Chun-Liang Li; Zizhao Zhang; Nicholas Carlini; Ekin Dogus Cubuk; Alexey Kurakin; Han Zhang; Colin Raffel", "journal": "", "ref_id": "b40", "title": "Fixmatch: Simplifying semisupervised learning with consistency and confidence", "year": "2020" }, { "authors": "Laurens Van Der Maaten", "journal": "J. Mach. Learn. Res", "ref_id": "b41", "title": "Accelerating t-sne using tree-based algorithms", "year": "2014" }, { "authors": "K Sagar Vaze; Andrea Han; Andrew Vedaldi; Zisserman", "journal": "", "ref_id": "b42", "title": "Generalized category discovery", "year": "2007" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge J Belongie", "journal": "", "ref_id": "b43", "title": "The caltech-ucsd birds-200-2011 dataset", "year": "2011" }, { "authors": "Bo Yang; Xiao Fu; N Sidiropoulos; Mingyi Hong", "journal": "", "ref_id": "b44", "title": "Towards k-means-friendly spaces: Simultaneous deep learning and clustering", "year": "2017" }, { "authors": "Lihi Zelnik; -Manor ; Pietro Perona", "journal": "", "ref_id": "b45", "title": "Self-tuning spectral clustering", "year": "2004" }, { "authors": "Zhun Zhong; Enrico Fini; Subhankar Roy; Zhiming Luo; Elisa Ricci; N Sebe", "journal": "", "ref_id": "b46", "title": "Neighborhood contrastive learning for novel class discovery", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 308.5, 201.92, 238.35, 45.52 ], "formula_id": "formula_0", "formula_text": "As formalized in [43], dataset D consists of two parts, labeled dataset D L = {(x i , y i )} N i=1 ∈ X ×Y L and unlabeled dataset D U = {(x i , y i )} M i=1 ∈ X × Y U , where Y L ⊂ Y U which is distinct from NCD [17] that assumes Y L ∩ Y U = ∅." }, { "formula_coordinates": [ 3, 362.29, 409.89, 125.5, 16.24 ], "formula_id": "formula_1", "formula_text": "L u i = -log exp(zi•z i /τ ) n 1 [n =i] exp(zi•z n /τ )" }, { "formula_coordinates": [ 3, 335.19, 435.16, 57.89, 9.68 ], "formula_id": "formula_2", "formula_text": "z i = h(f (x i ))" }, { "formula_coordinates": [ 3, 336.29, 499.65, 177.51, 20.55 ], "formula_id": "formula_3", "formula_text": "L s i = -1 |N (i)| q∈N (i) log exp(zi•zq/τ ) n 1 [n =i] exp(zi•zn/τ )" }, { "formula_coordinates": [ 3, 348.76, 569.98, 155.96, 19.26 ], "formula_id": "formula_4", "formula_text": "L t = (1 -λ) i∈B L ∪B U L u i + λ i∈B L L s i" }, { "formula_coordinates": [ 5, 350.97, 579.8, 152.03, 24.35 ], "formula_id": "formula_5", "formula_text": "ACC = max p∈P (Y U ) 1 N N i=1 1{y i = p(ŷ i )}" } ]
2023-10-22
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b14", "b16", "b3", "b4", "b1", "b2", "b7", "b8", "b0", "b7" ], "table_ref": [], "text": "Many existing autonomous driving models [12,15,16] involve a multi-stage pipeline of independent tasks, such as perception [11,17], prediction [4,5] and planning [2,3].\nWhile this design simplifies the difficulty of collaboration across teams, it leads to information loss and error accumulation in the overall system due to the independence of optimization targets and model training. To better predict control signals and enhance user safety, an end-to-end approach that benefits from spatial-temporal feature learning from the ego vehicle and surrounding environment is desired.\nPlanning for autonomous driving is the ultimate goal of the entire system. To achieve these, some methods are accomplished through perception tasks, such as 3D object detection and semantic segmentation to obtain temporal and spatial information about the surrounding environment. A natural idea is that if a model can perform well in these perception tasks, it can make accurate, safe, and comfortable trajectory planning based on this information. ST-P3 [7] proposes an interpretable vision-based end-to-end system that unifies feature learning for perception, prediction, and planning. UniAD [8] adopts a systematic model design for the planning task by connecting all intermediate task nodes based on the query-based design where the relationship between multiple tasks can be modeled and encoded. VAD [9] models the scene in a fully vectorized way, getting rid of computationally intensive grid feature representation and being more efficient in computation. This vectorized feature representation helps autonomous driving vehicles focus on the crucial map and agent elements, and plan a reasonable future trajectory. The above-mentioned methods achieve promising performance on the trajectory planning task of the ego vehicle on the nuScenes [1] dataset, which is the most commonly used one in the area. However, in this paper, in order to explore whether the existing evaluation metrics can accurately measure the superiority of different methods, we only use the physical state of the ego vehicle during driving as input (i.e., a subset of the information used by existing methods) to conduct experiments, instead of using the perception and prediction information provided by the camera or LiDAR. In other words, there is no encoder for visual or point cloud feature extraction in the model. We directly unfold all the physical information of the ego vehicle into one-dimensional vectors and feed them into a multi-layer perceptron (MLP) after concatenation.\nDuring training, we use the ground-truth trajectory as supervision and have the model directly predict the ego vehicle's future trajectory points for a certain period of time. Following previous methods [7,8], we validate our approach on the nuScenes dataset using the metrics of L2 error and collision rate.\nAlthough the design of our model is simple and no perception information is utilized, it achieves similar trajectory planning performance on the nuScenes dataset. We attribute this to the inadequacy of current evaluation metrics for planning tasks on the nuScenes dataset to accurately compare the performance of different methods. In fact, the motion of the ego vehicle in the future can be reflected to a certain extent by using the past trajectory, velocity, acceleration information, and temporal continuity." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "The overall pipeline of our method is illustrated in Fig- ure 1. As our model requires no perception information, there is no component for visual or point-cloud feature extraction. The inputs only consist of two parts: the ego states and the high-level command representing its future shortterm motion trend. Our model is trained in an end-to-end manner." }, { "figure_ref": [], "heading": "Model Inputs", "publication_ref": [ "b8" ], "table_ref": [], "text": "Ego States.\nFollowing [9], we collect the ego vehicle's motion trajectories of the past T p = 4 frames, instantaneous velocity and acceleration as input. The trajectory of each frame consists of three values: (x, y, θ), repre-senting the position and heading angle of ego vehicle, respectively. The instantaneous velocity (v x , v y , ω) and acceleration (a x , a y , β) represent the current x-directional, ydirectional, and angular velocity and acceleration, respectively. We obtain this information from the nuScenes CAN bus expansion. The above-mentioned ego states are then flattened and concatenated into a one-dimensional vector.\nHigh-Level Command.\nSince our model does not use HD maps as input, a high-level command is required for navigation. Following the common practice [7, 8], three types of commands are defined: turn left, go straight, and turn right. Specifically, when the ego vehicle displaces > 2m to the left or right direction in the future 3s, the corresponding command is set to turn left or right. Otherwise, it corresponds to going straight. We represent the command using a 1 × 3 one-hot encoding.\nThe ego states and high-level command are concatenated together as the network's inputs, so the final dimension of the input vector is 21.\nNetwork Structure. The structure of our model is a simple MLP and can be summarized as Linear 21 512 -ReLU -Linear 512 512 -ReLU -Linear 512 18 . For each Linear Cout Cin , C in and C out represent the numbers of input and output channels, respectively. The final outputs represent the ego vehicle's trajectories (x,y coordinates) and heading angles for the future T f = 6 frames." }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [ "b8", "b8" ], "table_ref": [], "text": "The loss function L is implemented with a L1 loss between the predicted planning trajectories Vego and the ground-truth ego trajectories V ego , which can be formulated as follows: ---0.48 0.96 1.65 1.03 0.05 0.17 0.71 0.31 VAD-Tiny [9] 0.20 0.38 0.65 0.41 0.10 0.12 0.27 0.16 VAD-Base [9] 0.17 0.34 0.60 0.37 0.07 0.10 0.24 0. where N f denotes the number of future frames, e.g., 6.\nL = 1 N f N f i=1 || Vego -V ego || 1(1)\nSince the resolution of occupancy map is 0.5m for one grid, the predicted trajectory point is mapped to the same grid if it falls into the same 0.5m segment, e.g., [1.5m, 2m), with the ground truth on both x and y axes. We re-weight the loss by 0.5 on samples where the predicted trajectory points and the ground truth coincide at the same grid for hard sample mining to reduce the collision rate." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset & Evaluation Metrics", "publication_ref": [ "b0" ], "table_ref": [], "text": "Dataset.\nFollowing the common practice [7-9] in the planning task, we use the nuScenes [1] dataset in our experiments for both training and testing. The dataset includes 1K scenes and approximately 40K key-frames mainly collected in Boston and Singapore using vehicles equipped with both LiDAR and surrounding cameras. The data collected for each frame includes multi-view camera images, LiDAR, velocity, acceleration, etc." }, { "figure_ref": [], "heading": "Metrics.", "publication_ref": [], "table_ref": [], "text": "We use the implementation 1 provided by ST-P3 [7] to evaluate the output trajectories for time horizons of 1s, 2s, and 3s. To evaluate the quality of the predicted ego trajectories, two commonly used metrics [7-9] are calculated: L2 error (in meters) and collision rate (in percentage). The average L2 errors are calculated between the predicted and ground-truth trajectories for corresponding waypoints within the next 1s, 2s, and 3s time horizons, respectively. To determine how often the ego vehicle collides with other objects, the collision rate is computed by placing a box rep-1 https://github.com/OpenPerceptionX/ST-P3/blob/ main/stp3/metrics.py resenting the ego vehicle at each waypoint on the predicted trajectory and then detecting if any collision with other oriented bounding boxes that represent vehicles and pedestrians in the scene occurs." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b13", "b12" ], "table_ref": [], "text": "Our model is implemented in both the PaddlePaddle and PyTorch framework. The AdamW [14] optimizer is used with an initial learning rate of 4e-6 and weight decay of 1e-2. The cosine annealing [13] learning rate schedule is utilized. Our model is trained for 6 epochs with a batch size of 4 on 1 NVIDIA Tesla V100 GPUs." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We conduct some ablation experiments in Table 1 to analyze the impact of velocity, acceleration, trajectories, and high-level command information to the performance of our model. We gradually add the acceleration, velocity, and high-level command information to the input, the average L2 error and collision rate continually decrease from 0.97m to 0.29m, and 0.49% to 0.19%. It is worth mentioning that the collision rate of our method is not as low as some perception-based methods. We believe this is due to the insufficient decision-making process resulting from the mere fitting of motion information, which raises safety concerns. Besides, the difference of average collision rates between our method and the others (e.g., 0.19% v.s. 0.14%) represents only about 2 ∼ 3 samples in all 4819 ones. We believe that more discriminative testing scenarios are needed to better demonstrate the advantages of perception-based methods." }, { "figure_ref": [], "heading": "Distance", "publication_ref": [], "table_ref": [], "text": "Ratio Ratio " }, { "figure_ref": [], "heading": "Multi-view Cameras LiDAR TOP", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "No Actual Collision", "publication_ref": [], "table_ref": [], "text": "Occupancy Map (Grid Size=0.5m)" }, { "figure_ref": [], "heading": "Detected Collision", "publication_ref": [], "table_ref": [], "text": "Occupancy Map (Grid Size=0.1m)" }, { "figure_ref": [], "heading": "No Detected Collision", "publication_ref": [], "table_ref": [], "text": "Figure 3. A typical ground-truth trajectory collision case caused by different grid sizes of the occupancy maps. It can be observed that the gird size for occupancy map generating has a great impact on the collision test, which is commonly used in the evaluation of collision in existing methods. For instance, in the case of grid size = 0.1m, the ground-truth trajectory is correctly recognized as a no-collision case while being misjudged when grid size = 0.5m. We can also find from the bottom-right of the figure that when grid size = 0.5m, some object masks even become irregular, which are supposed to be rectangles (e.g., the orange and red ones)." }, { "figure_ref": [], "heading": "Analysis and Discussion", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Trajectory Distribution of nuScenes", "publication_ref": [], "table_ref": [], "text": "This sub-section mainly analyzes the distribution of the ego vehicle's states on the nuScenes training set from two perspectives: trajectory points in the future 3s, heading and curvature angles.\nTrajectory Points. We plot all future 3s trajectory points in the training set in Figure 2 (a). It can be seen from the figure that the trajectories are largely concentrated in the middle part (go straight), and the trajectories are mainly straight lines, or curves with very small curvatures.\nHeading and Curvature Angles. The heading angle indicates the future driving direction relative to the current time, while the curvature angle reflects the vehicle's turning rate. As illustrated in Figure 2 (b) and (c), nearly 70% of the heading angles and curvature angles lie within the ranges of -0.2 to 0.2 and -0.02 to 0.02 radians, respectively. This finding is consistent with the conclusion drawn from the distribution of trajectory points.\nBased on the above analysis on the distributions of the trajectory points, heading and curvature angles, we argue that in the nuScenes training set, ego vehicles tend to move forward along straight lines and at small angles during driv-ing in short-time horizons." }, { "figure_ref": [], "heading": "Collision in Ground Truth", "publication_ref": [], "table_ref": [], "text": "When calculating the collision rate, the common practice of existing methods is to first project objects such as vehicles and pedestrians into the bird's-eye-view (BEV) space and then convert them into occupied regions in the occupancy map, where loss of precision occurs. After this process, we find that a small fraction of ground-truth trajectory samples (about 2%) also overlap with obstacles in the occupancy grid, resulting in collisions being falsely detected.\nHowever, the ego vehicle does not actually collide with any other objects while collecting data annotations. This anomaly is due to employing an occupancy map with a relatively large grid size, leading to false collisions when the ego vehicle approaches certain objects, e.g., smaller than the size of a single occupancy map pixel. We show an example of this phenomenon in Figure 3, together with the collision detection results of ground-truth trajectories for two different grid sizes. Under a smaller grid size (0.1m) shown in the middle-right, the evaluation system correctly identifies the ground-truth trajectory as not colliding, but under a larger grid size (0.5m) in the bottom-right, a false collision detection occurs.\nAfter observing the impact of occupancy grid size on trajectory collision detection, we test with a grid size of 0.6m. The nuScenes training set has 4.8% collision samples, while the validation set has 3.0%. It is worth mentioning that when we previously use a grid size of 0.5m, only 2.0% of the samples in the validation set are misclassified as collisions. This proves once again that the current method for determining the collision rate is not robust and accurate enough." }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "In this paper, we rethink the conventional evaluation metrics used in end-to-end autonomous driving by employing a simple MLP-based model that solely relies on physical states without any visual or point cloud perception as input. Despite its simplicity and the absence of perceptual information, our model achieves similar performance on the nuScenes dataset, suggesting that the current evaluation metrics may not adequately capture the superiority of different methods." }, { "figure_ref": [], "heading": "Limitations.", "publication_ref": [], "table_ref": [], "text": "The primary objective of this paper is to present our observations rather than propose a new model. Our findings demonstrate the potential limitations of the current evaluation scheme on the nuScenes dataset. Although our model performs well within the confines of the nuScenes dataset, we acknowledge that it is merely an impractical toy incapable of functioning in real-world scenarios. Driving without any knowledge beyond the ego vehi-cle's states is an insurmountable challenge.\nWe believe that perception-based planning methods will ultimately become the solution to autonomous driving, producing much safer trajectories than our toy model. We hope that our insights will stimulate further research in the field, encouraging a reevaluation and enhancement of the planning task for end-to-end autonomous driving." } ]
Modern autonomous driving systems are typically divided into three main tasks: perception, prediction, and planning. The planning task involves predicting the trajectory of the ego vehicle based on inputs from both internal intention and the external environment, and manipulating the vehicle accordingly. Most existing works evaluate their performance on the nuScenes dataset using the L2 error and collision rate between the predicted trajectories and the ground truth. In this paper, we reevaluate these existing evaluation metrics and explore whether they accurately measure the superiority of different methods. Specifically, we design an MLP-based method that takes raw sensor data (e.g., past trajectory, velocity, etc.) as input and directly outputs the future trajectory of the ego vehicle, without using any perception or prediction information such as camera images or LiDAR. Our simple method achieves similar end-to-end planning performance on the nuScenes dataset with other perception-based methods, reducing the average L2 error by about 20%. Meanwhile, the perception-based methods have an advantage in terms of collision rate. We further conduct in-depth analysis and provide new insights into the factors that are critical for the success of the planning task on nuScenes dataset. Our observation also indicates that we need to rethink the current open-loop evaluation scheme of end-to-end autonomous driving in nuScenes.
Rethinking the Open-Loop Evaluation of End-to-End Autonomous Driving in nuScenes
[ { "figure_caption": "Figure 1 .1Figure 1. Overall pipeline. Inputs: 1) ego vehicle's states: motion trajectories (x, y, θ) of the past Tp frames (in our experiments Tp = 4), instantaneous velocity (vx, vy, ω) and acceleration (ax, ay, β). θ, ω, and β indicate heading angle, angular velocity, and angular acceleration, respectively; 2) high-level command (one-hot encoded). B indicates batch size. Outputs: motion trajectory of the ego vehicle in the future T f frames (in our experiments T f = 6).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Distribution analysis of nuScenes training set. The trajectory points are concentrated in the middle forward area, and the heading angle and curvature angle are concentrated around radian 0. We can conclude that most cases of the ego vehicle are in straight and small angles forward, and there are few cases of large angle turns.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Comparison with existing perception-based methods. Our method achieves slightly lower L2 error, while the collision rate is higher than some other methods. Results in the table except for our method are collected from VAD[9].", "figure_data": "14", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Jiang-Tian Zhai; Ze Feng; Jinhao Du; Yongqiang Mao; Jiang-Jiang Liu; Zichang Tan; Yifu Zhang; Xiaoqing Ye; Jingdong Wang
[ { "authors": "Holger Caesar; Varun Bankiti; Alex H Lang; Sourabh Vora; Venice Erin Liong; Qiang Xu; Anush Krishnan; Yu Pan; Giancarlo Baldan; Oscar Beijbom", "journal": "", "ref_id": "b0", "title": "nuscenes: A multimodal dataset for autonomous driving", "year": "2020" }, { "authors": "Long Chen; Lukas Platinsky; Stefanie Speichert; Blazej Osinski; Oliver Scheel; Yawei Ye; Hugo Grimmett; Luca Del Pero; Peter Ondruska", "journal": "ICRA", "ref_id": "b1", "title": "What data do we need for training an AV motion planner?", "year": "2021" }, { "authors": "Kashyap Chitta; Aditya Prakash; Bernhard Jaeger; Zehao Yu; Katrin Renz; Andreas Geiger", "journal": "IEEE TPAMI", "ref_id": "b2", "title": "Transfuser: Imitation with transformer-based sensor fusion for autonomous driving", "year": "2022" }, { "authors": "Junru Gu; Chen Sun; Hang Zhao", "journal": "", "ref_id": "b3", "title": "Densetnt: End-to-end trajectory prediction from dense goal sets", "year": "2021" }, { "authors": "Anthony Hu; Zak Murez; Nikhil Mohan; Sofía Dudas; Jeffrey Hawke; Vijay Badrinarayanan; Roberto Cipolla; Alex Kendall", "journal": "", "ref_id": "b4", "title": "FIERY: Future instance segmentation in bird's-eye view from surround monocular cameras", "year": "2021" }, { "authors": "Peiyun Hu; Aaron Huang; John Dolan; David Held; Deva Ramanan", "journal": "", "ref_id": "b5", "title": "Safe local motion planning with selfsupervised freespace forecasting", "year": "2021" }, { "authors": "Shengchao Hu; Li Chen; Penghao Wu; Hongyang Li; Junchi Yan; Dacheng Tao", "journal": "", "ref_id": "b6", "title": "St-p3: End-to-end vision-based autonomous driving via spatial-temporal feature learning", "year": "2022" }, { "authors": "Yihan Hu; Jiazhi Yang; Li Chen; Keyu Li; Chonghao Sima; Xizhou Zhu; Siqi Chai; Senyao Du; Tianwei Lin; Wenhai Wang; Lewei Lu; Xiaosong Jia; Qiang Liu; Jifeng Dai; Yu Qiao; Hongyang Li", "journal": "", "ref_id": "b7", "title": "Planning-oriented autonomous driving", "year": "2023" }, { "authors": "Bo Jiang; Shaoyu Chen; Qing Xu; Bencheng Liao; Jiajie Chen; Helong Zhou; Qian Zhang; Wenyu Liu; Chang Huang; Xinggang Wang", "journal": "", "ref_id": "b8", "title": "Vad: Vectorized scene representation for efficient autonomous driving", "year": "2023" }, { "authors": "Tarasha Khurana; Peiyun Hu; Achal Dave; Jason Ziglar; David Held; Deva Ramanan", "journal": "", "ref_id": "b9", "title": "Differentiable raycasting for self-supervised occupancy forecasting", "year": "2022" }, { "authors": "Zhiqi Li; Wenhai Wang; Hongyang Li; Enze Xie; Chonghao Sima; Tong Lu; Yu Qiao; Jifeng Dai", "journal": "", "ref_id": "b10", "title": "Bevformer: Learning bird's-eye-view representation from multi-camera images via spatiotemporal transformers", "year": "2022" }, { "authors": "Ming Liang; Bin Yang; Wenyuan Zeng; Yun Chen; Rui Hu; Sergio Casas; Raquel Urtasun", "journal": "", "ref_id": "b11", "title": "Pnpnet: End-to-end perception and prediction with tracking in the loop", "year": "2020" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "ICLR", "ref_id": "b12", "title": "Sgdr: Stochastic gradient descent with warm restarts", "year": "2017" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "ICLR", "ref_id": "b13", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Wenjie Luo; Bin Yang; Raquel Urtasun", "journal": "", "ref_id": "b14", "title": "Fast and furious: Real time end-to-end 3d detection, tracking and motion forecasting with a single convolutional net", "year": "2018" }, { "authors": "Abbas Sadat; Sergio Casas; Mengye Ren; Xinyu Wu; Pranaab Dhawan; Raquel Urtasun", "journal": "", "ref_id": "b15", "title": "Perceive, predict, and plan: Safe motion planning through interpretable semantic representations", "year": "2020" }, { "authors": "Kaixin Xiong; Shi Gong; Xiaoqing Ye; Xiao Tan; Ji Wan; Errui Ding; Jingdong Wang; Xiang Bai", "journal": "", "ref_id": "b16", "title": "Cape: Camera view position embedding for multi-view 3d object detection", "year": "2023" }, { "authors": "Wenyuan Zeng; Wenjie Luo; Simon Suo; Abbas Sadat; Bin Yang; Sergio Casas; Raquel Urtasun", "journal": "", "ref_id": "b17", "title": "End-to-end interpretable neural motion planner", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 363.28, 684.87, 181.84, 31.46 ], "formula_id": "formula_0", "formula_text": "L = 1 N f N f i=1 || Vego -V ego || 1(1)" } ]
2023-05-21
[ { "figure_ref": [ "fig_0", "fig_1", "fig_3", "fig_1", "fig_1", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b7", "b16", "b27", "b12", "b26", "b30", "b35", "b36", "b2", "b31", "b5", "b1", "b10", "b6" ], "table_ref": [], "text": "Recent advancements in text-to-image generation [4,8,17,28], particularly diffusion models [13,27,31,36,37], have opened new frontiers in content creation. Subject-driven text-to-image generation permits the personalization to new individuals given a few sample images [3,9,20,25,32], allowing the generation of images featuring specific subjects in novel scenes, styles, and actions. However, current subject-driven text-to-image generation methods suffer from two key limitations: cost of personalization and identity blending for multiple subjects. Personalization is costly because they often need model fine-tuning for each new subject for best fidelity. The computational overhead and high hardware demands introduced by model tuning, largely due to the memory consumption [6] and computations of backpropagation, constrain the applicability of these models across various platforms. Furthermore, existing techniques struggle with multi-subject generation (Figure 1) because of the \"identity blending\" issue (Figure 2 left), in which the model combines the distinct characteristics of different subjects (subject A looks like subject B and vice versa).\nWe propose FastComposer, a tuning-free, personalized multi-subject text-to-image generation method. Our key idea is to replace the generic word tokens, such as \"person\", by an embedding that captures an individual's unique identity in the text conditioning. We use a vision encoder to derive this identity embedding from a referenced image, and then augment the generic text tokens with features from this identity embedding. This enables image generation based on subject-augmented conditioning. Our design allows the generation of images featuring specified subjects with only forward passes and can be further integrated with model compression techniques [2,11,40] to boost deployment efficiency.\nTo tackle the multi-subject identity blending issue, we identify unregulated cross-attention as the primary reason (Figure 4). When the text includes two \"person\" tokens, each token's attention map attends to both person in the image rather than linking each token to a distinct person in the image. To address this, we propose supervising cross-attention maps of subjects with segmentation masks during training (i.e., cross-attention localization), using standard segmentation tools [7]. This supervision explicitly guides the model to map subject features to distinct and non-overlapping regions of the image, thereby facilitating the generation of high-quality multi-subject images (Figure 2 left). We note that segmentation and cross-attention localization is only required during the training phase.\nNaively applying subject-augmented conditioning leads to subject overfitting (Figure 2 right), restricting the user's ability to edit subjects based on textual directives. To address this, we introduce delayed subject conditioning, preserving the subject's identity while following text instructions. It employs text-only conditioning in the early denoising stage to generate the image layout, followed by subject-augmented conditioning in the remaining denoising steps to refine the subject appearance. This simple technique effectively preserves subject identity without sacrificing editability (Figure 5).\nFor the first time, FastComposer enables inference-only generation of multiple-subject images across diverse scenarios (Figure 1). FastComposer achieves 300×-2500× speedup and 2.8×-6.7× memory saving compared to fine-tuning-based methods, requiring zero extra storage for new subjects. FastComposer paves the way for low-cost, personalized, and versatile text-to-image generation." }, { "figure_ref": [ "fig_0" ], "heading": "Related Work", "publication_ref": [ "b31", "b31", "b29", "b9", "b31", "b38", "b23", "b23", "b0", "b32", "b0", "b32" ], "table_ref": [], "text": "Subject-Driven Image Generation aims to render a particular subject unseen at the initial training stage. Given a limited number of example images of the subject, it seeks to synthesize novel renditions in diverse contexts. DreamBooth [32], textual-inversion [9], and custom-diffusion [20] use optimization-based methods to embed subjects into diffusion models. This is achieved by either fine-tuning the model weights [20,32] or inverting the subject image into a text token that encodes the subject identity [9]. Recently, tuning-encoder [30] reduces the total number of fine-tuning steps by first generating an inverted set latent code using a pre-trained encoder and then refines these codes through several finetuning steps to better preserve subject identities. However, all these tuning-based methods [9,10,20,32] require resource-intensive backpropagation, and the hardware must be capable of fine-tuning the model, which is neither feasible on edge devices such as smartphones, nor scalable for cloud-based applications. In contrast, our new FastComposer amortizes the costly subject tuning during the training phase, enabling instantaneous personalization of multiple subjects using simple feedforward methods at test time.\nA number of concurrent works have explored tuning-free methods. X&Fuse [19] concatenates the reference image with the noisy latent for image conditioning. ELITE [39] and InstantBooth [35] use global and local mapping networks to project reference images into word embeddings and inject reference image patch features into cross-attention layers to enhance local details. Despite impressive results for single-object customization, their architecture design restricts their applicability to multiple subject settings, as they rely on global interactions between the generated image and reference input image. UMM-Diffusion [24] shares a similar architecture to ours. However, it faces identity blending challenges when extended to multiple subjects [24]. In comparison, our method supports multi-subject composition via a cross-attention localization supervision mechanism (Sec 4.2).\nMulti-Subject Image Generation. Custom-Diffusion [20] enables multi-concepts composition by jointly fine-tuning the diffusion model for multiple concepts. However, it typically handles concepts with clear semantic distinctions, such as animals and their related accessories or backgrounds. The method encounters challenges when dealing with subjects within similar categories, often generating the same person twice when composing two different individuals (Figure 1). SpaText [1], and Collage Diffusion [33] enable multi-object composition through a layout to image generation process. A user-provided segmentation mask determines the final layouts, which are then transformed into high-resolution images using a diffusion model. Nevertheless, these techniques either compose generic objects without customization [1] or demand the costly textual-inversion process to encode instance-specific details [33]. Furthermore, these techniques require a user-provided segmentation map. In contrast, FastComposer generates personalized, multi-subject images in an inference-only manner and automatically derives plausible layouts from text prompts." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "Stable Diffusion. We use the state-of-the-art StableDiffusion (SD) model as our backbone network. The SD model consists of three components: the variational autoencoder (VAE), U-Net, and a text encoder. The VAE encoder E compresses the image x to a smaller latent representation z, which is subsequently perturbed by Gaussian noise ε in the forward diffusion process. The U-Net, parameterized by θ, denoises the noisy latent representation by predicting the noise. This denoising process can be conditioned on text prompts through the cross-attention mechanism, while the text encoder ψ maps the text prompts P to conditional embeddings ψ(P). During training, the network is optimized to minimize the loss function given by the equation below:\nL noise = E z∼E(x),P,ε∼N (0,1),t ||ε -ε θ (z t , t, ψ(P))|| 2 2 ,(1)\nwhere z t is the latent code at time step t. At inference time, a random noise z T is sampled from N (0, 1) and iteratively denoised by the U-Net to the initial latent representation z 0 . Finally, the VAE decoder D generates the final image by mapping the latent codes back to pixel space x = D(z 0 ).\nText-Conditioning via Cross-Attention Mechanism. In the SD model, the U-Net employs a cross-attention mechanism to denoise the latent code conditioned on text prompts. For simplicity, we use the single-head attention mechanism in our discussion. Let P represent the text prompts and ψ denote the text encoder, which is typically a pre-trained CLIP text encoder. The encoder converts P into a list of d-dimensional embeddings, ψ(P) = c ∈ R n×d . The cross-attention layer accepts the spatial latent code z ∈ R (h×w)×f and the text embeddings c as inputs. It then projects the latent code and text embeddings into Query, Key, and Value matrices:\nQ = W q z, K = W k c, and V = W v c.\nHere, W q ∈ R f ×d , W k , W v ∈ R d×d represent the weight matrices of the three linear layers, and d is the dimension of Query, Key, and Value embeddings. The cross-attention layer then computes the attention scores\nA = Softmax( QK T √ d ) ∈ [0, 1] (h×w)×n\n, and takes a weighted sum over the Value matrix to obtain the cross-attention output z attn = AV ∈ R (h×w)×d . Intuitively, the cross-attention mechanism \"scatters\" textual information to the 2D latent code space, and A[i, j, k] represents the amount of information flow from the k-th text token to the (i, j) latent pixel. Our method is based on this semantic interpretation of the cross-attention map, and we will discuss it in detail in Section 4.2." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "FastComposer", "publication_ref": [ "b25", "b28" ], "table_ref": [], "text": "4.1 Tuning-Free Subject-Driven Image Generation with an Image Encoder Augmenting Text Representation with Subject Embedding. To achieve tuning-free subjectdriven image generation, we propose to augment text prompts with visual features extracted from reference subject images. Given a text prompt P = {w 1 , w 2 , . . . w n }, a list of reference subject images S = {s 1 , s 2 , . . . s m }, and an index list indicating which subject corresponds to which word in the text prompt I = {i 1 , i 2 , . . . i m }, i j ∈ 1, 2, . . . , n, we first encode the text prompt P and reference subjects S into embeddings using the pre-trained CLIP text and image encoders ψ and φ, respectively. Next, we employ a multilayer perceptron (MLP) to augment the text embeddings with visual features extracted from the reference subjects. We concatenate the word embeddings with the visual features and feed the resulting augmented embeddings into the MLP. This process yields the final conditioning embeddings c ∈ R n×d , defined as follows:\nc i = ψ(P) i , i / ∈ I MLP(ψ(P) i ||φ(s j )), i = i j ∈ I(2)\nFigure 3 gives a concrete example of our augmentation approach.\nSubject-Driven Image Generation Training. To enable inference-only subject-driven image generation, we train the image encoder, the MLP module, and the U-Net with the denoising loss (Figure 3). We create a subject-augmented image-text paired dataset to train our model, where noun phrases from image captions are paired with subject segments appearing in the target images. We initially use a dependency parsing model to chunk all noun phrases (e.g., \"a woman\") in image captions and a panoptic segmentation model to segment all subjects present in the image. We then pair these subject segments with corresponding noun phrases in the captions with a greedy matching algorithm based on text and image similarity [26,29]. The process of constructing the subject-augmented image-text dataset is detailed in Sec. 5.1. In the training phase, we employ subject-augmented conditioning, as outlined in Equation 2, to denoise the perturbed target image. We also mask the subjects' backgrounds with random noise before encoding, preventing the overfitting of the subjects' backgrounds. Consequently, FastComposer can directly use natural subject images during inference without explicit background segmentation." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Localizing Cross-Attention Maps with Subject Segmentation Masks", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "We observe that traditional cross-attention maps tend to attend to all subjects at the same time, which leads to identity blending in multi-subject image generation (Figure 4 top). We propose to localize cross-attention maps with subject segmentation masks during training to solve this issue.\nUnderstanding the Identity Blending in Diffusion Models. Prior research [12] shows that the cross-attention mechanism within diffusion models governs the layout of generated images. The scores in cross-attention maps represent \"the amount of information flows from a text token to a latent pixel.\" We hypothesize that identity blending arises from the unrestricted cross-attention mechanism, as a single latent pixel can attend to all text tokens. If one subject's region attends to multiple reference subjects, identity blending will occur. In Figure 4, we confirm our hypothesis by visualizing the average cross-attention map within the U-Net of the diffusion model. The unregularized model often has two reference subject tokens influencing the same generated person at the same time, causing a mix of features from both subjects. We argue that proper cross-attention maps should resemble an instance segmentation of the target image, clearly separating the features related to different subjects.\nTo achieve this, we add a regularization term to the subject cross-attention maps during training to encourage focusing on specific instance areas. Segmentation maps and cross-attention regularization are only used during training, not at test time.\nFigure 5: Effects of using different ratios of timesteps for subject conditioning. A ratio between 0.6 to 0.8 yields good results and achieve a balance between prompt consistency and identity preservation.\nLocalizing Cross-Attention with Segmentation Masks. As discussed in Section 3, a crossattention map A ∈ [0, 1] (h×w)×n connects latent pixels to conditional embeddings at each layer, where A[i, j, k] denotes the information flow from the k-th conditional token to the (i, j) latent pixel. Ideally, the subject token's attention map should focus solely on the subject region rather than spreading throughout the entire image, preventing identity blending among subjects. To accomplish this, we propose localizing the cross-attention map using the reference subject's segmentation mask. Let M = {M 1 , M 2 , . . . M m } represent the reference segmentation masks, I = {i 1 , i 2 , . . . i m } be the index list indicating which subject corresponds to each word in the text prompt, and A i = A[:, :, i] ∈ [0, 1] (h×w) be the cross-attention map of the i-th subject token. We supervise the cross-attention map A ij to be close to the segmentation mask m j of the j-th subject token, i.e., A ij ≈ m j . We employ a balanced L1 loss to minimize the distance between the cross-attention map and the segmentation mask:\nL loc = 1 m m j=1 (mean(A ij [ mj ]) -mean(A ij [m j ])).(3)\nThe final training objective of FastComposer is given by:\nL = L noise + λL loc ,(4)\nusing a localization loss ratio controlled by hyperparameter λ = 0.001. Motivated by [5,12], we apply the localization loss to the downsampled cross-attention maps, i.e., the middle 5 blocks of the U-Net, which are known to contain more semantic information. As illustrated in Figure 4, our localization technique enables the model to precisely allocate attention to reference subjects at test time, which prevents identity blending between subjects." }, { "figure_ref": [], "heading": "Delayed Subject Conditioning in Iterative Denoising", "publication_ref": [ "b9", "b29", "b17", "b17", "b20", "b6", "b4", "b15", "b28", "b30", "b21", "b3" ], "table_ref": [], "text": "During inference, using the augmented text representation directly often leads to images that closely resemble the subjects while ignoring the textual directives. This occurs because the image layout forms at the early phases of the denoising process, and premature augmentation from the reference image causes the resulting image to stray from the text instructions. Prior methods [10,30] mitigate this issue by generating an initial latent code and refining it through iterative model finetuning. However, this process is resource-intensive and needs high-end devices for model fine-tuning. Inspired by Style Mixing [18], we propose a simple delayed subject conditioning, which allows for inferenceonly subject conditioning while striking a balance between identity preservation and editability.\nSpecifically, we perform image augmentation only after the layout has been created using a text-only prompt. In this framework, our time-dependent noise prediction model can be represented as:\nt = θ (z t , t, c) if t > αT, θ (z t , t, c ) otherwise (5)\nHere, c denotes the original text embedding and c denotes text embedding augmented with the input image embedding. α is a hyperparameter indicating the ratio of subject conditioning. We ablate the effect of using different α in Figure 5. Empirically, α ∈ [0.6, 0.8] yields good results that balance prompt consistency and identity preservation, though it can be easily tuned for specific instances. Dataset Construction. We built a subject-augmented image-text paired dataset based on the FFHQ-wild [18] dataset to train our models. First, we use the BLIP-2 model [21] blip2-opt-6.7b-coco to generate captions for all images. Next, we employ the Mask2Former model [7] mask2former-swin-large-coco-panoptic to generate panoptic segmentation masks for each image. We then leverage the spaCy [15] library to chunk all noun phrases in the image captions and expand numbered plural phrases (e.g., \"two women\") into singular phrases connected by \"and\" (e.g., \"a woman and a woman\"). Finally, we use a greedy matching algorithm to match noun phrases with image segments. We do this by considering the product of the image-text similarity score by the OpenCLIP model [16] CLIP-ViT-H-14-laion2B-s32B-b79K and the label-text similarity score by the Sentence-Transformer [29] model stsb-mpnet-base-v2. We reserve 1000 images for validation and testing purposes. Training Details. We start training from the StableDiffusion v1-5 [31] model. To encode the visual inputs, we use OpenAI's clip-vit-large-patch14 vision model, which serves as the partner model of the text encoder in SDv1-5. During training, we freeze the text encoder and only train the U-Net, the MLP module, and the last two transformer blocks of the vision encoder. We train our models for 150k steps on 8 NVIDIA A6000 GPUs, with a constant learning rate of 1e-5 and a batch size of 128. We only augment segments whose COCO [22] label is \"person\" and set a maximum of 4 reference subjects during training, with each subject having a 10% chance of being dropped. We train the model solely on text conditioning with 10% of the samples to maintain the model's capability for text-only generation. To facilitate classifier-free guidance sampling [14], we train the model without any conditions on 10% of the instances. During training, we apply the loss only in the subject region to half of the training samples to enhance the generation quality in the subject area." }, { "figure_ref": [], "heading": "Evaluation Metric", "publication_ref": [], "table_ref": [], "text": "We evaluate image generation quality on identity preservation and prompt consistency. Identity preservation is determined by detecting faces in the reference and generated images using MTCNN [41], and then calculating a pairwise identity similarity using FaceNet [34].\nFor multi-subject evaluation, we identify all faces within the generated images and use a greedy matching procedure between the generated faces and reference subjects. The minimum similarity value among all subjects measures overall identity preservation. We evaluate the prompt consistency using the average CLIP-L/14 image-text similarity following textual-inversion [9]. For efficiency evaluation, we consider the total time for customization, including fine-tuning (for tuning-based methods) and inference. We also measure peak memory usage during the entire procedure." }, { "figure_ref": [ "fig_4" ], "heading": "Single-Subject Image Generation", "publication_ref": [ "b31", "b37", "b22" ], "table_ref": [ "tab_0" ], "text": "Our first evaluation targets the performance of single-subject image generation. Given the lack of published baselines in our tuning-free environment, we compare with leading optimization-based approaches, including DreamBooth [32], Textual-Inversion [9], and Custom Diffusion [20]. We use the implementations from diffusers library [38]. We provide the detailed hyperparameters in the appendix section. We assess the capabilities of these different methods in generating personalized content for subjects derived from the Celeb-A dataset [23]. To construct our evaluation benchmark, we develop a broad range of text prompts encapsulating a wide spectrum of scenarios, such as recontextualization, stylization, accessorization, and diverse actions. The entire test set comprises 15 subjects, with 30 unique text prompts allocated to each. An exhaustive list of text prompts is available in the appendix. We utilized five images per subject to fine-tune the optimization-based methods, given our observation that these methods overfit and simply reproduce the reference image when a single reference image is used. In contrast, our model employs a single randomly selected image for each subject. Shown in Table 1, FastComposer surpasses all baselines, delivering superior identity preservation and prompt consistency. Remarkably, it achieves 300×-1200× speedup and 2.8× reduction in memory usage. Figure 6 shows the qualitative results of single-subject personalization comparisons, employing different approaches across an array of prompts. Significantly, our model matches the text consistency of text-only methods and exceeds all baseline strategies in terms of identity preservation, with only single input and forward passes used." }, { "figure_ref": [], "heading": "Multi-Subject Image Generation", "publication_ref": [], "table_ref": [], "text": "We then consider a more complex setting: multi-object, subject-driven image generation. We examine the quality of multi-subject generation by using all possible combinations (105 pairs in total) formed from 15 subjects described in Sec. 5.2, allocating 21 prompts to each pair for assessment. " }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Delayed Subject Conditioning. Figure 5 shows the impact of varying the ratio of timesteps devoted to subject conditioning, a hyperparameter in our delayed subject conditioning approach. As this ratio increases, the model improves in identity preservation but loses editability. A ratio between 0.6 to 0.8 achieves a favorable balance on the tradeoff curve.\nCross-Attention Localization Loss. Table 3 presents the ablation studies on our proposed crossattention localization loss. The baseline is trained in the same setting but excludes the localization loss. Our method demonstrates a substantial enhancement of the identity preservation score. Figure 4 shows the qualitative comparisons. Incorporating the localization loss allows the model to focus on particular reference subjects, thereby avoiding identity blending." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [ "b17" ], "table_ref": [], "text": "We propose FastComposer, a tuning-free method for personalized, multi-subject text-to-image generation. We achieve tuning-free subject-driven image generation by using a pre-trained vision encoder, making this process efficient and accessible across various platforms. FastComposer effectively tackles the identity blending issue in multi-subject generation by supervising crossattention maps with segmentation masks during training. We also propose a novel delayed subject conditioning technique to balance the identity preservation and the flexibility of image editability.\nLimitations. First, the current training set is FFHQ [18] which is small and primarily contains headshots of human faces. It also has a long-tailed distribution for the number of people, thus limiting our ability to generate images with more than three subjects. Utilizing a more diverse dataset will enable FastComposer to generate a broader range of actions and scenarios, thereby enhancing its versatility and applicability. Second, our work is primarily human-centric due to a scarcity of large-scale, multi-subject datasets featuring other subjects like animals. We believe that broadening our dataset to incorporate multi-subject imagery of other categories will significantly enrich our model's capabilities. Finally, our model, built on the foundation of Stable Diffusion and FFHQ, also inherits their biases." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is supported by MIT AI Hardware Program, NVIDIA Academic Partnership Award, MIT-IBM Watson AI Lab, Amazon and MIT Science Hub, Microsoft Turing Academic Program, Singapore DSTA under DST00OECI20300823 (New Representations for Vision), NSF grant 2105819 and NSF CAREER Award 1943349." } ]
Diffusion models excel at text-to-image generation, especially in subject-driven generation for personalized images. However, existing methods are inefficient due to the subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation as they often blend identity among subjects. We present FastComposer which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity blending problem in the multisubject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects localized to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting. FastComposer proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves 300×-2500× speedup compared to fine-tuning-based methods and requires zero extra storage for new subjects. Fast-Composer paves the way for efficient, personalized, and high-quality multi-subject image creation. Code, model, and datasets will be released for reproduction.
FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention
[ { "figure_caption": "Figure 1 :1Figure 1: Comparison with baselines for multi-subject image generation. We use scientists' names in the text prompt for text-only methods (SD, MJ). Text-only methods only perform well when subjects are present in the training dataset but struggle to maintain the identity otherwise. Fine-tuning-based methods blend the identity of different persons (TI rows 1 and 2, CD rows 1, 2, 4), deviate from the text instruction and only generate a single subject (TI row 4), or generate images that do not resemble any specific reference (CD row 3).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Two challenges faced by existing subject-driven image generation methods. Firstly, current methods blend the distinct characteristics of different subjects (identity blending), shown by the right figure where Newton resembles Einstein. Cross-attention localization (Sec 4.2) solves this problem. Secondly, they suffer from subject overfitting, where they overfit the input image and ignore the text instruction. Delayed subject conditioning (Sec 4.3) addresses this issue.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Training and inference pipeline of FastComposer. Given a text description and images of multiple subjects, FastComposer uses an image encoder to extract the features of the subjects and augments the corresponding text tokens. The diffusion model is trained to generate multi-subject images with augmented conditioning. We use cross-attention localization (Sec. 4.2) to boost multi-subject generation quality, and delayed subject conditioning to avoid subject overfitting (Sec. 4.3).", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: In the absence of cross-attention regularization (top), the diffusion model attends to multiple subjects' input tokens and merge their identity. By applying cross-attention regularization (bottom), the diffusion model learns to focus on only one reference token while generating a subject. This ensures that the features of multiple subjects in the generated image are more separated.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison of different methods on single subject image generation. For text-only methods (i.e., StableDiffusion and Midjourney), we use scientists' names in the text prompt.5 Experiments", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "shows a quantitative analysis contrasting FastComposer with the baseline methods. Optimizationbased methods[9, 20,32] frequently falter in maintaining identity preservation, often generating generic images or images that blend identities among different reference subjects. FastComposer, on the other hand, preserves the unique features of different subjects, yielding a significantly improved identity preservation score. Furthermore, our prompt consistency is on par with tuning-based approaches[9, 20]. Qualitative comparisons are shown in Figure1. More visual examples for three-subject images are shown in Figure7.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Generating images with three subjects.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Comparison between our method and baseline approaches on single-subject image generation. StableDiffusion served as the text-only baseline without any subject conditioning.MethodImages ↓ Identity Preservation ↑ Prompt Consistency ↑ Total Time ↓ Peak Memory ↓", "figure_data": "StableDiffusion03.85%26.79%2s6 GBTextual-Inversion529.26%21.91%2500 s17 GBDreamBooth527.27%23.91%1084 s40 GBCustom Diffusion543.37%23.29%789 s29 GBFastComposer151.41%24.30%2 s6 GB", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between our method and baseline approaches on multiple-subject Image generation. StableDiffusion served as the text-only baseline without any subject conditioning.MethodImages ↓ Identity Preservation ↑ Prompt Consistency ↑ Total Time ↓ Peak Memory ↓", "figure_data": "StableDiffusion01.88%28.44%2 s6 GBTextual-Inversion513.52%21.08%4998 s17 GBCustom Diffusion55.37%25.84%789 s29 GBFastComposer143.11%24.25%2 s6 GB", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation studies on the cross-attention localization supervision. We compare with the model trained in the same setting without cross-attention localization.", "figure_data": "MethodIdentity Pres. ↑ Prompt Cons. ↑w/o Loc.37.66%25.03%w/ Loc. (Ours)43.11%24.25%", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Guangxuan Xiao; Tianwei Yin; William T Freeman; Frédo Durand; Song Han
[ { "authors": "Omri Avrahami; Thomas Hayes; Oran Gafni; Sonal Gupta; Yaniv Taigman; Devi Parikh; Dani Lischinski; Ohad Fried; Xi Yin", "journal": "CVPR", "ref_id": "b0", "title": "Spatext: Spatio-textual representation for controllable image generation", "year": "2023" }, { "authors": "Daniel Bolya; Cheng-Yang Fu; Xiaoliang Dai; Peizhao Zhang; Christoph Feichtenhofer; Judy Hoffman", "journal": "", "ref_id": "b1", "title": "Token merging: Your ViT but faster", "year": "2023" }, { "authors": "Arantxa Casanova; Marlene Careil; Jakob Verbeek; Michal Drozdzal; Adriana Romero Soriano", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b2", "title": "Instance-conditioned gan", "year": "2021" }, { "authors": "Huiwen Chang; Han Zhang; Jarred Barber; Jose Maschinot; Lu Lezama; Ming-Hsuan Jiang; Kevin Yang; Murphy; Michael William T Freeman; Rubinstein", "journal": "", "ref_id": "b3", "title": "Muse: Text-to-image generation via masked generative transformers", "year": "2023" }, { "authors": "Hila Chefer; Yuval Alaluf; Yael Vinker; Lior Wolf; Daniel Cohen-Or", "journal": "Siggraph", "ref_id": "b4", "title": "Attend-and-excite: Attentionbased semantic guidance for text-to-image diffusion models", "year": "2023" }, { "authors": "Tianqi Chen; Bing Xu; Chiyuan Zhang; Carlos Guestrin", "journal": "", "ref_id": "b5", "title": "Training deep nets with sublinear memory cost", "year": "2016" }, { "authors": "Bowen Cheng; Ishan Misra; Alexander G Schwing; Alexander Kirillov; Rohit Girdhar", "journal": "", "ref_id": "b6", "title": "Maskedattention mask transformer for universal image segmentation", "year": "2022" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "Cogview: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "ICLR", "ref_id": "b8", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": "Rinon Gal; Moab Arar; Yuval Atzmon; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "Siggraph", "ref_id": "b9", "title": "Designing an encoder for fast personalization of text-to-image models", "year": "2023" }, { "authors": "Song Han; Huizi Mao; William J Dally", "journal": "ICLR", "ref_id": "b10", "title": "Deep compression: Compressing deep neural networks with pruning, trained quantization and huffman coding", "year": "2016" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "ICLR", "ref_id": "b11", "title": "Prompt-toprompt image editing with cross attention control", "year": "2023" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b13", "title": "Classifier-free diffusion guidance", "year": "2022" }, { "authors": "Matthew Honnibal; Ines Montani", "journal": "", "ref_id": "b14", "title": "spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing", "year": "2017" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt", "journal": "", "ref_id": "b15", "title": "Openclip", "year": "2021-07" }, { "authors": "Minguk Kang; Jun-Yan Zhu; Richard Zhang; Jaesik Park; Eli Shechtman; Sylvain Paris; Taesung Park", "journal": "CVPR", "ref_id": "b16", "title": "Scaling up gans for text-to-image synthesis", "year": "2023" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b17", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Yuval Kirstain; Omer Levy; Adam Polyak", "journal": "", "ref_id": "b18", "title": "X&fuse: Fusing visual information in text-to-image generation", "year": "2023" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "CVPR", "ref_id": "b19", "title": "Multi-concept customization of text-to-image diffusion", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b20", "title": "Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; Lubomir Bourdev; Ross Girshick; James Hays; Pietro Perona; Deva Ramanan; C Lawrence Zitnick; Piotr Dollár", "journal": "", "ref_id": "b21", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Ziwei Liu; Ping Luo; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b22", "title": "Deep learning face attributes in the wild", "year": "2015" }, { "authors": "Yiyang Ma; Huan Yang; Wenjing Wang; Jianlong Fu; Jiaying Liu", "journal": "", "ref_id": "b23", "title": "Unified multi-modal latent diffusion for joint subject and text conditional image generation", "year": "2023" }, { "authors": "Yotam Nitzan; Kfir Aberman; Qiurui He; Orly Liba; Michal Yarom; Yossi Gandelsman; Inbar Mosseri; Yael Pritch; Daniel Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b24", "title": "Mystyle: A personalized generative prior", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b25", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b26", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "PMLR", "ref_id": "b27", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "EMNLP", "ref_id": "b28", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Daniel Roich; Ron Mokady; H Amit; Daniel Bermano; Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b29", "title": "Pivotal tuning for latent-based editing of real images", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b30", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "CVPR", "ref_id": "b31", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Vishnu Sarukkai; Linden Li; Arden Ma; Christopher Ré; Kayvon Fatahalian", "journal": "", "ref_id": "b32", "title": "Collage diffusion", "year": "2023" }, { "authors": "Florian Schroff; Dmitry Kalenichenko; James Philbin", "journal": "", "ref_id": "b33", "title": "Facenet: A unified embedding for face recognition and clustering", "year": "2015" }, { "authors": "Jing Shi; Wei Xiong; Zhe Lin; Hyun Joon; Jung ", "journal": "", "ref_id": "b34", "title": "Instantbooth: Personalized text-to-image generation without test-time finetuning", "year": "" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b35", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "ICLR", "ref_id": "b36", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "Suraj Patrick Von Platen; Anton Patil; Pedro Lozhkov; Nathan Cuenca; Kashif Lambert; Mishig Rasul; Thomas Davaadorj; Wolf", "journal": "", "ref_id": "b37", "title": "Diffusers: State-of-the-art diffusion models", "year": "2022" }, { "authors": "Yuxiang Wei; Yabo Zhang; Zhilong Ji; Jinfeng Bai; Lei Zhang; Wangmeng Zuo", "journal": "", "ref_id": "b38", "title": "Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation", "year": "2023" }, { "authors": "Guangxuan Xiao; Ji Lin; Mickael Seznec; Julien Demouth; Song Han", "journal": "", "ref_id": "b39", "title": "Smoothquant: Accurate and efficient post-training quantization for large language models", "year": "2022" }, { "authors": "Kaipeng Zhang; Zhanpeng Zhang; Zhifeng Li; Yu Qiao", "journal": "IEEE signal processing letters", "ref_id": "b40", "title": "Joint face detection and alignment using multitask cascaded convolutional networks", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 193.26, 322.3, 310.74, 12.69 ], "formula_id": "formula_0", "formula_text": "L noise = E z∼E(x),P,ε∼N (0,1),t ||ε -ε θ (z t , t, ψ(P))|| 2 2 ,(1)" }, { "formula_coordinates": [ 4, 348.36, 458.55, 157.38, 10.53 ], "formula_id": "formula_1", "formula_text": "Q = W q z, K = W k c, and V = W v c." }, { "formula_coordinates": [ 4, 187.13, 493.54, 149.96, 16.94 ], "formula_id": "formula_2", "formula_text": "A = Softmax( QK T √ d ) ∈ [0, 1] (h×w)×n" }, { "formula_coordinates": [ 5, 222.51, 294.15, 281.49, 22.74 ], "formula_id": "formula_3", "formula_text": "c i = ψ(P) i , i / ∈ I MLP(ψ(P) i ||φ(s j )), i = i j ∈ I(2)" }, { "formula_coordinates": [ 6, 204.15, 367.07, 299.85, 30.32 ], "formula_id": "formula_4", "formula_text": "L loc = 1 m m j=1 (mean(A ij [ mj ]) -mean(A ij [m j ])).(3)" }, { "formula_coordinates": [ 6, 266.55, 422.22, 237.45, 9.81 ], "formula_id": "formula_5", "formula_text": "L = L noise + λL loc ,(4)" }, { "formula_coordinates": [ 6, 247.67, 648.98, 256.33, 22.74 ], "formula_id": "formula_6", "formula_text": "t = θ (z t , t, c) if t > αT, θ (z t , t, c ) otherwise (5)" } ]
[ { "figure_ref": [ "fig_0" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b2", "b3", "b6", "b7", "b8", "b11", "b12", "b13", "b4", "b6", "b21", "b22", "b23", "b24", "b25", "b25", "b27", "b6", "b32", "b7" ], "table_ref": [], "text": "F EDERATED learning (FL) [1], [2] enables different clients (e.g., companies and mobile devices) to jointly train a machine learning model since the data is usually dispersed among different clients in practice. Furthermore, no client is allowed to share its local data with any other client or the centralized server. However, a model trained with FL often fails to generalize to new clients (domains) due to the problem of domain shift [2]. For example, one client may contain pictures of mostly simulation environments, while another is mostly real environments. The phenomenon of domain shift has been thoroughly summarized in the survey of transfer learning [3], [4]. In practice, federated domain adaptation (FDA) has become the main branch of FL, aiming to transfer knowledge from the decentralized clients to a different but related client (multi-source-single-target) [5], [6], or from one client to decentralized clients (single-source-multi-target) [7]. FDA has gained wide attention in fields ranging from healthcare [8], [9], recommendation systems [10], Internet of Things [11] to robotics [12], due to the increasing data protection regulations and privacy concerns.\nThe federated setting has some additional challenges [5], in particular, the H-divergence [13] can not be minimized due to privacy constraints. As a result, existing domain adaptation techniques [14]- [17] can not be applied in FDA. Some works [5], [6], [18] attempt to adapt knowledge without accessing the source data, however, they fail to achieve high performance. Due to the heterogeneity of local data distribution across source domains, how to leverage the data-privacy source models and unlabeled target data becomes a main challenge. At least two problems should be considered in order to handle this challenge in FDA. Firstly, how to extract transferable features to adapt knowledge across heterogeneous domains? Secondly, how to align the conditional distributions by learning from the source models without accessing their local data?\nVision Transformer (ViT) can extract more adaptable and robust features compared to traditional deep neural networks (DNNs) such as convolutional neural networks (CNNs) [19]- [21]. However, ViT-based methods face several challenges [22]. For example, they heavily rely on large-scale training data. Thus, it is more difficult to bridge the large domain gap in a federated setting, due to the diversity of heterogeneous data. To solve this problem, domain augmentation is necessary to consider the complementarity among domains [23]. According to [24], manipulating the hidden layers of DNNs can obtain better feature representations. Consequently, utilizing the latent architecture of ViT may augment data at domain-level and generate transferable features to bridge the domain discrepancy in FDA.\nIn recent years, contrastive learning has become a popular discriminative method based on embedding the augmented data and it has shown promising results on downstream tasks such as classification [25], [25], [26]. Since a prototype can represent a group of semantically similar samples [27], the prototypes of each source domain can be generated based on the source models without accessing the local data across domains based on contrastive learning or prototypes, these settings are relatively simpler than the setting under ViT and FL, since ViT is more data data-hungry than CNNs and the communication efficiency should be considered in the federated setting.\nIn this paper, we propose a model-aware contrastive approach (FDAC) to address Federated Domain Adaptation based on Contrastive learning and Vision Transformer. In particular, FDAC considers the multi-source-single-target FDA setting [6], [32], which is more popular than the singlesource-multi-target scenario [7]. The general idea of FDAC is illustrated in Fig. 1, where domain augmentation and semantic matching are two key components to adapt knowledge from different models. In summary, the main contributions of FDAC are presented as follows:\n1) We utilize the hidden architecture of ViT to further explore the feature transferability among heterogeneous domains. To the best of our knowledge, this method is the first attempt to investigate transferable representations by manipulating the latent architecture of ViT under the federated setting. 2) We propose a novel framework integrating domain augmentation and semantic matching to adapt knowledge from all the source models. Moreover, this framework can increase data diversity, align class-conditional distributions across domains and avoid catastrophic forgetting. 3) We have performed extensive experiments on several real datasets to demonstrate the effectiveness of our proposed method FDAC. The comparative results indicate that FDAC consistently outperforms the state-of-the-art FDA approaches in most conditions. Moreover, FDAC can better improve communication efficiency which is also a key factor in FL. The rest of this paper is organized as follows. Section II provides an overview of the related work. Section III describes the proposed FDAC framework in detail. Experimental results are reported and discussed in Section IV. Conclusions are presented in Section V." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b11", "b33", "b32", "b34", "b2", "b32", "b6", "b6", "b35", "b7" ], "table_ref": [], "text": "In this section, we review important research related to this work, including: (1) federated domain adaptation; (2) contrastive learning; and (3) Vision Transformer.\nA. Federated Domain Adaptation [11] is the first work to propose the concept of federated learning (FL), which aims to bring collaborative machine learning opportunities for large-scale distributed clients with data privacy and performance guarantees. Then, many works attempt to extend the implementation mechanism [33] or discuss it in real applications such as fairness [], robustness [] and FDA [32]. The development of federated learning systems is believed to be an exciting research direction which needs the effort from system, data privacy and machine learning communities [34]. In order to encourage the clients from actively and sustainably participating in the collaborative learning process, the research of ensuring fairness in FL is attracting a lot of interest []. Since FL systems are vulnerable to both model and data poisoning attacks, [] provides a broad overview of existing attacks and defenses on FL. Based on the distribution characteristics of the data, federated learning can be classified into vertically federated learning, horizontally federated learning and federated transfer learning [2].\nFADA [32] extends unsupervised adversarial knowledge transfer to the constraints of federated learning, however, the communication cost of FADA is huge which will cause privacy leakage. KD3A [6] is robust to communication rounds based on knowledge distillation and vote-based pseudo labels. Similar to KD3A [6], pseudo labeling is also used in SHOT [18] which only needs well-trained source models. Adversarial training is often used in centralized learning to mitigate bias, since the heterogeneous data may yield unfair and biased models. [35] considers adversarial training in the federated setting, and it can output a debiased and accurate model. Different from these FDA scenarios that can be categorized as multisource-single-target, [7] handles the single-source-multi-target scenario. The key challenge of FDA is to adapt knowledge from heterogeneous models, while obeying regulations and policies to protect privacy." }, { "figure_ref": [], "heading": "B. Contrastive Learning", "publication_ref": [ "b36", "b37", "b39", "b29" ], "table_ref": [], "text": "Contrastive learning has become the most popular style of self-supervised learning in fields such as computer vision and natural language processing, since it can avoid the cost of annotating large-scale datasets [36]. Different from generative methods, contrastive learning is a discriminative approach that aims to embed the augmented versions of the positive samples close to each other while trying to push away embeddings from negative samples. In this way, generative and contrastive approaches can be integrated to utilize the unlabeled samples to learn the underlying representations [31], [37].\n[38] applies a hard pair mining strategy to enhance contrastive fine-tuning since the hard pairs are more informative and challenging. Several works attempt to apply existing selfsupervision techniques to ViT. [39] investigates the effects of training self-supervised ViT and finds that instability is a major issue. [40] finds that self-supervised pretraining in a standard ViT model achieves similar or better performance compared to the best CNNs specifically designed for the same setting. Different from the above methods, [41] can utilize the architectural advantages of ViT and learn patchlevel representation. Since the instance invariance assumption can be easily generalized to domain adaptation tasks, [29] finds that contrastive learning is intrinsically a suitable candidate for domain adaptation, where both transferability and discriminability are guaranteed. However, as far as we are concerned, very few works attempt to address FDA by simultaneously considering all the domains based on contrastive learning." }, { "figure_ref": [], "heading": "C. Vision Transformer", "publication_ref": [ "b29", "b29" ], "table_ref": [], "text": "Self-attention mechanism is base component in Vision Transformer (ViT). ViT has fewer parameters and the training Although ViT has been successfully applied in tasks such as video processing and computer vision, the configurable architecture of ViT has not yet been fully explored, which might bring fine-grained model adaptation, especially in FDA where the source data can not be accessed directly.\nThe works most related to our proposed FDAC framework are transferable contrastive learning approaches proposed in [29], [31]. However, these works differ from FDAC in two aspects. Firstly, there backbones are both CNNs while the backbone of FDAC is ViT. Furthermore, FDAC manipulates the latent architecture of its backbone to align the data distributions in a fine-grained manner. Secondly, different from [29], the augmented data of FDAC is the original data of each source domain. Different from [31], each local source domain model of FDAC is trained only on its own data." }, { "figure_ref": [], "heading": "III. THE PROPOSED FDAC FRAMEWORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Notations and Problem Statement", "publication_ref": [], "table_ref": [], "text": "We use {D S k } K k=1 and D T to denote the K decentralized source domains and target domain, respectively.\nD S k contains N k labeled samples, i.e., D S k = {(x k i , y k i )} N k i=1 (1 k K). D T has N T unlabeled samples, i.e., D T = {(x T i )} N T i=1\n. Under our FDAC setting, the marginal data distributions of any source and target domains are different (i.e., P S k (x) = P T (x), P S k (x) = P Sj (x)) while their conditional distributions are the same (i.e., P S k (y|x) = P T (y|x)) (1 k, j K). Each source domain can train a local model based on its own data and the model parameters can be communicated among domains.\nThe goal of FDA is to learn a classifier for D T under the privacy restrictions. To achieve that goal, there exists the following challenges:\n1) It is challenging to increase the diversity of the target data without accessing the local data of each source domain.\n2) Since each category information can not be described in detail, it is challenging to align the conditional data distributions across different domains." }, { "figure_ref": [ "fig_2" ], "heading": "B. Overall Framework", "publication_ref": [ "b29", "b24" ], "table_ref": [], "text": "To achieve these challenges, we propose the FDAC method. The framework of FDAC is displayed in Fig 2, which aims to transfer knowledge from the different source models to the target model while the communication efficiency is also guaranteed. The implementation of FDAC is based on domain augmentation and semantic matching, corresponding to domain-level and category-level contrastive learning, respectively. Different from traditional transferable features learning [29], [31], we utilize the configurable architecture of ViT to perform contrastive learning based on domain augmentation, since the latent manipulation of DNNs can improve feature representations [24]. Moreover, this kind of domain augmentation can increase the data diversity of the target domain by complementing from each source domain. On the other hand, in order to exploit the class similarities to make knowledge transfer from source data to similar target categories, we extract domain-invariant features based on semantic matching. Since no source data is available to train the target model, we first generate prototypes for the source domains and then learn discriminative information based on those prototypes. Thus, these two components are also able to avoid catastrophic forgetting when knowledge is leveraged to adapt from different sources to the target domain." }, { "figure_ref": [ "fig_2" ], "heading": "C. Model-Contrastive Domain Augmentation", "publication_ref": [ "b23", "b47" ], "table_ref": [], "text": "The statistical learning theory [46] suggests that the model capacity and the diversity of the training data can characterize the generalization of a machine learning model. Inspired by [23], increasing the data diversity of multiply domains can enhance the generalization of representations. Due to the heterogeneity of local data in the federated setting, transferable feature representations are critical to enabling source models to make similar predictions based on semantically identical data. Motivated by this idea, we expand the diversity of target samples by augmenting data at domain-level. We observe that the target domain contains distinct knowledge but lacks domain knowledge of other source domains. Our insight is to conduct domain augmentation on domain-level to increase the diversity of target data based on all the source domains. Moreover, the target domain is compensated with missing knowledge of classes and features from each source domain.\nThe Backbone of ViT. The backbone of ViT is, in essence, one kind of DNNs. Thus, the extracted features of the first blocks are relatively transferable, compared to the output features of the later blocks which are relatively discriminative. ViT can be used beyond a feature extractor since each block is independent and the output feature of any block can be fetched. Usually, an input sample of the ViT backbone is first divided into 196 patches with the fixed size 16 * 16 [47]. The encoding layer converts the input patches into patch tokens, and then the positional embeddings are added to them. The input to the Transformer is the encoded patch tokens plus a classification token, denoted by B 0 . The Transformer encoder consists of L layers of Multi-head Self-Attention (MSA) and Multi-layer Perceptron (MLP) blocks. Then, the output of the l-th (1 l L) layer can be written as:\nBl = MSA LN B l-1 + B l-1 ,(1)\nB l = MLP LN Bl + Bl ,(2)\nwhere LN(•) represents the layer normalization operator. Domain Augmentation. Inspired by the configurable architecture of ViT, we design a transferable contrastive learning module in FDAC, based on domain-level data augmentation. The detail of this module is further illustrated in Fig. 2.a in detail. Given any target sample x i , we can easily get the output of each domain, i.e., Bl i(k) for the k-th source domain and B l i(T ) for the target domain of the l-th layer, respectively. Our goal is to minimize the data discrepancy between B l i(T ) and Bl i(k) from the same sample relative to that discrepancy from different samples. Assuming that the features are 2 -normalized, the domain-augmented contrastive loss is computed by:\nL DA = - 1 N T N T i=1 K k=1 log e B l i(T ) Bl i(k) /τ B l k ∼Ai e (B l i B l k /τ ) , (3\n)\nwhere τ is a temperature hyper-parameter and A i denotes the negative pairs representing that the input target sample is not Compute the loss L DA according to Eq. (3).\nx i ." }, { "figure_ref": [], "heading": "7:", "publication_ref": [], "table_ref": [], "text": "# Get prototypes from source domains:\n8:\nP ← Prototype Generation. 9:\n# Semantic Matching: 10:\nCompute the loss L SM according to Eq. ( 6).\n11:\nTrain M T with Eq. ( 7).\n12:\n// Stage 3: Model Aggregation:\n13:\nM T ← K k=1 M S k , M T 14:\nReturn M T . 15: end while" }, { "figure_ref": [ "fig_2" ], "heading": "D. Model-Contrastive Semantic Matching", "publication_ref": [ "b6", "b11" ], "table_ref": [], "text": "In federated learning, the source data is kept locally and the target data is unlabeled. Thus, it is extremely necessary for the target model to learn from all the locally-trained source models. In FDAC, we propose the category-level contrastive learning module as illustrated in Fig. 2.b. This module can align the data distributions through two steps. Firstly, it generates prototypes for each category of all the source domains. Secondly, it utilizes contrastive learning to minimize the distances of target samples to the source prototypes with the same classes relative to those with different categories. Moreover, pseudo labels are used in the second step since the target samples are unlabeled.\nPrototype Generation. By exploring the supervised semantic information of multiple heterogeneous domains, we seek to generate domain-invariant prototypes for each category in each source domain. Inspired by [48], the direction of a prototype should be representative of the features belonging to the corresponding category. Assume that each model M consists of a feature extractor F which is actually the backbone of ViT, and a classifier C. We perform 2 -normalization on F and then use it as the input of C which consists of weight vectors\nP = [p 1 , p 2 , • • • , p C ],\nwhere C represents the number of categories. C takes F (x)\n||F (x)||2 as input and it outputs the probability\nC(x) = σ PF (x) ||F (x)||2\n, where σ is the softmax function. In sum, the prototype generation of source domain k (1 k K) is defined as:\nL S k (M S k ; D S k ) = - E (x,y)∼D S k q log M(x), (4\n)\nwhere q is the one-hot encoding of the label. Then, we can use P to provide semantic guidance for the target model. Cross-domain Semantic Matching. The true labels of the target domain are unavailable, thus, we first use pseudo labeling presented in [6] to produce high-quality pseudo labels ỹT . We also use the generated pseudo labels to reduce the feature distribution gap by:\nL T (M T ; D T ) = - E (x, y T )∼D T q log M(x),(5)\nwhere M T represents the model of D T and q is the one-hot encoding of ỹT . For a target sample x, we use an additional two-layer MLP G to obtain 2 -normalized contrastive features\nz i = G(x) ||G(x)||2\n, since a nonlinear projection can improve the performance of contrastive learning. Then, we use the supervised contrastive loss for adaptation. For a given target sample x, we take the prototypes with the same category as positive pairs A p and those with different classes as negative pairs A n , according to the pseudo label of x. The cross-domain semantic matching loss L SM is defined as:\nL SM = - 1 N T N T i=1 1 |A p | pj ∼Ap log e (z i pj ) p k ∼An e (z i p k ) .(6)\nBoth Eq. ( 3) and Eq. ( 6) indicate that they can also avoid catastrophic forgetting when knowledge is contrastively transferred from multiply source models to the target model. In sum, the optimization problem for our FDAC approach is defined as:\nmin M T λ 1 L DA + λ 2 L SM + L T ,(7)\nwhere λ 1 and λ 2 are hyper-parameters. We summarize the detailed training procedure of FDAC in Alg. 1, where the final model of the target domain is gained based on aggregation [11]." }, { "figure_ref": [], "heading": "E. Theoretical Analysis of FDAC", "publication_ref": [], "table_ref": [], "text": "This subsection performs theoretical analysis of the proposed FDAC method and demonstrates that its loss functions \nL DA ∝ H (Z|M T (x)) -H (Z) , (8\n)\nwhere Z is the embedding features from both source and target domains. Eq. ( 3) shows that L DA significantly improves feature representations. Minimizing L DA is equivalent to simultaneously minimize H (Z|M T (x)) and maximize H (Z).\nMinimizing H (Z|M T (x)) encourages the model M T to generate low entropy clusters in the feature space for each given x based on all domains. On the other side, maximizing H (Z) tends to learn a high-entropy feature space in order to increase the diversity for stronger generalization [50].\nFor the semantic matching loss L SM proposed in Eq. ( 6), we can get the infimum taken over classifiers:\nL SM ∝ H (Y |Z) -H (Y ) = -I (Z; Y ) = inf H (Y ; M (x) |Z) -H (Y ) ,(9)\nwhere I represents mutual information and H (Y ) is a constant which can be ignored. Thus, minimizing L SM with class prototypes will minimize the infimum of conditional crossentropy H (Y ; M (x) |Z)(i.e., mutual information maximization) provides an additional semantic guidance compared to pseudo labeling loss L T with only cross-entropy. To sum up, FDAC can ......" }, { "figure_ref": [], "heading": "IV. EXPERIMENTAL EVALUATION", "publication_ref": [ "b51", "b53", "b32", "b20" ], "table_ref": [], "text": "In this section, we conduct extensive experiments to evaluate the performance of FDAC based on several publicly available datasets: DomainNet [51], OfficeHome [52], OfficeCaltech [53], PACS [54] and Cancer Dataset. Since the training of ViT heavily needs a lot of data, we do not select the dataset Digit-Five [32] which is relatively small. Furthermore, we also carry out experiments to demonstrate the advantage of our ViT-based augmentation compared to other ViT-based augmentation methods [20] OfficeHome. OfficeHome is a benchmark dataset for domain adaptation and it consists of 15,500 images of 65 classes from four domains: Artistic (Ar), Clip Art (Cl), Product (Pr), and Real-world (Rw) images. This is a benchmark dataset for domain adaptation, with an average of around 70 images per class and a maximum of 99 images in a class. The images can be found typically in Home and Office settings.\nOfficeCaltech. Caltech-10 consists of pictures of objects belonging to 10 classes, plus one background clutter class. Each image is labeled with a single object. Each class contains roughly 40 to 800 images, while most classes have about 50 images, totaling around 9000 images. The size of the images are not fixed, with typical edge lengths of 200-300 pixels.\nPACS. PACS is another popular benchmark for MSDA, which is composed of four domains (Art, Cartoon, Photo and Sketch). Each domain includes samples from 7 different categories, including a total of 9, 991 samples.\nBreast Cancer. Breast Cancer dataset includes 201 samples of one category and 85 samples of another category. The samples are described by 9 attributes, some of which are nominal and some are linear." }, { "figure_ref": [], "heading": "B. Comparison Baselines", "publication_ref": [ "b56", "b6", "b58", "b35" ], "table_ref": [], "text": "We compare FDAC with eleven state-of-the-art or representative approaches in terms of prediction accuracy. ResNet50 represents that the backbone is ResNet50, which is a popular deep architecture in CNNs. ResNet50 works in the source only manner. The only difference between R50-Ours and our proposed method FDAC is that the backbone of R50-Ours ResNet50. In R50-Ours, we select the last layer for domain augmentation. Thus, both ResNet50 and R50-Ours are CNNsbased, while the backbone of the left comparative methods are ViT-based. Source Only is frequently used as a baseline to examine the advantage of domain adaptation methods. PL is a pseudo-labeling approach in the source only manner, where the target domain trains a model with pseudo labels from the output of source classifiers. We use PL to further prove the strong performance of ViT in feature extraction. For the above four methods, we change them into the federated setting.\nSHOT [18] only needs a well-trained source model and it aims to generate target data representations that can be aligned with the source data representations. DECISION [56] can automatically combine the source models with suitable weights where the source data is not available during knowledge transfer. TransDA [57] is based on Transformer and the corresponding attention module is injected into the convolutional networks. FADA [5] designs a dynamic attention mechanism to leverage feature disentanglement to promote knowledge transfer. KD3A [6] performs decentralized domain adaptation based on knowledge distillation and pseudo labeling, while it is also robust to negative transfer and privacy leakage attacks. CPGA [58] first generates prototypes and pseudo labels, and then aligns the pseudo-labeled target data to the corresponding source avatar prototypes. FADE [35] attempts to study federated adversarial learning to achieve goals such as privacy-protecting and autonomy." }, { "figure_ref": [], "heading": "C. Implementation Details", "publication_ref": [ "b59", "b11" ], "table_ref": [], "text": "We implemented FDAC and the baseline methods using PyTorch [59]. We use the ViT-small with 16 × 16 patch size, pre-trained on ImageNet, as the ViT backbone. In each epoch, FedAvg [11] is used to aggregate models after r times of training. In our experiments, r = 1. For model optimization, we set the Stochastic Gradient Descent (SGD) with a momentum of 0.9. The initial learning rate η = 10 -3 , which decays inversely with epochs. The batch size is 64, 128, or 256, depending on the actual number of samples in the current domain. About the parameters in Eq. ( 7), we set λ 1 = 1, λ 2 = 1 for OfficeHome and OfficeCaltech, and λ 1 = 0.2, λ 2 = 0.5 for DomainNet, respectively." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "D. Experimental Results", "publication_ref": [ "b20", "b20", "b60", "b7" ], "table_ref": [ "tab_5", "tab_6", "tab_6" ], "text": "In FDA, there are multiple source domains and only one target domain. Thus, for each dataset, at one training time, only one sub-dataset is selected as the target domain while all the other sub-datasets are considered as the source domains. Take Table I for example, the column Art represents that Art is the target domain while Clipart, Product, and RealWorld are the source domains.\nTable I summarizes the results on OfficeHome. It is clear that FDAC outperforms the other methods in all sub-datasets. Notably, FDAC is much more effective on several sub-datasets II. We note that FDAC also performs the best in most conditions. Compared to the results in Table I, the performances of all the methods are relatively higher and the reason might be that this dataset is simpler than OfficeHome.\nExperimental results on the dataset DomainNet are presented in Table V. This dataset is extremely challenging for two reasons. Firstly, the domain discrepancy in each adaption direction is important. Secondly, Too many categories (i.e., 345) make learning discriminative features much more challenging. R101-Ours represents that the backbone is ResNet101, since this dataset is more complex than Office-Home and OfficeCaltech. The performance of R101-Ours is not bad and the reason might be that a complex CNNs can also be trained to extract adaptable features. FDAC outperforms all the comparative methods, indicating that domain-level augmentation and semantic matching can better enable domain adaptation in the federated setting.\nE. Further Analysis 1) Communication Efficiency: Communication efficiency is an important indicator in the federated setting. To evaluate the communication efficiency, we train FDAC with different communication rounds r and report the average accuracy on dataset OfficeHome and DomainNet. KD3A and FADA are selected as comparative methods. We set r =1, 2, 5, 10, and 20, representing that we synchronize models after r rounds of training. Fig. 3.a-b shows the accuracy in each round during training. It is clear that the accuracy of all methods increases with the number of rounds, representing that FADA needs larger communication rounds for better performance. KD3A performs better than FADA, but it is still not so good as our method. For example, FDAC outperforms KD3A with more than 5% accuracy, especially in the lower communication rounds (i.e., r = 5). FDAC needs about half the number of communication rounds compared with KD3A. Moreover, FDAC is also robust to communication rounds and its accuracy only drops about 2% when r decreases from 100 to 10. In summary, our method is much more communication-efficient than the other methods.\nWe also analyze the convergence property in FDAC and the results are displayed in Fig. 3.c-d. When the number of local training epochs is small, all methods perform poorly due to less training data. FDAC leads to the best convergence rate among the comparative methods. Moreover, we find that the other methods can hardly improve the performance of FDA with the ViT backbone.\n2) Feature visualization: To further investigate the feature distributions under our FDAC method, we randomly sample pixels on ViT-small based embedding from 10 categories on task Clipart, P roduct, RealW orld → Art. We present the visualization under DA and SM, which are discussed in Eq. ( 3) and Eq. ( 6), respectively. From Fig. 4 we can get the following conclusions: (1) the policies of pseudo labeling and source only are not as good as the domain-augmentation module in FDAC; (2) the module of semantic matching can further improve knowledge transfer; (3) Both feature transferability and discriminability can be guaranteed in FDAC.\n3) Ablation study on Domain Augmentation and Semantic Matching: To further analyze our approach FDAC, we conduct ablation experiments to fully investigate the effectiveness of different items as well as the sensitivity of hyper-parameters in the objective function. The loss elements in Eq. ( 7) are jointly minimized to train the classifier. We disable one loss at each time and then record the result to evaluate its importance on OfficeHome. The results are displayed in Table VI. For all the sub-datasets, it is clear that each loss item is necessary to guarantee performance, indicating that both domain augmentation and semantic matching are important in FDAC.\nTake OfficeHome for example, λ 1 and λ 2 are similar in sensitivity.\n4) Domain Augmentation based on Latent Manipulation: Table VI indicates that the policy of domain augmentation can enhance domain adaptation in the federated setting, thus, it is interesting to investigate which block in ViT is the most important to the performance of FDAC. We choose one block at one time to examine and the result of upon OfficeHome is displayed in Fig. 5.a. It can be observed that the best block for domain augmentation varies from one task to another.\nIn order to further exploit the importance of different blocks in domain augmentation, we use another four strategies to select the block: Transferability means to select the previous blocks of ViT; Discriminability means to select the later layers of ViT; Random represents that the block is randomly selected; All represents that all blocks are selected. The result in Fig. 5.b demonstrates that it is better to select Discriminability blocks for domain augmentation. The reason might be that aligning the later blocks is better to keep the transferability of features since those blocks are relatively more discriminative.\n5) The Advantage of Domain Augmentation: The policy of domain augmentation in FDA is to extract transferable features, thus we investigate the advantage of this policy with two other representative techniques, i.e., Mixup [55] and SSRT [20]. Mixup combines two samples linearly. Formally, let x i and x j be two target samples, and y = G(x) be the model classifier predictions. We mix target samples with a designed weight λ sampled from a Beta distribution by a parameter β. The data is mixed at domain-level and the augmented data ( x, y) can be computed by: (10)\nThe corresponding optimal function is defined as:\nL m = -E x∼D T y log G( x).(11)\nSSRT [20] first adds random offsets to the latent token sequences of target sample, and then minimizes the discrepancy of the model's prediction between the original and augmented data by Kullback Leibler (KL) divergence [60]. Let b l x be the latent representation of original input x and b l xr be the augmented representation which adds an offset. The augmented data b l x can be obtained by:\nb l x = b l x + α b l x -b l xr × , (12\n)\nwhere α is a scalar parameter and [•] × means no gradient backpropagation. Let p x and p x be the model predictions corresponding to b l x and b l x , respectively. Then, the loss function can be defined as:\nL r = E x∼D T p x log p x p x .(13)\nWe use L m and L r to replace L DA in Eq. (7). For all tasks, α and β are set to be 1 and 0.2, respectively. Fig. 6 presents the results on the dataset DomainNet based on the ViT-base backbone. It is clear that the domain augmentation policy in FDAC is better than the two other data augmented policies, and the reason might be that the complementarity from source domains to the target domain is considered in FDAC." }, { "figure_ref": [], "heading": "V. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel approach, namely FDAC, to address federated domain adaptation via contrastively transfer knowledge from different source models to the target model. Firstly, we manipulate the latent architecture of ViT to further extract transferable features among domains, where the data is contrastively augmented at domain-level thus the data diversity of the target domain is also enhanced. Secondly, we generate prototypes for each source domain and high-quality pseudo labels for the target domain to bridge the domain discrepancy based on contrastive learning. In this way, both feature transferability and discriminability can be guaranteed and the knowledge can be leveraged to adapt across models.\nExtensive experiments on different real classification and segmentation tasks demonstrate the outstanding performance of FDAC in federated domain adaptation, and the communication efficiency is simultaneously guaranteed. Furthermore, the comparative results also indicate that our domain augmentation under ViT is better than existing ViT-based augmentation methods." } ]
Federated domain adaptation (FDA) aims to collaboratively transfer knowledge from source clients (domains) to the related but different target client, without communicating the local data of any client. Moreover, the source clients have different data distributions, leading to extremely challenging in knowledge transfer. Despite the recent progress in FDA, we empirically find that existing methods can not leverage models of heterogeneous domains and thus they fail to achieve excellent performance. In this paper, we propose a model-based method named FDAC, aiming to address Federated Domain Adaptation based on Contrastive learning and Vision Transformer (ViT). In particular, contrastive learning can leverage the unlabeled data to train excellent models and the ViT architecture performs better than convolutional neural networks (CNNs) in extracting adaptable features. To the best of our knowledge, FDAC is the first attempt to learn transferable representations by manipulating the latent architecture of ViT under the federated setting. Furthermore, FDAC can increase the target data diversity by compensating from each source model with insufficient knowledge of samples and features, based on domain augmentation and semantic matching. Extensive experiments on several real datasets demonstrate that FDAC outperforms all the comparative methods in most conditions. Moreover, FDCA can also improve communication efficiency which is another key factor in the federated setting.
Model-Contrastive Federated Domain Adaptation
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration of our proposed FDAC method for federated domain adaptation. Domain augmentation and semantic matching are the key components to contrastively leverage different models at domain-level and categorylevel, respectively. Both the performance and communication efficiency are considered in this method.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "[18]. Then, the conditional distributions can be aligned by matching the semantic information of the source and target domains based on contrastive learning. Although several approaches [26], [28]-[31] have been proposed to learn transferable representations arXiv:2305.10432v1 [cs.LG] 7 May 2023", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. The general framework of our proposed method FDAC. This method adapts knowledge from heterogeneous models based on contrastive learning and ViT. Firstly, it manipulates the latent architecture of ViT and the augmented data originates from all the source models based on target samples. Secondly, it matches the semantic information across domains based on the prototypes of each source model and the pseudo labels of target samples.process converges more quickly compared to CNNs. Both of these advantages are important in the federated setting[1]. A ViT model directly applies a pure Transformer to image patches to classify full samples[19], [42]. The self-attention mechanism of ViT connects every patch token with the classification and the potential of ViT has inspired many new approaches. [43] is a fine-grained visual classification framework to investigate the potential of ViT, where the discriminative ability of classification tokens is also guaranteed based on contrastive loss. [44] introduces a quantification indicator to visualize and interpret the patch-level interactions in ViT. Different from pure ViT-based approaches, [45] proposes a cross-attention mechanism to integrate CNNs and Transformers to build a robust backbone, indicating that ViT and CNNs can complement each other through global connection and local connection. Since ViT has exhibited strong capability in learning robust representations, [21] systematically examines the role of self-attention and verifies it as a contributor to the improved robustness of ViT. ViT also works well in the field of segmentation. For example, [] investigated the feasibility of using transformer-based deep architectures for medical image segmentation tasks and it also introduces an extra control mechanism in the self-attention module to extend the existing architectures.Although ViT has been successfully applied in tasks such as video processing and computer vision, the configurable architecture of ViT has not yet been fully explored, which might bring fine-grained model adaptation, especially in FDA where the source data can not be accessed directly.The works most related to our proposed FDAC framework are transferable contrastive learning approaches proposed in[29],[31]. However, these works differ from FDAC in two aspects. Firstly, there backbones are both CNNs while the backbone of FDAC is ViT. Furthermore, FDAC manipulates the latent architecture of its backbone to align the data distributions in a fine-grained manner. Secondly, different from[29], the augmented data of FDAC is the original data of each source domain. Different from [31], each local source domain", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "1 : while not converged do 2 :// Stage 1 : 3 :1213Since the feature extraction of sample x i is based on each source model, computing the output of the given block also follows the privacy-preserving policy of the federated setting.As indicated in Fig 2.a, our contrastive learning based on latent feature space is different from traditional contrastive learning in domain adaptation, since our contrastive mechanism can leverage knowledge from different sources and the augmented samples are originated from the source domains instead of the original target samples. According to Eq. (3), the transferable representations are learned based on all the domains.Algorithm 1 FDAC Algorithm Require: Source domains {D S k } K k=1 (1 k K). Target domain D T . Ensure: Target model M T .Locally training for each source domain.Train M S k with classification loss by Eq. (4).", "figure_data": "", "figure_id": "fig_3", "figure_label": "1213", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Comparison results under different numbers of communication rounds (CR) and local epochs (LE).", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4. Feature visualization. (a) PL represents the policy of pseudo labeling. (b) indicates the result based on the source only manner. (c) represents that only domain augmentation is used in FDAC. (D) represents FDAC. Best viewed in color.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. Analysis of the hidden manipulation (Domain augmentation, DA) of ViT architecture.", "figure_data": "", "figure_id": "fig_6", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "RESULTS ON BREAST CANCER HISTOLOGY IMAGES CLASSIFICATION OF DIFFERENT MODES. (MEAN ACCURACY ± STANDARD DEVIATION) (%)", "figure_data": "MethodABCDEAverageResnet5088.7±0.485.1±0.681.0±0.287.4±0.775.0±0.183.4±0.4R50-OursSourceOnlyPLSHOTFADACPGATransDADECISIONFADEKD3A97.5±0.095.8±0.095.2±0.495.0±0.187.9±3.494.3±0.8Ours98.1±0.096.4±0.192.9±0.595.3±0.994.6±1.095.5±0.5", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "RESULTS ON DOMAINNET (MEAN ACCURACY ± STANDARD DEVIATION) (%)", "figure_data": "MethodClipartInfographPaintingQuickdrawRealSketchAverage", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "STUDY ON OFFICEHOME. DA AND SM INDICATE THAT THE MODULE OF domain augmentation AND semantic matching ARE DISABLED,", "figure_data": "in TableRESPECTIVELY.w/oArtClipartProductRealWorldAverageSourceOnly77.6±0.460.4±0.584.8±0.485.4±0.477.1±0.4DA79.9±0.264.5±0.388.5±0.187.0±0.380.0±0.2SM79.0±0.364.1±0.687.9±0.186.9±0.279.5±0.2Ours80.2±0.165.3±0.589.2±0.188.6±0.180.8±0.2such as Clipart and Product. The performance of FDAC ismuch better than R50-Ours, representing that ViT plays animportant role in feature extraction.The performance results on OfficeCaltech are summarized", "figure_id": "tab_6", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "SENSITIVITY OF λ 1 ANDλ 2 ON OFFICEHOME", "figure_data": "ArtClipartProductRealWorldλ1 = 0.179.264.488.087.7λ1 = 0.579.064.588.588.1λ1 = 1.080.265.389.288.6λ1 = 1.578.963.988.487.9λ2 = 0.179.063.487.787.8λ2 = 0.579.364.588.388.1λ2 = 1.579.664.688.588.3", "figure_id": "tab_7", "figure_label": "VII", "figure_type": "table" } ]
Chang ' An; Haotian Chen; Yonghui Xu; Yifan Zhang
[ { "authors": "", "journal": "Resnet", "ref_id": "b0", "title": "", "year": "" }, { "authors": "P Kairouz; H B Mcmahan; B Avent; A Bellet; M Bennis; A N Bhagoji; K Bonawitz; Z Charles; G Cormode; R Cummings", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b1", "title": "Advances and open problems in federated learning", "year": "2021" }, { "authors": "Q Yang; Y Liu; T Chen; Y Tong", "journal": "ACM Transactions on Intelligent Systems and Technology (TIST)", "ref_id": "b2", "title": "Federated machine learning: Concept and applications", "year": "2019" }, { "authors": "S J Pan; Q Yang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b3", "title": "A survey on transfer learning", "year": "2009" }, { "authors": "F Zhuang; Z Qi; K Duan; D Xi; Y Zhu; H Zhu; H Xiong; Q He", "journal": "", "ref_id": "b4", "title": "A comprehensive survey on transfer learning", "year": "2020" }, { "authors": "X Peng; Z Huang; Y Zhu; K Saenko", "journal": "", "ref_id": "b5", "title": "Federated adversarial domain adaptation", "year": "2019" }, { "authors": "H Feng; Z You; M Chen; T Zhang; M Zhu; F Wu; C Wu; W Chen", "journal": "", "ref_id": "b6", "title": "Kd3a: Unsupervised multi-source decentralized domain adaptation via knowledge distillation", "year": "2021" }, { "authors": "C.-H Yao; B Gong; H Qi; Y Cui; Y Zhu; M.-H Yang", "journal": "", "ref_id": "b7", "title": "Federated multi-target domain adaptation", "year": "2022" }, { "authors": "Y Chen; X Qin; J Wang; C Yu; W Gao", "journal": "IEEE Intelligent Systems", "ref_id": "b8", "title": "Fedhealth: A federated transfer learning framework for wearable healthcare", "year": "2020" }, { "authors": "A S Zhang; N F Li", "journal": "", "ref_id": "b9", "title": "A two-stage federated transfer learning framework in medical images classification on limited data: A covid-19 case study", "year": "2022" }, { "authors": "S Liu; S Xu; W Yu; Z Fu; Y Zhang; A Marian", "journal": "", "ref_id": "b10", "title": "Fedct: Federated collaborative transfer for recommendation", "year": "2021" }, { "authors": "B Mcmahan; E Moore; D Ramage; S Hampson; B A Arcas", "journal": "PMLR", "ref_id": "b11", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "X Yu; J P Queralta; T Westerlund", "journal": "", "ref_id": "b12", "title": "Towards lifelong federated learning in autonomous mobile robots with continuous sim-to-real transfer", "year": "2022" }, { "authors": "S Ben-David; J Blitzer; K Crammer; A Kulesza; F Pereira; J W Vaughan", "journal": "Machine Learning", "ref_id": "b13", "title": "A theory of learning from different domains", "year": "2010" }, { "authors": "E Tzeng; J Hoffman; K Saenko; T Darrell", "journal": "", "ref_id": "b14", "title": "Adversarial discriminative domain adaptation", "year": "2017" }, { "authors": "J.-Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b15", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "H Liu; M Long; J Wang; M Jordan", "journal": "PMLR", "ref_id": "b16", "title": "Transferable adversarial training: A general approach to adapting deep classifiers", "year": "2019" }, { "authors": "K Zhou; Z Liu; Y Qiao; T Xiang; C C Loy", "journal": "", "ref_id": "b17", "title": "Domain generalization in vision: A survey", "year": "2021" }, { "authors": "J Liang; D Hu; J Feng", "journal": "", "ref_id": "b18", "title": "Do we really need to access the source data? source hypothesis transfer for unsupervised domain adaptation", "year": "2020" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b19", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "T Sun; C Lu; T Zhang; H Ling", "journal": "", "ref_id": "b20", "title": "Safe self-refinement for transformer-based domain adaptation", "year": "2022" }, { "authors": "D Zhou; Z Yu; E Xie; C Xiao; A Anandkumar; J Feng; J M Alvarez", "journal": "PMLR", "ref_id": "b21", "title": "Understanding the robustness in vision transformers", "year": "2022" }, { "authors": "K Han; Y Wang; H Chen; X Chen; J Guo; Z Liu; Y Tang; A Xiao; C Xu; Y Xu; Z H Yang; Y Zhang; D Tao", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b22", "title": "A survey on vision transformer", "year": "2022" }, { "authors": "Y Shu; M Long", "journal": "", "ref_id": "b23", "title": "Open domain generalization with domainaugmented meta-learning", "year": "2021" }, { "authors": "V Verma; A Lamb; C Beckham; A Najafi; I Mitliagkas; D Lopez-Paz; Y Bengio", "journal": "PMLR", "ref_id": "b24", "title": "Manifold mixup: Better representations by interpolating hidden states", "year": "2019" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b25", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "R Wang; Z Wu; Z Weng; J Chen; G.-J Qi; Y.-G Jiang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b26", "title": "Crossdomain contrastive learning for unsupervised domain adaptation", "year": "2022" }, { "authors": "J Snell; K Swersky; R Zemel", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "A Singh", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Clda: Contrastive learning for semi-supervised domain adaptation", "year": "2021" }, { "authors": "Y Chen; Y Pan; Y Wang; T Yao; X Tian; T Mei", "journal": "", "ref_id": "b29", "title": "Transferrable contrastive learning for visual domain adaptation", "year": "2021" }, { "authors": "K Tanwisuth; X Fan; H Zheng; S Zhang; H Zhang; B Chen; M Zhou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "A prototype-oriented framework for unsupervised domain adaptation", "year": "2021" }, { "authors": "Y Wei; L Yang; Y Han; Q Hu", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b31", "title": "Multi-source collaborative contrastive learning for decentralized domain adaptation", "year": "2022" }, { "authors": "X Peng; Z Huang; Y Zhu; K Saenko", "journal": "", "ref_id": "b32", "title": "Federated adversarial domain adaptation", "year": "2020" }, { "authors": "X Gong; A Sharma; S Karanam; Z Wu; T Chen; D Doermann; A Innanje", "journal": "", "ref_id": "b33", "title": "Preserving privacy in federated learning with ensemble cross-domain knowledge distillation", "year": "2022" }, { "authors": "Q Li; Z Wen; Z Wu; S Hu; N Wang; Y Li; X Liu; B He", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b34", "title": "A survey on federated learning systems: vision, hype and reality for data privacy and protection", "year": "2021" }, { "authors": "J Hong; Z Zhu; S Yu; Z Wang; H H Dodge; J Zhou", "journal": "", "ref_id": "b35", "title": "Federated adversarial debiasing for fair and transferable representations", "year": "2021" }, { "authors": "A Jaiswal; A Babu; M Zadeh; D Banerjee", "journal": "", "ref_id": "b36", "title": "A survey on contrastive self-supervised learning", "year": "2021" }, { "authors": "D Chen; D Wang; T Darrell; S Ebrahimi", "journal": "", "ref_id": "b37", "title": "Contrastive test-time adaptation", "year": "2022" }, { "authors": "Y Zhang; B Hooi; D Hu; J Liang; J Feng", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Unleashing the power of contrastive self-supervised visual models via contrast-regularized finetuning", "year": "2021" }, { "authors": "X Chen; S Xie; K He", "journal": "", "ref_id": "b39", "title": "An empirical study of training selfsupervised vision transformers", "year": "2021" }, { "authors": "M Caron; H Touvron; I Misra; H Jégou; J Mairal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b40", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "S Yun; H Lee; J Kim; J Shin", "journal": "", "ref_id": "b41", "title": "Patch-level representation learning for self-supervised vision transformers", "year": "2022" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b42", "title": "Attention is all you need", "year": "2017" }, { "authors": "J He; J.-N Chen; S Liu; A Kortylewski; C Yang; Y Bai; C Wang", "journal": "", "ref_id": "b43", "title": "Transfg: A transformer architecture for fine-grained recognition", "year": "2022" }, { "authors": "J Ma; Y Bai; B Zhong; W Zhang; T Yao; T Mei", "journal": "", "ref_id": "b44", "title": "Visualizing and understanding patch interactions in vision transformer", "year": "2022" }, { "authors": "H Lin; X Cheng; X Wu; D Shen", "journal": "IEEE", "ref_id": "b45", "title": "Cat: Cross attention in vision transformer", "year": "2022" }, { "authors": "V Vapnik", "journal": "Springer science & business media", "ref_id": "b46", "title": "The nature of statistical learning theory", "year": "1999" }, { "authors": "Z Zheng; X Yue; K Wang; Y You", "journal": "", "ref_id": "b47", "title": "Prompt vision transformer for domain generalization", "year": "2022" }, { "authors": "K Saito; D Kim; S Sclaroff; T Darrell; K Saenko", "journal": "", "ref_id": "b48", "title": "Semisupervised domain adaptation via minimax entropy", "year": "2019" }, { "authors": "M Boudiaf; J Rony; I M Ziko; E Granger; M Pedersoli; P Piantanida; I B Ayed", "journal": "Springer", "ref_id": "b49", "title": "A unifying mutual information view of metric learning: cross-entropy vs. pairwise losses", "year": "2020" }, { "authors": "C Szegedy; V Vanhoucke; S Ioffe; J Shlens; Z Wojna", "journal": "", "ref_id": "b50", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "X Peng; Q Bai; X Xia; Z Huang; K Saenko; B Wang", "journal": "", "ref_id": "b51", "title": "Moment matching for multi-source domain adaptation", "year": "2019" }, { "authors": "H Venkateswara; J Eusebio; S Chakraborty; S Panchanathan", "journal": "", "ref_id": "b52", "title": "Deep hashing network for unsupervised domain adaptation", "year": "2017" }, { "authors": "L Fei-Fei; R Fergus; P Perona", "journal": "IEEE", "ref_id": "b53", "title": "Learning generative visual models from few training examples: An incremental bayesian approach tested on 101 object categories", "year": "2004" }, { "authors": "D Li; Y Yang; Y.-Z Song; T M Hospedales", "journal": "", "ref_id": "b54", "title": "Deeper, broader and artier domain generalization", "year": "2017" }, { "authors": "H Zhang; M Cisse; Y N Dauphin; D Lopez-Paz", "journal": "", "ref_id": "b55", "title": "mixup: Beyond empirical risk minimization", "year": "2017" }, { "authors": "S M Ahmed; D S Raychaudhuri; S Paul; S Oymak; A K Roy-Chowdhury", "journal": "", "ref_id": "b56", "title": "Unsupervised multi-source domain adaptation without access to source data", "year": "2021" }, { "authors": "G Yang; H Tang; Z Zhong; M Ding; L Shao; N Sebe; E Ricci", "journal": "", "ref_id": "b57", "title": "Transformer-based source-free domain adaptation", "year": "2021" }, { "authors": "Z Qiu; Y Zhang; H Lin; S Niu; Y Liu; Q Du; M Tan", "journal": "", "ref_id": "b58", "title": "Sourcefree domain adaptation via avatar prototype generation and adaptation", "year": "2021" }, { "authors": "A Paszke; S Gross; S Chintala; G Chanan; E Yang; Z Devito; Z Lin; A Desmaison; L Antiga; A Lerer", "journal": "", "ref_id": "b59", "title": "Automatic differentiation in pytorch", "year": "2017" }, { "authors": "M Sugiyama; S Nakajima; H Kashima; P Von; M Buenau; Kawanabe", "journal": "", "ref_id": "b60", "title": "Direct importance estimation with model selection and its application to covariate shift adaptation", "year": "2007" } ]
[ { "formula_coordinates": [ 3, 311.98, 367.56, 251.06, 34.87 ], "formula_id": "formula_0", "formula_text": "D S k contains N k labeled samples, i.e., D S k = {(x k i , y k i )} N k i=1 (1 k K). D T has N T unlabeled samples, i.e., D T = {(x T i )} N T i=1" }, { "formula_coordinates": [ 4, 106.82, 608, 193.2, 11.48 ], "formula_id": "formula_1", "formula_text": "Bl = MSA LN B l-1 + B l-1 ,(1)" }, { "formula_coordinates": [ 4, 112.44, 630.34, 187.59, 11.47 ], "formula_id": "formula_2", "formula_text": "B l = MLP LN Bl + Bl ,(2)" }, { "formula_coordinates": [ 4, 339.85, 122.98, 219.31, 35.31 ], "formula_id": "formula_3", "formula_text": "L DA = - 1 N T N T i=1 K k=1 log e B l i(T ) Bl i(k) /τ B l k ∼Ai e (B l i B l k /τ ) , (3" }, { "formula_coordinates": [ 4, 559.16, 135.41, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 311.98, 188.3, 11.5, 9.65 ], "formula_id": "formula_5", "formula_text": "x i ." }, { "formula_coordinates": [ 4, 313.75, 521, 134.79, 24.78 ], "formula_id": "formula_6", "formula_text": "M T ← K k=1 M S k , M T 14:" }, { "formula_coordinates": [ 5, 80.11, 165.2, 92.1, 9.68 ], "formula_id": "formula_7", "formula_text": "P = [p 1 , p 2 , • • • , p C ]," }, { "formula_coordinates": [ 5, 97.99, 191.61, 84.12, 14.4 ], "formula_id": "formula_8", "formula_text": "C(x) = σ PF (x) ||F (x)||2" }, { "formula_coordinates": [ 5, 76.27, 249.44, 219.88, 17.41 ], "formula_id": "formula_9", "formula_text": "L S k (M S k ; D S k ) = - E (x,y)∼D S k q log M(x), (4" }, { "formula_coordinates": [ 5, 296.15, 252.34, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 5, 80.65, 369.06, 219.38, 15.66 ], "formula_id": "formula_11", "formula_text": "L T (M T ; D T ) = - E (x, y T )∼D T q log M(x),(5)" }, { "formula_coordinates": [ 5, 48.96, 426.09, 60.79, 14.38 ], "formula_id": "formula_12", "formula_text": "z i = G(x) ||G(x)||2" }, { "formula_coordinates": [ 5, 60.72, 517.86, 239.3, 32.28 ], "formula_id": "formula_13", "formula_text": "L SM = - 1 N T N T i=1 1 |A p | pj ∼Ap log e (z i pj ) p k ∼An e (z i p k ) .(6)" }, { "formula_coordinates": [ 5, 109.43, 624.47, 190.6, 15.2 ], "formula_id": "formula_14", "formula_text": "min M T λ 1 L DA + λ 2 L SM + L T ,(7)" }, { "formula_coordinates": [ 5, 368.74, 276.95, 190.42, 9.65 ], "formula_id": "formula_15", "formula_text": "L DA ∝ H (Z|M T (x)) -H (Z) , (8" }, { "formula_coordinates": [ 5, 559.16, 277.37, 3.87, 8.64 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 5, 354.96, 432.39, 208.08, 23.68 ], "formula_id": "formula_17", "formula_text": "L SM ∝ H (Y |Z) -H (Y ) = -I (Z; Y ) = inf H (Y ; M (x) |Z) -H (Y ) ,(9)" }, { "formula_coordinates": [ 9, 123.3, 449.91, 176.73, 15.47 ], "formula_id": "formula_18", "formula_text": "L m = -E x∼D T y log G( x).(11)" }, { "formula_coordinates": [ 9, 121.37, 562.97, 174.5, 14.21 ], "formula_id": "formula_19", "formula_text": "b l x = b l x + α b l x -b l xr × , (12" }, { "formula_coordinates": [ 9, 295.87, 565.35, 4.15, 8.64 ], "formula_id": "formula_20", "formula_text": ")" }, { "formula_coordinates": [ 9, 121.96, 636.43, 178.06, 23.22 ], "formula_id": "formula_21", "formula_text": "L r = E x∼D T p x log p x p x .(13)" } ]
2023-05-11
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b7", "b9", "b20", "b17", "b21", "b22", "b6", "b11", "b0", "b9", "b20", "b2", "b4", "b19", "b18", "b8" ], "table_ref": [], "text": "Learning vocabulary is key to learning second (mostly foreign) languages, but also a difficult task. One of the most well-known and effective methods is flashcards, i.e., writing the L2 (a second language word) word on the front and writing down the corresponding L1 word (a first or native language word) on the back, with content such as mnemonic or context. Moreover, one may manage flashcards by putting the cards in boxes to follow the Leitner system [13] to recall the word regularly following the forgetting curve [8]. However, both writing down every word and managing a bunch of cards require significant effort and can take a lot of effort from learners.\nTechnology advances have enabled vocabulary learning to shift from manually writing down the words to using software systems such as Anki [10] and Quizlet [21], which make language learning more efficient and engaging. Some systems use ideas behind intelligent tutoring systems to model the learner's knowledge state to intervene in the retrieval practice [18,22,23]. Many studies have shown that managing retrieval practice and designing personalized schedules using cognitive models can significantly improve learning efficiency [7,12].\nMany systems also use gamified interfaces and enable learners to share decks with others, making the learning process more interactive and socially relevant [1,10,21]. However, despite these advances, the learning content, i.e., what is written on the flashcard, has mostly stayed the same throughout the years.\nRegarding the content for second language learning, keyword mnemonic [3] is a notable memory encoding strategy that uses interactive visual imagery with a keyword that sounds like part of a foreign word. Forming the keyword-based interactive image takes a two-step approach: creating first an acoustic and then an imagery link. Imagine a native English speaker is learning the Spanish word pato, which means duck. The keyword that sounds like the word is pot. Using the keyword, the learner first creates an acoustic link between the keyword and the Spanish word. Then, the learner builds an imagery link that connects the sound and its meaning by using a verbal cue, such as \"A duck wearing a pot on its head.\" By relating new information to existing knowledge, learners have an easier time memorizing the word and can retain it in memory for a longer time.\nPrevious studies on keyword mnemonics have shown their effectiveness compared with different learning strategies. Comparing keyword mnemonic with rote rehearsal and combining both strategies showed that the keyword group outperformed the other two groups [5]. Comparing the keyword mnemonic group with verbal and visual cues with mixed methods of contextual clues, word structure analysis, and opposite word pairs showed that the keyword group performed better in both short-term and long-term retention [20]. However, since the cues given in these studies are manually generated by experts, it is difficult to employ this approach at a large scale in the systems mentioned above.\nIn 2014, Savva et al. introduced an automatic keyword generation approach based on a cross-lingual system, TransPhoner [19]. It evaluates candidate keywords in the second language using the following measures for a given input word: imageability, phonetic similarity, orthographic similarity, and semantic similarity. The authors experimented on the effectiveness of TransPhoner using an evaluation set of 36 German words [9] with three other conditions: no keywords, randomly sampled keywords, and manually generated keywords. The result shows that the TransPhoner-generated condition achieved the highest score and the manually-generated keyword condition had no significant difference from randomly generated keywords. Despite TransPhoner's success in automatically generating keywords as cues, other forms of richer verbal or visual cues that could further help learners build an imagery link cannot be automatically generated. The learner (or teacher) still needs to manually develop them to connect the keyword and the L1 word, which requires a lot of effort on their part. Moreover, it takes an expert to come up with an image as the visual cue that corresponds to the verbal cue. Using image APIs such as Google Image API, one can juxtapose images of a keyword and an L1 word, but doing is not as effective as showing both words together in a single image. To make keyword mnemonic scalable, we need an end-to-end solution that takes words as input and generates keyword, verbal and visual cues.\nContributions. In this paper, we detail a pipeline for automatically generating verbal and visual cues in one shot via text generator and text-to-image generator. Our contributions are as follows:\n-We propose a large language model (LLM)-based pipeline that automatically generates highly memorable verbal and visual cues for an L1 word in language learning. We believe that our automated approach will significantly reduce content development costs by enhancing time efficiency and reducing manual generation effort. To the best of our knowledge, we are the first to apply LLMs in the context of keyword mnemonic. -We implement a web application for human participant studies and use it to compare our approach with existing ones. We analyze the effectiveness of four approaches: automatically generated keyword only, automatically generated keyword with a verbal cue, automatically generated keyword with both verbal and visual cues, and manually generated keyword and verbal cues.\nWe also outline avenues for future work that could stem from our approach." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b8", "b18" ], "table_ref": [], "text": "In this section, we detail our pipeline for automatically generating cues. Our work is driven by the following two research questions:\n-Can we automatically generate human-level verbal cues for the keyword? -Can we generate a visual cue that may facilitate building an imagery link that is described in a verbal cue?\nWe narrow the scope of automatically generating verbal and visual cues to the experiments conducted in previous studies [9,19] in this preliminary effort. We use the evaluation set of 36 German words and keywords from previous studies for both manually and automatically generated cues as baselines. Since verbal cues only exist for manually generated keywords, our task boils down to automatically generating verbal cues using TransPhoner-generated keywords and generating visual cues using verbal cues." }, { "figure_ref": [], "heading": "Pipeline for Auto-generating Verbal and Visual Cues", "publication_ref": [ "b16", "b15", "b8", "b5", "b14" ], "table_ref": [], "text": "We propose a pipeline consisting of two LLMs that generate verbal and visual cues in two steps: First, we use a text generator to automatically generate a sentence containing the TransPhoner keyword as the verbal cue. Second, we use a text-to-image generator to generate an image as the visual cue. LLMs, pre-trained with massive datasets, have shown human-level performance on the tasks we described above through prompts. This is because LLMs are good for controllable text generation [17] and following instructions [16]. With proper prompts, models show their ability to solve the tasks with zero-shot or few-shot setups. We use zero-shot setup LLMs for both generating verbal and visual cues. We detail the pipeline through an example in Fig. 1 where we need to generate Fig. 1: Our end-to-end pipeline for automatically generating verbal and visual cues for an L2 word. cues for the German word flasche, which means a bottle. The keyword generated by TransPhoner is flashy; Using the keyword and the meaning of the word, we create the prompt: \"Write a short, catchy sentence that connects flashy and bottle.\" Additionally, we add a constraint on verbal cues to start with \"Imagine\" for two reasons. First, verbal cues in the previous study [9] are in that format. Since we are trying to answer whether we could achieve human-level verbal cues, we match the format. Second, we follow grammatical characteristics that come after the word imagine. After the word \"Imagine\", usually a noun or gerund comes out; we found that the generated verbal cue contains fewer ambiguous pronouns, which makes the cue more descriptive. This feature is key to linking the text generator and text-to-image generator within the same pipeline. Using the prompt, our text generator, GPT-3 [6] (text-davinci-003, temp=0.5), generates the verbal cue. Then, we reuse the verbal cue as the prompt for our text-to-image generator, DALL-E 2 [15], by removing the word \"Imagine\". One can freely choose any LLMs to automatically generate these verbal and visual cues. We present the gray region in Fig. 1 to the participant as learning content." }, { "figure_ref": [], "heading": "Experimental Evaluation", "publication_ref": [], "table_ref": [], "text": "In this section, we detail our experiments on presenting different content to different participants to explore whether automatically generated verbal and visual cues are effective in vocabulary learning." }, { "figure_ref": [], "heading": "Experimental Design", "publication_ref": [], "table_ref": [], "text": "In the experiment, participants learn 36 German words and are tested on recalling both the German word (generation) and its English meaning (recognition).\nThe words are split into three sets, which means that each participant goes through the learning, recognition, and generation cycle three times. Words in each set are randomly shuffled for each participant. At the end of the experiment, we also ask participants to rate the helpfulness of the cues." }, { "figure_ref": [], "heading": "Learning and Testing", "publication_ref": [ "b1", "b18" ], "table_ref": [], "text": "We provide each participant with both instructions on the study and the content that helps them learn the word; see Section 3.2 for details. Each word has a 30-second limit for the participant to memorize, and the participant can choose to move on to the next word after 15 seconds. After 30 seconds, we automatically move on to the next word. German words are pronounced twice, after 2 seconds and 7 seconds, respectively, after being displayed. We show a timer to participants to make them aware of the time remaining for each word. Participants have 15 seconds for both recognition and generation during testing. To avoid confusion between the two tests, we provide instructions such as \"What is this in English?\" and \"What is this in German?\". For generation, we also ask participants to use a, o, u, s instead of Umlaut ä, ö, ü, ß. We show a timer to participants as well. Words in both tasks are randomized in order.\nParticipants We recruit participants from Amazon Mechanical Turk [2]. We require participants to be native English speakers with no German language experience. Considering the experiment takes about 40 minutes, we paid each participant $7.25 and added a bonus of $2.75 for those who got a score of over 70% on the final test. The bonus encourages participants to do their best. However, we acknowledge that some participants may cheat on tests to achieve a high score by using an external dictionary, which we have no control of.\nWeb Interface We implement a React web application as our participant interface, which is designed based on the previous study [19]. We place an IRBapproved consent form on the front page and only participants who agree can participate in the experiment; the form explains in detail how the experiment is structured. We also show an example with a German word not in our evaluation set to clarify the procedure to participants. We collect metadata on time spent both during learning and testing, along with the responses to further investigate participant behavior." }, { "figure_ref": [], "heading": "Experimental Conditions", "publication_ref": [ "b18", "b8" ], "table_ref": [ "tab_0" ], "text": "We first divide participants into two groups based on how the keyword was generated: automatically (auto-cue) and manually (manual-cue). Among many combinations of verbal and visual cues that can be presented to the participants, we choose conditions that enable both intra-and inter-group comparisons. We recruit a total of 80 participants for our study, with 20 in each condition.\nAs shown in Fig. 2, we show the example of our web interface on how the content is displayed in different conditions. For intra-group comparisons, we Fig. 2: A snapshot of our web interface shown to experiment participants. further divide the auto-cue group into three conditions: Condition I is only provided with the TransPhoner-generated keyword, Condition II is provided with the keyword and the verbal cue generated by our pipeline, and Condition III is provided with the keyword and both the verbal and visual cues generated by our pipeline. For the inter-group comparisons, we provide both the auto-cue group and manual-cue group with information in Condition II. We note that the previous study [19] compared the groups with Condition I by not including verbal cues that were originally presented with the manually generated keywords [9]. The manually generated verbal cue and keyword should be considered as a whole since the keyword might have been chosen to provide a verbal cue with the best imageability among many keyword candidates.\nWe refer to these four conditions as Auto-I, Auto-II, Auto-III, and Manual-II. The instructions for each condition are shown in Table 1. We use the same instructions for Condition I from Savva et al. Our instructions for Condition II tell participants to create an imagery of a scene specified in a verbal cue. Our instructions for Condition III tell participants to remember the image, which is based on the verbal cue that describes a specific scene." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b3", "b18" ], "table_ref": [], "text": "We use different metrics to score recognition and generation. For recognition, we use cosine similarity between the word embeddings [4] between the answer and the response. We also consider responses that miss \"to\" for \"to\"-infinitives to be correct. Unlike recognition, as a novice German learner, generation is bounded to the orthographic feature of vocabulary. Therefore, we use a standardized (subtracting 1 and normalizing to 1) Levenshtein distance to score generation, following previous studies [19]. We also ask participants to evaluate the helpfulness of the cues using a 5-point Likert scale, which is provided along with the entire 36 words and the cues. Imagine a visual scene connecting the given keyword with the English meaning, and the sound of the German word." }, { "figure_ref": [], "heading": "II yes yes no", "publication_ref": [], "table_ref": [], "text": "Imagine a specific scene described in the verbal cue that connects the given keyword with the English meaning, and the sound of the German word." }, { "figure_ref": [], "heading": "III yes yes yes", "publication_ref": [], "table_ref": [], "text": "Remember the image by following the verbal cue that connects the given keyword with the English meaning, and the sound of the German word." }, { "figure_ref": [ "fig_0", "fig_2", "fig_2", "fig_2" ], "heading": "Results and Discussion", "publication_ref": [ "b17", "b16" ], "table_ref": [ "tab_0" ], "text": "After we exclude participants who did not understand the experiment properly, such as those who wrote down the keyword when recalling the English meaning, we have a total of 72 participants: Auto-I ( 20) with an average age of 25.4 years (SD = 2.3), Auto-II ( 17) with an average age of 24.2 years (SD = 1.7), Auto-III (18) with an average age of 24.8 years (SD = 1.6), and Manual-II (17) with an average age of 25.3 years (SD = 1.1). Fig. 3 shows per-participant experimental data in box plots averaged among 36 German words. Learning time is time spent memorizing a word, while testing time is the average time on recognition and generation of the word. Similarly, the combined score is an average of recognition and generation scores. Learning time, testing time, and Likert scale are normalized by their maximum value.\nThe median of time spent on learning was 19.8, 18.9, 18.6, and 19.2 seconds, respectively, for the four conditions out of the 30 seconds time limit, which may suggest that cognitive load across different conditions is similar. The median of time spent on testing, i.e., the average time spent on recognition and generation, was 8.85, 9.75, 8.7, and 7.95 seconds out of the 15 seconds time limit. The median of the 5-point Likert scale was 4.2, 3.95, 4.25, and 4.4. Now, we analyze the combined score based on the per-word combined score, as shown in Fig. 4. We perform a one-tailed Welch's t-test assuming unequal variances on the hypotheses of one condition being better than another. We set our level of significance to 5%. We detail each hypothesis below. Case A, B, and C in Fig. 4 are words we present with content generated through our pipeline for qualitative analysis.\nAuto-I vs. Auto-II: Does a verbal cue help learning? We hypothesize that Auto-II, with additional verbal cues, will result in better recognition and generation scores than Auto-I, which uses only keywords. We define our null hypothesis (H 0 ) and alternate hypothesis (H a ) as follows: A right-tailed test shows there is no significant effect of verbal cues, t(33) = -1.79, p = 0.96; we cannot reject H 0 . On the contrary, a left-tailed test shows statistical significance in favor of the keyword-only condition, t(33) = 1.79, p = 0.04. This result can be explained by several factors: The participants might have done rote rehearsals instead of building links as instructed in Table 1. Moreover, participants may come up with their own verbal cues that are more memorable than automatically generated ones. Personalized by default, participants' own verbal cues may be a better fit for each individual's own experience.\nAuto-II vs. Manual-II: Are automated verbal cues effective? We hypothesize Manual-II to be an upper bound of Auto-II since the former cues are generated by experts in psycholinguistics. Therefore, we define our null hypothesis and alternate hypothesis as follows:\n-H 0 : µ M anual-II ≤ µ Auto-II -H a : µ M anual-II > µ Auto-II\nA right-tailed test shows that there is no significant difference between the two conditions, t(24) = -0.32, p = 0.62; we cannot reject H 0 . In Fig. 4, we show three words where participants perform better in the Auto-II condition than Manual-II (case A) and otherwise (case B), respectively. Case A in Table 2 shows that auto-generated cues are more memorable than manual cues even with a grammatical error (risen should be raised) or not realistic (Reuben sandwich calling your name). Case B, on the other hand, contains keywords where auto-generated cues are not frequently used (Triton, frizzy) or making it hard to imagine (a wagon with stories). This result implies that although we can automatically generate high-quality verbal cues, choosing appropriate keywords Fig. 4: Per-word combined score for all four experimental conditions, with three cases highlighting some words that work especially well with certain cues. remains crucial. Therefore, we need to add keyword generation to the pipeline and evaluate the quality of both generated keywords and the verbal cue.\nAuto-II vs. Auto-III: Does a visual cue help learning? We hypothesize better performance by Auto-III, which uses additional visual cues, than Auto-II. Therefore, we define our null hypothesis and alternate hypothesis as follows:\n-H 0 : µ Auto-III ≤ µ Auto-II -H a : µ Auto-III > µ Auto-II A right-tailed test shows that there is no significant difference between the two conditions, t(32) = 0.39, p = 0.35; we cannot reject H 0 . In Fig. 4, we show three words for the cases where participants perform better in the Auto-III condition than in Auto-II (case B) and two for when it does not (case C), respectively. Case B shows that Auto-III, which has additional visual cues than Auto-II, performs similarly as Manual-II. Considering the previous comparison that Auto-II has a lower score than Manual-II, we see that Auto-III does somewhat outperform Auto-II. Therefore, we can conclude that visual cues help participant build the imagery link to some degree.\nFor a more qualitative analysis, Fig. 5 shows visual cues generated by our pipeline. Fig. 5 (a-c) shows that visual cues may be helpful in cases where keywords that lack imageability and are not frequently used (Triton, frizzy) or in cases where auto-generated verbal cues are hard to imagine (a wagon with stories). However, as shown in case C, visual cues for abstract words (to take, to need) do not help much. Fig. 5 (d-e) shows that in these cases the generated image is not descriptive enough to facilitate the imagery link. Interestingly, the Likert scale score was higher for Auto-III than Auto-II in every word except one. This result implies that participants think it is helpful to have additional visual cues. However, we cannot create effective visual cues for every word. Generating descriptive visual cues, especially for abstract words, remains a challenging task. Imagine brokers need much experience." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [ "b10", "b13" ], "table_ref": [], "text": "In this paper, we explored the opportunity of using large language models to generate verbal and visual cues for keyword mnemonics. A preliminary human experiment suggested that despite showing some promise, this approach has limitations and cannot reach the performance of manually generated cues yet.\nThere are many avenues for future work. First, we need a larger-scale experiment in a real lab study, which provides us a controlled environment to test both short-term and long-term retention. Since we only tested short-term retention, it is possible that no approach can significantly outperform others. We also need more psycholinguistics perspectives on constraining time spent on learning and testing. By conducting the research in a more controlled environment, we can use additional information (e.g., demographics, language level) to help us conduct a deeper analysis of the results. We do clarify that using Amazon's Mechanical Turk to conduct experiments is standard in prior work, which is part of the reason why we chose this experimental setting. To track long-term retention, we likely have to resort to knowledge tracing models that handle either memory decay [11] or open-ended responses [14]. Second, we can extend our pipeline by generating the keyword automatically as well instead of using TransPhonergenerated keywords, which may make our approach even more scalable. One important aspect that must be studied is how to evaluate the imageability of the keywords and verbal cue that contains both keywords and vocabulary, which remains challenging. Third, we can generate personalized content for each participant. We may provide additional information in the text generator about the topic they are interested in that we could use to generate a verbal cue. Moreover, we can generate a story that takes all words into account. It is also possible to generate verbal cues in L2 as well, which may help learners by providing even more context. Fourth, instead of the pronunciation of the word, we can use other features in language to generate verbal cues. For example, when learning Mandarin, memorizing Chinese characters is as important as learning how to pronounce the word. The Chinese character 休 means rest, which is xiū in Mandarin. The character is called a compound ideograph, a combination of a person (人) and a tree (木), which represents a person resting against a tree. Combined with a keyword, shoe, for example, we could accomplish two goals with one verbal cue, \"A person is resting by a tree, tying up their shoe.\" This way, we can make visual cues more descriptive for abstract words." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors thank the NSF (under grants 1917713, 2118706, 2202506, 2215193) for partially supporting this work." } ]
In second language vocabulary learning, existing works have primarily focused on either the learning interface or scheduling personalized retrieval practices to maximize memory retention. However, the learning content, i.e., the information presented on flashcards, has mostly remained constant. Keyword mnemonic is a notable learning strategy that relates new vocabulary to existing knowledge by building an acoustic and imagery link using a keyword that sounds alike. Beyond that, producing verbal and visual cues associated with the keyword to facilitate building these links requires a manual process and is not scalable. In this paper, we explore an opportunity to use large language models to automatically generate verbal and visual cues for keyword mnemonics. Our approach, an end-to-end pipeline for auto-generating verbal and visual cues, can automatically generate highly memorable cues. We investigate the effectiveness of our approach via a human participant experiment by comparing it with manually generated cues.
SmartPhone: Exploring Keyword Mnemonic with Auto-generated Verbal and Visual Cues
[ { "figure_caption": "Fig. 3 :3Fig. 3: Box plots of per-participant data for each experimental condition. -H 0 : µ Auto-II ≤ µ Auto-I -H a : µ Auto-II > µ Auto-I", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 5 :5Fig. 5: Examples of visual cues generated by our pipeline in cases where they are helpful to participants and cases where they are not.", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Cues and instructions we used for different experimental conditions.", "figure_data": "Cond.Cue Keyword Verbal VisualInstructionIyesnono", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples of automatically and manually generated verbal cues. A keyword is represented in italic, while a meaning is in bold.", "figure_data": "Case Word AutoManualImagine stepping into treason, aTretentreacherous path that can never beImagine you step on a stair tread.Aundone.RasenImagine a risen lawn that is lush and green!Imagine your lawn covered in raisins.RufenImagine Reuben calling out your name!Imagine you call a friend to put a new roof on a cottage.BStreiten SagenImagine Triton and his trident quarreling with the waves. Imagine a wagon full of stories just waiting to be told!Imagine you quarrel about the Menai straits. Imagine you tell someone sago is good for them.FriseurImagine a hairdresser who can tame even the most frizzy hair!Imagine your hairdresser inside a freezer.CNehmenImagine Newman taking the initia-tive to take action!Imagine you take a name in your address book.BrauchenImagine needing to fix a broken heart.", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Jaewook Lee; Andrew Lan
[ { "authors": "L V Ahn", "journal": "", "ref_id": "b0", "title": "Duolingo", "year": "" }, { "authors": " Amazon", "journal": "", "ref_id": "b1", "title": "Amazon mechanical turk", "year": "" }, { "authors": "R C Atkinson; M R Raugh", "journal": "Journal of experimental psychology: Human learning and memory", "ref_id": "b2", "title": "An application of the mnemonic keyword method to the acquisition of a russian vocabulary", "year": "1975" }, { "authors": "P Bojanowski; E Grave; A Joulin; T Mikolov", "journal": "Transactions of the association for computational linguistics", "ref_id": "b3", "title": "Enriching word vectors with subword information", "year": "2017" }, { "authors": "C J Brahler; D Walker", "journal": "Advances in physiology education", "ref_id": "b4", "title": "Learning scientific and medical terminology with a mnemonic strategy using an illogical association technique", "year": "2008" }, { "authors": "T Brown; B Mann; N Ryder; M Subbiah; J D Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "M Carrier; H Pashler", "journal": "Memory & cognition", "ref_id": "b6", "title": "The influence of retrieval on retention", "year": "1992" }, { "authors": "H Ebbinghaus", "journal": "Annals of neurosciences", "ref_id": "b7", "title": "Memory: A contribution to experimental psychology", "year": "2013" }, { "authors": "N C Ellis; A Beaton", "journal": "Language learning", "ref_id": "b8", "title": "Psycholinguistic determinants of foreign language vocabulary learning", "year": "1993" }, { "authors": "D Elmes", "journal": "", "ref_id": "b9", "title": "Anki", "year": "" }, { "authors": "A Ghosh; N Heffernan; A S Lan", "journal": "", "ref_id": "b10", "title": "Context-aware attentive knowledge tracing", "year": "2020" }, { "authors": "D P Larsen; A C Butler; Iii Roediger; H L ", "journal": "Medical education", "ref_id": "b11", "title": "Repeated testing improves longterm retention relative to repeated study: a randomised controlled trial", "year": "2009" }, { "authors": "S Leitner", "journal": "Herder", "ref_id": "b12", "title": "So lernt man lernen", "year": "1974" }, { "authors": "N Liu; Z Wang; R Baraniuk; A Lan", "journal": "", "ref_id": "b13", "title": "Open-ended knowledge tracing for computer science education", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b14", "title": "Dall-e 2", "year": "" }, { "authors": "L Ouyang; J Wu; X Jiang; D Almeida; C Wainwright; P Mishkin; C Zhang; S Agarwal; K Slama; A Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Training language models to follow instructions with human feedback", "year": "2022" }, { "authors": "S Prabhumoye; A W Black; R Salakhutdinov", "journal": "", "ref_id": "b16", "title": "Exploring controllable text generation techniques", "year": "2020" }, { "authors": "S Reddy; I Labutov; S Banerjee; T Joachims", "journal": "", "ref_id": "b17", "title": "Unbounded human learning: Optimal scheduling for spaced repetition", "year": "2016" }, { "authors": "M Savva; A X Chang; C D Manning; P Hanrahan", "journal": "", "ref_id": "b18", "title": "Transphoner: Automated mnemonic keyword generation", "year": "2014" }, { "authors": "V Siriganjanavong", "journal": "English Language Teaching", "ref_id": "b19", "title": "The mnemonic keyword method: Effects on the vocabulary acquisition and retention", "year": "2013" }, { "authors": "A Sutherland", "journal": "", "ref_id": "b20", "title": "Quizlet", "year": "" }, { "authors": "J Ye; J Su; Y Cao", "journal": "", "ref_id": "b21", "title": "A stochastic shortest path algorithm for optimizing spaced repetition scheduling", "year": "2022" }, { "authors": "B Zylich; A Lan", "journal": "", "ref_id": "b22", "title": "Linguistic skill modeling for second language acquisition", "year": "2021" } ]
[ { "formula_coordinates": [ 8, 140.99, 515.44, 130.72, 21.53 ], "formula_id": "formula_0", "formula_text": "-H 0 : µ M anual-II ≤ µ Auto-II -H a : µ M anual-II > µ Auto-II" } ]
2023-05-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b0", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b0", "b15", "b5", "b16", "b17", "b18", "b19", "b6" ], "table_ref": [], "text": "The ability of a design space to create a rich and valid set of design alternatives is a crucial component of shape optimisation pipelines, as it determines the quality and innovativeness of the solutions produced. Typically, design spaces result from the parametric modellers, which are pre-coded to parameterise the key features of a baseline design [1]. However, these modellers are built to produce solutions within the proximity of the baseline design, and thus, have several limitations. One such drawback is the limited ability of the resulting design spaces to support rich design exploration, leading to a lack of novel design solutions [2].\nMoreover, machine learning approaches have proven effective in bypassing the need for computational solvers by providing low-fidelity performance estimators trained offline with data from high-fidelity solvers or physical experimentation. However, until recently, the ability of these models to generate innovative solutions was limited. This limitation arose from the fact that they were only built to predict the performance criteria of designs coming from very narrow design spaces. Generative models [3] such as generative adversarial networks (GANs), variational auto-encoders (VAEs), diffusion models, and transformers have changed this by providing rich design spaces that allow for the creation of innovative shapes in addition to performance prediction.\nIn engineering design tasks, generative models are gaining attention for creating vast generative design spaces (GDSs). These models learn a set of latent features from the given training dataset of existing designs, which are used as design parameters to form GDSs. GDSs are not only low dimensional to expedite shape optimisation but, if properly trained, can also produce novel and valid design alternatives beyond the spectrum of the training dataset. Additionally, efforts are underway to enhance the quality of GDSs to make them physics-informed [4] and user-centred [5]. Physics-informed GDSs can leverage physical laws to ensure that generated designs satisfy certain performance criteria, while user-centred GDSs can incorporate user preferences and constraints to generate designs that are more aligned with the user's needs.\nAlthough GDSs have the potential to offer unprecedented design possibilities, their usability in real design scenarios is not yet fully understood. It is crucial to determine how GDSs can be best utilised without overwhelming designers while expediting the design process. For example, it is essential to study whether existing design exploration techniques, primarily designed to explore narrow design spaces generated by procedural parametric modellers [6,1], can be effectively applied to explore vast design spaces offered by generative models. Moreover, it is important to understand whether designers are willing to adopt GDSs in their design activities and, if so, what innovative design approaches and scenarios they can use to take full advantage of these diverse spaces.\nTo achieve this understanding, the present work investigates the most efficient ways of design exploration of GDSs. To this end, we first construct a GDS for complex engineering design problems, such as ship hull design, where parametric design plays a vital role. We create the GDS for hull design by training a custom GAN model, ShipHullGAN [7], on a large dataset of various ship types, including tankers, container ships, bulk carriers, tugboats, and crew supply vessels.\nWe then develop three design exploration modes with varying degrees of autonomy or designer involvement: random, semi-automated, and automated. The random exploration mode (REM) is a typical preliminary design phase, where designers independently explore the design space based on their intuition and expertise while considering performance. In the semi-automated exploration mode (SAEM), both the designer and optimiser collaborate to guide design exploration towards user-centred and optimised areas of GDS. Finally, the automated exploration mode (AEM) is the standard optimisation scenario where the optimiser is the primary driver and design space exploration occurs while taking performance into account.\nWith the above modes of exploration, we aim to understand how designers/naval architects perceive different modes of design exploration in the quest of generating novel design solutions from GDS. With the above research question in mind, during the study, we aim to mainly analyse the following:\n1. To what extent is each exploration mode effective in achieving diverse, novel and better-performing designs?\n2. Which factor is the key consideration for each exploration mode: form or performance?\n2 Background on ship design and optimisation Ship design is a complex and bespoke engineering process [8], which differs significantly from other design fields. Unlike other industries, there is no opportunity for full-scale testing, which means that designers have to rely heavily on digital design tools to create the most efficient and safe vessels possible [9]. In today's highly competitive world market, ships must be designed to meet high standards while also being delivered quickly. This requires a high degree of optimisation and customisation, as designers must balance numerous factors such as fuel efficiency, speed, safety, and cargo capacity [10,11]. The ultimate objective of ship design is to achieve the best performance for a given set of design criteria, which includes the vessel's intended use, the environmental conditions it will operate in, and the regulatory requirements that it must meet. Achieving these objectives requires a multidisciplinary approach that combines expertise in naval architecture, marine engineering, materials science, and other fields [12].\nTo expedite the design process, naval architects use extensively off-the-shelf parametric modelling tools. These tools are characterised by conservatism, for they are built to generate shapes lying in the neighbourhood of a successful baseline/parent shape [13]. Some relevant examples of such tools are presented in [14,15,1,16,6]. Next, these modellers are coupled with optimisers for improving the baseline shape against performance criteria (e.g., ship wave resistance, seakeeping, structural strength, etc.), which involve time-consuming simulations, e.g., computational fluid dynamics (CFD). At the end of the process, the new design is likely a local optimum whose shape is a minor variation of the existing one. While these approaches have proven effective for well-established ship types, there may be a need for more radical design ideas in certain situations. This can occur in situations where there are specific requirements that necessitate a more extensive exploration of the design space. Additionally, it may arise when there is a need to revolutionize and redesign existing ship types due to significant regulatory changes, such as the IMO 2020 emission reduction mandate, or the emergence of new disrupting technologies in the context of Industry 4.0 [17,18,19]. Such a strategy will benefit novel design tasks, e.g., special-purpose vessels, but it can also offer a competitive advantage for traditional players in the industry.\nConclusively, the coexistence of conservative parametric modellers with high-cost simulations and a large number of design parameters needed for shape optimisation of complex shapes leads to a non-efficient design approach. Such an approach can suffer from the curse of high dimensionality and a limited capability to explore design spaces efficiently for delivering variant, innovative, user-centred and truly optimal designs [20].\nTherefore, the ship design necessitates design approaches those bypass the dependence on the parent design and use more rational methods to create rich design spaces, i.e., design spaces resulting from the generative models, with the ability to formulate both conventional and non-conventional hull forms [7]." }, { "figure_ref": [], "heading": "Research methodology", "publication_ref": [], "table_ref": [], "text": "For this work, a study involving human subjects has been developed to quantitatively analyse the efficiency of three exploration modes for exploring GDS constructed using a custom GAN for the ship hull design. Firstly, we discuss the construction of the GDS and how it can be used in preliminary optimisation while being connected to a surrogate model to predict design performance. We then provide a detailed discussion on the different modes of exploration used to analyse the performance of GDS and how they differ in terms of optimisation and user involvement." }, { "figure_ref": [ "fig_6" ], "heading": "Creation of generative design space (GDS)", "publication_ref": [ "b5", "b0", "b6", "b0", "b19", "b20", "b6" ], "table_ref": [], "text": "There have been substantial efforts in computer-aided ship design for building robust parametric tools, but they can only handle a specific hull type [6,1]. Despite their efficiency in creating valid and smooth ship-hull geometries, they cannot be readily used to generate instances of ship types that deviate significantly from their target ship types.\nTherefore, in this work, we utilised, ShipHullGAN [7], a generic parametric modeller built using deep convolutional GANs. The training of ShipHullGAN is performed using a large and diverse dataset of existing hull geometries. We first extensively explored the literature on hull form optimisation and machine learning to identify various hull types. Ultimately, we selected 17 different parent hulls, including KCS2 , KVLCC23 , VLCC, JBC4 , DTC, DTMB 5 , and others from the FORMDATA series. We then created 3,000 synthetic variations of these hulls using the parametric approach described in [1]. The length, beam, and width of these designs were kept constant, while non-dimensional parameters between 0 and 1 were used to create shape variations. For the FORMDATA series, 5000 design variations were created systematically with respect to the characteristic parameters like midship section-area coefficient c M and the block coefficients c BA and c BF of the aft and fore parts of the ship, respectively. This synthetic and systematic design creation resulted in 56,000 designs. Subsequently, in order to establish a reliable training dataset, designs undergo validation using a blend of geometry-and physics-oriented quality filters to assess the viability of each design. Geometry-based filters are employed to ascertain geometric validity, ensuring the absence of self-intersecting surfaces in all designs. Conversely, physics-based filters are utilised to verify that the performance of each design can be accurately predicted by solvers without any potential collapse. Following this rigorous design validation process, a total of 52,591 design variations, which have been both geometrically and physically validated, are obtained for training the ShipHullGAN model.\nThe design dataset to the ShipHullGAN is inputted in the form of a shape-signature vector (SSV), which consists of a shape modification function and geometric moments. SSV acts as a unique descriptor of each dataset design instance [20,21]. The inclusion of geometric moments enables the extraction of meaningful features that are not only geometry-driven but also physics-informed. Using geometric moments along with the shape increases the chances of creating a large number of geometrically valid shapes, as adding moments gives a rich set of information about the geometry. More importantly, a strong correlation between ship physics and geometric moments also induces the notion of physics in the extracted latent features. Thus, the resulting features have not only the ability to form a compact but also a physics-informed design, ensuring high-quality valid designs.\nThe ShipHullGAN uses deep convolutional neural networks for both generator (G) and discriminator (D) components to capture sparsity in the training dataset, along with a space-filling term in the loss function to enhance diversity. D consists of 6 convolutional layers and a dropout layer, with a sigmoid activation function in the last convolutional layer to determine if the design is real or fake. G is the transpose of D and has 5 transposed convolutional layers, with an input layer that takes randomly sampled design, x and reshapes it. Both G and D use batch normalisation and ReLU activation functions. Training is performed using the Adam gradient descent algorithm with specific settings and performed on a computer with a dual 24-core 2.7GHz Intel Xeon 6 Gold 6226 CPU, NVIDIA Quadro RTX 6000 GPU, and 128GB of memory.\nOnce the training is completed, the generator component of the ShipHullGAN model is used as a generic parametric modeller. This provides a rich 20-dimensional GDS, which facilitates users in exploring design variations for a wide range of ship hulls. The resulting design variations include both traditional and unconventional forms, as shown in Figure 1. Interested readers should refer to [7] for details on the training of ShipHullGAN and the construction of 20-dimensional GDS." }, { "figure_ref": [ "fig_6" ], "heading": "Optimisation", "publication_ref": [ "b21" ], "table_ref": [], "text": "For the three modes of exploration, a simple optimisation problem is formulated. The problem aims to explore the 20-dimensional GDS resulting from the ShipHullGAN parametric modeller to create a container ship with a load-carrying capacity of 3600 TEU (Twenty-foot equivalent unit) while minimising its wave-making resistance/drag (C w ). This optimisation problem can be written in the following setting: (1)\nFind x * ∈ R 20 such that C w (x * ) = min x∈X C w (x)\nThe design constraints in the above equation are set to obtain physically plausible variations of the hull designs. The physical criterion, C w , is part of the overall resistance affecting the movement of objects on or near the free surface of oceans, lakes and rivers. It reflects the energy spend on creating the free-surface waves following the moving body [22]. Although the overall resistance of the ship is composed of different components, C w is a vital component and especially prominent for relatively full hull forms travelling at high speeds. It is noteworthy that C w is highly Figure 1: Design variations created with the proposed parametric modeller. These design variations can be visualised at https://youtu.be/avlq0FxZP-s and https://youtu.be/ZIfmAs5-qFw sensitive to local features of the hull so that a significant reduction can be achieved without affecting the overall cargo capacity. C w is affected by the distribution of the hull's shape, and minimising it at the preliminary design stage is crucial, but its evaluation can be highly computationally demanding." }, { "figure_ref": [], "heading": "Performance evaluation", "publication_ref": [ "b22", "b23", "b24", "b25" ], "table_ref": [], "text": "To expedite the optimisation process and reduce user fatigue resulting from long simulation run times, we developed a surrogate model that predicts C w values for designs using Gaussian Process regression (GPR) [23]. GPR is a nonparametric Bayesian approach that has been used in various design applications. It maps the globally-coupled, nonlinear relationship between inputs and outputs sampled from a theoretically infinite-dimensional normal distribution and any finite number of input-space samples that follow a corresponding joint (multivariate) Gaussian distribution. The main advantages of GPR over other modelling techniques are that it can: (1) map the input-output relationship with small data size, (2) handle noise in the data easily, thus avoiding over-fitting, and (3) optimise hyperparameters from training data to increase the fit accuracy.\nTo develop a reliable GPR model, we sampled 10,000 designs using the dynamic propagation sampling technique [24], which ensures that designs are evenly distributed in the design space covering all the design possibilities a given design has to offer. For evaluating C w values of the designs in the training dataset, we performed hydrodynamic simulations using a software package based on linear potential flow theory using Dawson (double-model) linearisation, with details of the employed formulation, the numerical implementation, and its validation appearing in [25]. As a result of using simple Rankine sources, the computational domain consists of a part of the undisturbed free surface, extending 1Lpp upstream, 3Lpp downstream, and 1.5Lpp sideways, with Lpp denoting the length between perpendiculars for the assessed ship hull. A total of [20 × 70] grid points are used for the undisturbed free surface, whereas [50 × 180] grid points are used for the hull discretisation with the simulation being performed at a Froude number F r equal to F r = U/ √ gL = 0.28, where g is the acceleration due to gravity, and L is the ship's length. Readers can refer to [26] for details on the construction of the surrogate model with GPR." }, { "figure_ref": [], "heading": "Experiment procedures", "publication_ref": [], "table_ref": [], "text": "The study is composed of three generative design exploration modes, random, semi-automated and automated design exploration, with varying levels of user involvement while providing them with a different level of autonomy. In the following section, we discuss in detail all the exploration modes." }, { "figure_ref": [ "fig_1" ], "heading": "Random exploration mode (REM)", "publication_ref": [ "b26", "b27", "b28", "b29" ], "table_ref": [], "text": "REM is based on a typical random design exploration approach [27,28,29], where the user manually explores GDS for novel and better-performing designs based on their intuition. However, as GDS has 20 dimensions, the exploration needs to be organised and user-friendly since exploring each of the 20 parameters individually can be cognitively taxing. Therefore, to streamline the exploration process, we first randomly sample a set of 30,000 designs from the GDS that satisfy all the design constraints in Eq. ( 1). As designers can explore designs well when the dimensionality of the space is low, the sampled designs are projected onto a 2-dimensional space using t-distributed stochastic neighbour embedding (t-SNE) [30]. This statistical method allows for visualising high-dimensional data by giving each data point a location in a 2-or 3-dimensional map that indicates the distribution of designs. The projection of the randomly sampled designs onto a 2-dimensional space is shown in Figure 2, where their boundary is evaluated using the convex hull, shown using a black curve. During the design exploration process, users can evaluate the C w value of each design to balance performance and novelty. However, to avoid biasing users towards physics-based designs only, we do not display the performance in real-time. Once a user discovers a novel design, they can evaluate its performance by clicking on the \"evaluate C w \" button. Each user is randomly assigned a set of 30,000 designs and asked to select 5 preferred designs during the exploration process. The design selection process aims to identify a design that is both novel and optimised. A design may be considered novel if it visually differs from the designs that the user has previously seen, designed, or worked with. An optimised design has the least C w . Therefore, the objective is to find a design that is both novel and optimised, with distinct features and minimal C w .\nThe design preview window also allows participants to visualise the design in 3D. Users can rotate, zoom, and pan designs to analyse their features thoroughly. Additionally, participants can overwrite a previously selected design. On each design selection, the user is asked what dictated their selection -the form (i.e., design novelty), performance, or a combination of both. Once users select all five designs, they can terminate and conclude this phase of the study." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Semi-automated exploration mode (SAEM)", "publication_ref": [ "b15", "b15", "b0" ], "table_ref": [], "text": "During this mode of exploration, the user and optimiser collaborate to explore GDS for the generation of novel and optimised designs based on the user's intuition and performance. The overall workflow of SAEM is shown in Figure 3. The optimiser works on exploring a diverse set of optimised designs, while the user induces their preferences to guide the exploration towards user-centred regions of GDS. Optimisation in this mode is performed based on Khan et al.'s [16] approach, which provides an innovative way to explore GDS and generate well-diverse design alternatives. This approach commences the design exploration with N number of uniformly distributed designs, where each design represents a particular location in GDS. These designs are then shown to the user along with their C w values. Afterwards, users select the designs according to the designs' overall form appearance and physics.\nThis interaction step allows users to compare designs and make appropriate design decisions. Once the desired hull form is selected, the design space is refined based on the selected design. The refined design space is then imported into the optimiser to generate new N uniformly distributed designs for the next interaction step.\nDuring this process, the designs generated for each interaction should reflect the user's design selection at the previous interaction, so at the end of the interactive process, the user is able to generate a preferred design. In this work, this is achieved by refining the input design space at each interaction while taking into account the user's design preference. A Space Shrinking Technique (SST) [16] is utilised, which detects non-potential regions based on the selected designs and then removes these regions to create a new design space. In other words, at each interaction, SST shrinks the design space towards user-preferred designs and removes regions containing non-preferred designs. This helps the search process to focus the computational effort on the exploration of user-preferred regions of design space.\nThe interactive process continues until the user arrives at a design with the desired characteristics. At each design selection, the system asks the user what factors influenced their selection, whether it was performance, form, or a combination of both. The user is permitted to perform between 16-25 interactions, which has been found to be an appropriate number to achieve convergence, meaning that no distinct designs are being created. This mode of exploration is based on typical shape optimisation [1]. Its pipeline is shown in Figure 4, which connects the GDS, generator (i.e., parametric modeller), and surrogate model for C w to an appropriate optimiser. During the exploration, the optimiser explores the GDS based on the outcome from the surrogate model, thereby guiding the exploration towards potentially obtaining the global optima while satisfying a given set of constraints." }, { "figure_ref": [], "heading": "Automated exploration mode (AEM)", "publication_ref": [], "table_ref": [], "text": "Generator\nFor the optimisation, we utilised a metaheuristic optimiser, Jay Algorithm (JA), a simple yet efficient approach that does not require any tuning parameters to reach a potentially global solution. JA commences the optimisation with a set of randomly sampled solutions, whose location is improved over a set of iterations. In each iteration, these solutions are moved towards global optima while minimising the following objective function.\nmin x∈X F = γ 1 C w + γ 2 n i=1 ||x u -x i ||(2)\n.\nThe above equation is defined as the weighted sum of two terms. The first term is C w and the second term is added to induce a notion of user preference during design exploration, which, therefore, is defined as the closeness of the new designs with the previously selected design by the user, x u . The weights γ 1 and γ 2 can be varied between 0 and 1 and set by the user in real-time during exploration. However, initially, we commence the exploration with γ 1 = 0.7 and γ 2 = 0.3, giving 70% weightage to C w and 30% to the closeness/similarity of newly created designs to the previously selected design.\nIn our case, since our solver relies on a surrogate model, running many design iterations is not computationally expensive. Therefore, we begin the optimizer with 50 design solutions, which increases the likelihood of finding a good solution. In each iteration, we present the user with the top n = 5 designs that minimise the objective function in Equation ( 2). It is important to note that during the first iteration γ 2 = 0 as there is no preferred design selected by the user. However, starting from the second interaction, participants select a design based on its novelty and performance and adjust the weightage of the objective function accordingly. This process continues in a similar fashion to the previous mode for 16-25 interactions." }, { "figure_ref": [ "fig_5" ], "heading": "Population and recruitment", "publication_ref": [], "table_ref": [], "text": "Figure 5 show the graphical user interface created in MATLAB®6 using the above-described exploration approaches. In total, 20 participants were recruited for the experiment following a protocol approved by the Institutional Review Board of the University of California and the ethical panel of the University of Strathclyde. All participants were final-year undergraduate students who had taken a Naval architecture course, and on average, they reported 3-4 years of experience in ship design. Participants were offered £30 as compensation for their participation. The average age of the participants was 25, with 30% female and 70% male participants. On average, designers reported that they equally value form (i.e. design novelty) and performance in their ship design practice.\nThe experiments were conducted virtually on Amazon Web Services. Prior to participation, informed consent was obtained from all participants via Google Forms. Participants received an email with step-by-step instructions on the experiment and were assigned 40 minutes to complete it, although they were allowed to take longer. Participants were also informed that there were no right or wrong answers and that their task was to explore the design in a Human-AI design setting using their experience, intuition, and the given directions.\nThe study did not capture any identifying information about the participants. All three modes of exploration were randomly assigned to the participants, meaning that one participant may perform REM first while another participant performs SAEM first. Once the participants completed all three modes of exploration, they were asked to fill out a post-experiment questionnaire designed to gain more insight into the results of the study. The questionnaire contained five questions, which were as follows:\nQ1 What mode of exploration helped to: " }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In this section, we extensively analyse the outcomes of the user study." }, { "figure_ref": [], "heading": "Design Histories", "publication_ref": [ "b30", "b3", "b31" ], "table_ref": [], "text": "By combining both design histories and final outcomes, it is possible to evaluate the effectiveness of an exploration mode, as described in [31]. In this study, a design history includes:\n1. Overall time spent by participants in each mode.\n2. Time spent on each design.\n3. Number of designs explored in each mode. 4. Performance of all the explored and user-preferred designs.\n5. Indicators for selecting preferred designs: performance, novelty, or a combination of both.\nThe data relating to these design histories were collected in real-time as the participants performed the study. At the end of the study, the results were automatically sent to the cloud. Among the design histories mentioned above, the key parameters to understand the significance of each mode of exploration are the overall time spent during each mode of exploration, the diversity of the selected designs, and their performance. For example, the most efficient mode of exploration is one in which participants extensively explore the GDS to find diverse yet optimised solutions within a short amount of time. In addition to the above histories, we also measure some mode-specific histories such as the location of designs explored during REM to identify if participants tend to cover the entire design space during exploration. Furthermore, we store the weightage of the two terms of the objective function in Eq. ( 2) during AEM. These design histories can reveal specific behaviours demonstrated by participants during each mode of exploration [32]." }, { "figure_ref": [ "fig_7" ], "heading": "Analyses of design histories", "publication_ref": [], "table_ref": [], "text": "Here we first analyse the three key elements of design histories related to the overall time spent, diversity of the explored designs and their quality (i.e., their performance) to gain insight into the behaviour of the participants during the three modes of exploration. Figure 6 shows the total time spent by the participants during each mode of exploration. Interestingly, among the three modes of exploration, participants spent less time in REM, while there was no significant difference between SAEM and AEM. On average, participants completed REM, SAEM, and AEM in 5, 11, and 10 minutes, respectively. It was expected that during REM, participants would take more time to find an innovative and optimised design. However, within REM, participants on average explored 1630 designs within the least amount of time. Another interesting finding was that participants who took less time to complete REM explored more designs, while participants who took more time explored fewer designs. For example, one participant examined 300 designs in 6.4 minutes, whereas another participant explored 6,776 designs in 4.5 minutes. It is important to note that the latter participant's performance can be considered an outlier. Nonetheless, the average time spent on each design, which is shown in Figure 7, did reveal an interesting trend: participants who took more time exploring fewer designs spent, on average, more time on each design. The time spent on each design was measured as the time taken to move to a different design from the design that was currently on the viewing window. In other words, it was the time taken when a design was created to the time when it was replaced by a new design. If no new design was created, it meant that the designer was currently analysing the current design, i.e., they were evaluating its performance and/or analysing its feature for novelty. On average, participants spent 1.4 seconds on each design during REM. Figure 8 provides the distribution of the total number of designs explored during each mode of exploration. During REM, participants explored an average of 1630 designs. However, among 20 participants, one participant explored 6,776 designs, which is significantly higher compared to the other participants and can be considered an outlier. If we exclude this outlier, the average number of designs explored by the remaining participants in REM is 601. In contrast, the total number of designs explored during SAEM and AEM was less than REM because, in these modes, participants can explore a set of five designs over 16 to 25 interactions, resulting in a total of 80 to 125 designs. As explained earlier, this number was chosen based on pre-analysis to ensure that participants could explore the design space without experiencing cognitive overload. On average, participants explored 106 and 118 designs in SAEM and AEM, respectively. These results indicate that within SAEM, participants were able to quickly scan the GDS with the least number of designs, taking approximately the same time as in AEM, which has the highest level of design exploration automation." }, { "figure_ref": [], "heading": "Overall time taken", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_6" ], "heading": "Diversity of preferred designs", "publication_ref": [ "b31", "b32" ], "table_ref": [], "text": "In this subsection, we analyse the diversity of the preferred designs during the three modes of exploration. For REM, the diversity is measured between the five final selected designs, and for SAEM and AEM, it is measured between the design selected as preferred designs during each interaction. This analysis addresses the effect of partially performance-driven design exploration and its impact on creativity for the creation of novel hull forms. While creativity and design novelty can be defined in different ways, it is generally assumed that measurements of diversity correspond with increased relative freedom, while the tendency towards standard solutions indicates less creative freedom [32]. The diversity measure aims to specifically understand if giving the performance as a criterion for exploration biases participants to increase novelty or if it influences participants to still focus only on the performance, as in the typical design exploration setting.\nDiversity in this work is evaluated with the sparseness at the centre (SC) [33] criterion, which measures the average distance of the centroidal design, x centroid , of the preferred design, to the preferred designs resulting during the exploration of GDS.\nSC = 1 n n i=1 ||x centroid -x i || 2(3)\nAlthough the absolute units of SC measurement are meaningless, as they represent the distance between designs, the relative values from the different modes of exploration provide a worthwhile comparison. Figure 9 shows the SC measure of the designs explored by the participants in all three modes of exploration. It is interesting to note that the diversity of the preferred designs in REM is significantly higher compared to the other two modes. AEM has significantly lower diversity, indicating that designs are highly influenced by performance without much focus on diversity, even when the objective function includes a term to induce a human preference for novelty (see Eq. ( 2)). Figure 10 shows the average values of Cw of the preferred design resulting from all three modes. It is noteworthy that designs resulting from SAEM, on average, perform better compared to AEM, which is highly performancedriven. However, designs resulting from REM are diverse but do not perform well. In conclusion, participants find better performing and diverse designs with SAEM while exploring fewer designs compared to AEM and REM." }, { "figure_ref": [ "fig_10" ], "heading": "Performance vs novelty", "publication_ref": [], "table_ref": [], "text": "During the three modes of exploration, most participants tended to select the preferred design based on both performance and form novelty. However, in REM, participants cared more about form novelty, while in the other two modes, they prioritised performance. Interestingly, the indication towards performance was higher in SAEM, which could be the reason for the better-performing preferred designs resulting from SAEM.\nAnother point worth noting is that at the start of the study, we asked the participants to give their opinion on whether they care more about performance or novelty during a typical design process. On average, they indicated an equal preference for both novelty and performance. In this subsection, we discuss the results of the questionnaire conducted to evaluate participants' perceptions of RME, SAME and AME exploration modes discussed in Section 3.5. The results of this questionnaire are shown in Figure 11. It can be seen that in Q1.1, 60% of participants reported finding the most novel design ideas within REM, while only 15% found AEM to be useful for discovering novel designs. However, in Q1.2, 40% of the participants believed that AEM produced better-performing designs. The remaining 35% and 25% of participants found REM and SAEM, respectively, to be more effective at producing better-performing designs. In response to Q1.3, which asked about the mode that provided the exploration of both diverse and better-performing designs, 55% of participants preferred SAEM, while 25% and 20% preferred REM and AEM, respectively." }, { "figure_ref": [ "fig_10", "fig_10" ], "heading": "Survey results", "publication_ref": [], "table_ref": [], "text": "From Figure 11 it can be seen that in Q2.1, 60% of participants indicated that novelty was the driver of design selection in REM. Additionally, in Q2.2, 40% of participants indicated that performance was also a driver of design selection in REM, although this result did not deviate significantly from those of SAEM and AEM. The results of Q2.3 were particularly interesting, as 50% of participants found SAEM to be a mode where design selection was driven equally by both performance and novelty. It is worth noting that the trend observed in the Q2 questions is consistent with that of the Q1 questions.\nFor Q3, Q4 and Q5, the results in Figure 11 show that 70% of the participants found REM to be the most engaging mode, whereas only 10% of the participants indicated it as the least engaging mode. Perhaps these are the participants who value design performance significantly more than design novelty. Overall, participants least favoured the AEM, mainly due to the lack of design novelty." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [], "text": "This section provides a detailed discussion of the key findings of this study, specifically focused on identifying an effective exploration approach for GDS." }, { "figure_ref": [], "heading": "5.1", "publication_ref": [], "table_ref": [], "text": "To what extent is each exploration mode effective in achieving diverse, novel and better-performing designs?\nFrom the results discussed in Section 4, it can be concluded that REM provides a user-engaging approach that results in the exploration of novel design solutions within the shortest possible design exploration time. Interestingly, even with a short exploration time, participants are able to scan the GDS effectively and find a diverse set of design alternatives. Although designs resulting from this mode are diverse, as expected, they are not efficient from a performance perspective. On the other hand, AEM, which is primarily driven by performance, has the least diversity. One would expect that even if the designs are not diverse, they must perform better as the optimisation is solely driven by performance. However, designs resulting from AEM have slightly lower performance on average compared to the designs resulting from SAEM.\nIn conclusion, while design spaces resulting from typical parametric approaches may yield better-performing designs, the key benefit of GDSs lies not only in better performance but also in the ability to generate novel designs that possess non-conventional features and do not currently exist in the market. To fully leverage the potential of GDSs, it is necessary to focus not only on enriching these spaces but also on exploring them in an effective manner. This study highlights that the commonly used random exploration (REM) and optimisation-based exploration (AEM) approaches are not optimal for GDSs. REM prioritises novelty, while AEM prioritises performance. To strike a balance between these objectives, hybrid and intuitive exploration approaches such as SAEM are needed. SAEM involves both the user and the optimiser at the same level, where users leverage their design expertise to explore novel solutions, and the optimiser focuses on enhancing the performance of the user-preferred designs.\n5.2 Which factor is the key consideration for each exploration mode: form or performance?\nMoreover, this study revealed that although participants initially stated they aim to balance both novelty and performance in their design tasks. However, during the design exploration in REM, SAEM and AEM they tended to prioritise performance over novelty. This may be due to various factors. If this trend persists, users may be biased toward prioritising performance and fail to utilise GDSs to their full potential. However, this fact is also centred on the type of exploration approach used. For example, in REM, design exploration and selection of preferred design was driven by the form, whereas in AEM design selection was mainly driven by the performance. However, SAEM, aimed at balancing both performance and novelty, does well to engage participants in design exploration. Therefore, designs resulting from this approach are diverse as well as better performing. Furthermore, in addition to hybrid exploration methods, there is a need for more engaging design interfaces. Our questionnaire results indicate that participants found REM to be more engaging compared to other exploration modes.\nIn summary, from this study, it can be concluded that as the design space are become more and more diverse, thanks to generative models, we need also an innovative approach for their efficient exploration, as the traditional way of design exploration, made originally for the narrow design spaces, cannot be beneficial to truly exploit the potential of GDSs." }, { "figure_ref": [], "heading": "Concluding remarks", "publication_ref": [], "table_ref": [], "text": "In this work, we aimed to evaluate the effectiveness of different design exploration approaches for exploring generative design spaces resulting from generative models such as generative adversarial networks. To achieve this, we constructed a generative design space for the ship hull design and optimisation problem. We trained a custom generative model on a large dataset of physically and geometrically validated designs and then used the generator component of the model as a parametric modeller to generate a diverse 20-dimensional design space. We explored this space using three different approaches: REM, SAEM, and AEM, each with different levels of user involvement and algorithmic autonomy.\nThe REM is a random exploration mode in which the user explores a 2-dimensional projection of the generative design space. SAEM is a mode that is spontaneously driven by both the user and the optimiser with the same level of involvement. In this mode, the optimiser focuses on exploring a diverse set of uniformly distributed and optimised designs (i.e., designs with low C w ) from the generative design space, while the user directs the exploration towards the region of design space containing their preferred designs. AEM is a typical shape optimisation mode in which the design space is connected to an optimiser and performance evaluation code that guides the optimiser in finding a global optimum. To incorporate a user's preference, the objective function for this mode is the weighted sum of C w and the similarity of newly created designs to the user's previously selected designs.\nThe results of this study showed that the highest design diversity occurred during REM, followed by SAEM and AEM, whereas better-performing designs were found within AEM and SAEM. However, SAEM outperforms REM and AEM in terms of exploring designs that have a significantly high trade-off between novelty and performance. The study results also showed that participants are adept at exploring novelty and that their subconscious directs them to prefer novel design alternatives. However, when performance is brought into the exploration, they immediately tend to select based on performance." }, { "figure_ref": [], "heading": "Future work", "publication_ref": [], "table_ref": [], "text": "In the future, we aim to scale up our investigation by enriching the population of the subjects involved with a) designers covering the whole spectrum of expertise (low to high), b) design-users covering the whole lifecycle, e.g. shipyards, ship-owners, operators, and c) designers acting in other transportation (automotive, aerospace) industries." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work received funding from: 1. the Royal Society under the HINGE (Human InteractioN supported Generative modEls for creative designs) project via their International Exchanges 2021 Round 2 funding call, and 2. the European Union's Horizon 2020 research and innovation programme under the Marie Sk lodowska-Curie grant GRAPES (learninG, pRocessing And oPtimising shapES) agreement No 860843." } ]
Typical parametric approaches restrict the exploration of diverse designs by generating variations based on a baseline design. In contrast, generative models provide a solution by leveraging existing designs to create compact yet diverse generative design spaces (GDSs). However, the effectiveness of current exploration methods in complex GDSs, especially in ship hull design, remains unclear. To that end, we first construct a GDS using a generative adversarial network, trained on 52,591 designs of various ship types. Next, we constructed three modes of exploration, random (REM), semi-automated (SAEM) and automated (AEM), with varying levels of user involvement to explore GDS for novel and optimised designs. In REM, users manually explore the GDS based on intuition. In SAEM, both the users and optimiser drive the exploration. The optimiser focuses on exploring a diverse set of optimised designs, while the user directs the exploration towards their design preference. AEM uses an optimiser to search for the global optimum based on design performance. Our results revealed that REM generates the most diverse designs, followed by SAEM and AEM. However, the SAEM and AEM produce better-performing designs. Specifically, SAEM is the most effective in exploring designs with a high trade-off between novelty and performance. In conclusion, our study highlights the need for innovative exploration approaches to fully harness the potential of GDS in design optimisation.
How does agency impact human-AI collaborative design space exploration? A case study on ship design with deep generative models
[ { "figure_caption": "subject to: given cargo capacity (3600 TEU); 51120.5m 3 ≤ Volume of displacement ≤ 56501.6m 3 ; 220.9m ≤ Length at waterline ≤ 244.2m; 30.6m ≤ Beam at waterline ≤ 33.8m; 10.3m ≤ Draft ≤ 11.3m.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: 2D t-SEN plot of designs generated from the ShipHullGAN model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "ShowFigure 3 :3Figure 3: Workflow of semi-automated exploration mode.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Workflow of automated exploration mode.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Graphical user interfaces of all three exploration modes.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Q1. 11explore diverse designs Q1.2 explore better-performing designs Q1.3 explore a mix of diverse and better-performing designs Q2 During the exploration preferred design selection is driven by: Q2.1 design novelty (i.e., distinctive form features) Q2.2 design performance Q2.3 design novelty and performance Q3 The most engaging mode of exploration Q4 The least engaging mode of exploration Q5 Overall preferred mode of exploration", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Distribution of total time spent during each mode of exploration.", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :Figure 8 :78Figure 7: Distribution of average time spent during each design during the REM and average time spent on a set of five designs during each interaction of SAEM and AEM.", "figure_data": "", "figure_id": "fig_8", "figure_label": "78", "figure_type": "figure" }, { "figure_caption": "Figure 9 : 3 Figure 10 :9310Figure 9: Distribution of the diversity of designs explored by participants in all three modes of exploration.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9310", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Summary of the questionnaire results from all the participants.", "figure_data": "", "figure_id": "fig_10", "figure_label": "11", "figure_type": "figure" } ]
Shahroz Khan; Panagiotis Kaklis; Kosa Goucher-Lambert
[ { "authors": "K Kostas; A Ginnis; C Politis; P Kaklis", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b0", "title": "Ship-hull shape optimization with a T-spline based BEM-isogeometric solver", "year": "2015" }, { "authors": "W Chen; F Ahmed", "journal": "Journal of Mechanical Design", "ref_id": "b1", "title": "Padgan: Learning to generate high-quality novel designs", "year": "2021" }, { "authors": "L Regenwetter; A H Nobari; F Ahmed", "journal": "Journal of Mechanical Design", "ref_id": "b2", "title": "Deep generative models in engineering design: A review", "year": "2022" }, { "authors": "L Yang; D Zhang; G E Karniadakis", "journal": "SIAM Journal on Scientific Computing", "ref_id": "b3", "title": "Physics-informed generative adversarial networks for stochastic differential equations", "year": "2020" }, { "authors": "A M Chaudhari; D Selva", "journal": "Journal of Mechanical Design", "ref_id": "b4", "title": "Evaluating designer learning and performance in interactive deep generative design", "year": "2023" }, { "authors": "S Khan; E Gunpinar; K M Dogan", "journal": "Ocean Engineering", "ref_id": "b5", "title": "A novel design framework for generation and parametric modification of yacht hull surfaces", "year": "2017" }, { "authors": "S Khan; K Goucher-Lambert; K Kostas; P Kaklis", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b6", "title": "ShipHullGAN: A generic parametric modeller for ship hull design using deep convolutional generative model", "year": "2023" }, { "authors": "N D Charisi; H Hopman; A Kana", "journal": "", "ref_id": "b7", "title": "Early-stage design of novel vessels: How can we take a step forward?", "year": "2022" }, { "authors": "H Nowacki", "journal": "Computer-Aided Design", "ref_id": "b8", "title": "Five decades of computer-aided ship design", "year": "2010" }, { "authors": "H M Gaspar; D H Rhodes; A M Ross; S Ove Erikstad", "journal": "Journal of Ship Production and Design", "ref_id": "b9", "title": "Addressing complexity aspects in conceptual ship design: a systems engineering approach", "year": "2012" }, { "authors": "A Ebrahimi; P O Brett; S O Erikstad; B E Asbjørnslett", "journal": "Journal of Ship Production and Design", "ref_id": "b10", "title": "Influence of ship design complexity on ship design competitiveness", "year": "2021" }, { "authors": "A Papanikolaou", "journal": "Computer-Aided Design", "ref_id": "b11", "title": "Holistic ship design optimization", "year": "2010" }, { "authors": "S Khan; E Gunpinar; M Moriguchi", "journal": "Proceedings of CAD", "ref_id": "b12", "title": "Customer-centered design sampling for cad products using spatial simulated annealing", "year": "2017" }, { "authors": "A Ginnis; K Kostas; C Feurer; K Belibassakis; T Gerostathis; C Politis; P Kaklis", "journal": "", "ref_id": "b13", "title": "A CATIA®ship-parametric model for isogeometric hull optimization with respect to wave resistance", "year": "2011-09-22" }, { "authors": "S Khan; E Gunpinar; K Mert Dogan; B Sener; P Kaklis", "journal": "", "ref_id": "b14", "title": "ModiYacht: Intelligent cad tool for parametric, generative, attributive and interactive modelling of yacht hull forms", "year": "2022" }, { "authors": "S Khan; E Gunpinar; B Sener", "journal": "Ocean Engineering", "ref_id": "b15", "title": "Genyacht: An interactive generative design system for computer-aided yacht hull design", "year": "2019" }, { "authors": "D Kaklis; T Varelas; I Varlamis; P Eirinakis; G Giannakopoulos; C V Spyropoulos", "journal": "Society of Naval Architects and Marine Engineers (SNAME)", "ref_id": "b16", "title": "From steam to machine: Emissions control in the shipping 4.0 era", "year": "2023" }, { "authors": "I Citaristi", "journal": "Routledge", "ref_id": "b17", "title": "United nations conference on trade and", "year": "2022" }, { "authors": "T.-H Joung; S.-G Kang; J.-K Lee; J Ahn", "journal": "Journal of International Maritime Safety, Environmental Affairs, and Shipping", "ref_id": "b18", "title": "The imo initial strategy for reducing greenhouse gas (ghg) emissions, and its follow-up actions towards 2050", "year": "2020" }, { "authors": "S Khan; P Kaklis; A Serani; M Diez; K Kostas", "journal": "Computer-Aided Design", "ref_id": "b19", "title": "Shape-supervised dimension reduction: Extracting geometry and physics associated features with geometric moments", "year": "2022" }, { "authors": "S Khan; P Kaklis; A Serani; M Diez", "journal": "Computer-Aided Design", "ref_id": "b20", "title": "Geometric moment-dependent global sensitivity analysis without simulation data: application to ship hull form optimisation", "year": "2022" }, { "authors": "V Bertram", "journal": "Elsevier", "ref_id": "b21", "title": "Practical ship hydrodynamics", "year": "2011" }, { "authors": "E Schulz; M Speekenbrink; A Krause", "journal": "Journal of Mathematical Psychology", "ref_id": "b22", "title": "A tutorial on gaussian process regression: Modelling, exploring, and exploiting functions", "year": "2018" }, { "authors": "S Khan; P Kaklis", "journal": "Advanced Engineering Informatics", "ref_id": "b23", "title": "From regional sensitivity to intra-sensitivity for parametric analysis of free-form shapes: Application to ship design", "year": "2021" }, { "authors": "P Bassanini", "journal": "Surv Math Ind", "ref_id": "b24", "title": "The wave resistance problem in a boundary integral formulation", "year": "1994" }, { "authors": "S Khan; A Serani; M Diez; P Kaklis", "journal": "", "ref_id": "b25", "title": "Physics-informed feature-to-feature learning for design-space dimensionality reduction in shape optimisation", "year": "2021" }, { "authors": "M Bole", "journal": "Ship Technology Research", "ref_id": "b26", "title": "Interactive hull form transformations using curve network deformation", "year": "2011" }, { "authors": "E Fuchkina; S Schneider; S Bertel; I Osintseva", "journal": "eCAADe", "ref_id": "b27", "title": "Design space exploration framework", "year": "2018" }, { "authors": "S Krish", "journal": "Computer-Aided Design", "ref_id": "b28", "title": "A practical generative design method", "year": "2011" }, { "authors": "L Van Der Maaten; G Hinton", "journal": "Journal of machine learning research", "ref_id": "b29", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "K Girotra; C Terwiesch; K T Ulrich", "journal": "Management science", "ref_id": "b30", "title": "Idea generation and the quality of the best idea", "year": "2010" }, { "authors": "N C Brown", "journal": "Design studies", "ref_id": "b31", "title": "Design performance and designer preference in an interactive, data-driven conceptual building design scenario", "year": "2020" }, { "authors": "N C Brown; C T Mueller", "journal": "AI EDAM", "ref_id": "b32", "title": "Quantifying diversity in parametric design: a comparison of possible metrics", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 152.89, 577.69, 119.61, 31.6 ], "formula_id": "formula_0", "formula_text": "Find x * ∈ R 20 such that C w (x * ) = min x∈X C w (x)" }, { "formula_coordinates": [ 7, 228.15, 64.67, 324.61, 30.32 ], "formula_id": "formula_1", "formula_text": "min x∈X F = γ 1 C w + γ 2 n i=1 ||x u -x i ||(2)" }, { "formula_coordinates": [ 10, 237.44, 214.57, 315.31, 30.32 ], "formula_id": "formula_2", "formula_text": "SC = 1 n n i=1 ||x centroid -x i || 2(3)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b26", "b28", "b10", "b20", "b24", "b28", "b1", "b21", "b8" ], "table_ref": [], "text": "Photorealistic facial reenactment research has achieved remarkable improvements in both perceptual quality and frame consistency. Current deep-learning based facial reenactment methods [4, 26,27,29] can synthesize high-quality videos of a talking face, using a target human identity and animation sources. Such systems hold great potential for various applications such as telepresence and virtual avatar generation.\nDespite these advancements, current talking head models have limitations in controlling facial content. Most methods require target video sequences that should exactly contain the desired head pose 1 , usually represented in the form of facial landmarks [11,21,25,29]. One intuitive method of pose control is to let the user upload their facial videos to the system, but for real-world services, many users felt uncomfortable sharing their own videos and preferred to use existing video sources. This, in turn, places an additional burden on the user, as they must find and provide such video sources.\nTo enable better control over the facial domain, several methods focused on providing facial synthesis in the form of three-dimensional, user-friendly control (also known as face rig). Those models focused on using a three-dimensional morphable face model (3DMM) [2], which allows the user to control over various facial semantic parameters, such as identity, expressions, texture, illuminance, and head orientation. However, most 3DMM-based researches are focused on generating 3D animations instead of photorealistic videos. One main reason is that 3DMMs tend to be bound by the lack of facial training data, which requires complex 3D facial scanning, resulting in a lack of photorealism.\nTo be best of our knowledge, StyleRig [22] is the first model to provide rig-like control over photorealistic portrait images, by adding 3DMM's semantic controllability over StyleGAN [9], a GAN-based image generator. Requiring only a pre-trained StyleGAN model, StyleRig achieved an intuitive, rig-like control over high quality portrait images. However, StyleRig has limited expressiveness, as it fails to produce certain head poses such as in-plane head rotation and asymmetric expressions. Moreover, as StyleGAN does not explicitly disentangles identity and pose information, identity bleeding can occur during high-level expression editing, i.e. the resulting face's identity changes when it should not.\nBuilding upon such limitations, we propose a novel solution for parameter-based neural talking head synthesis, by combining the advantages of both neural talking head methods and parameter-based pose control into a single method. We focus on expressing head poses as semantic parameters, which are transformed into latent codes for the given talking head generator to create photorealistic facial images with or without additional driving images. We use a fixed, pretrained talking head model, and do not require additional data for training.\nOur main contribution is the landmark-parameter morphable model (LPMM), a model designed to connect landmarks to the parametric domain in which the user can adjust different facial expressions and head pose in a meaningful manner. Since LPMM is built upon a large and diverse facial landmark dataset, which can be easily collected compared to facial scans, it achieves a better generalization over expression diversity and semantic parameterization compared to 3DMM. The usage of a talking head generator instead of a 3d renderer ensures photorealistic results of our method. The results display that our approach successfully provides intuitive riglike control for a pre-trained neural talking head model, while still allowing traditional facial image inputs.\nIn summary, we present the following contributions: • Novel pipeline that can give pose controllability to talking head model, which does not require additional training for fixed model.\n• A method to provide additional rig-like control while maintaining the inference method of the existing talking head model. Therefore, the performance of the generator can be fully utilized.\n• We show that our pipeline is independent of model architecture and can be applied to arbitrary latentbased talking head generators." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b13", "b19", "b20", "b4", "b15", "b27" ], "table_ref": [], "text": "Neural Talking Head Most models focus on extracting pose information from a driving video source into the form of latent codes and applying it to a facial identity [14,20,21]. Doing so minimizes the identity-pose disentanglement issue, making it suitable for further pose editing. Such methods, however, do not provide additional means for the user to control head pose and facial expression. Some talking head methods accept userspecified head rotation input [26], but do not allow the user to adjust facial expressions in a semantic manner.\nOther methods rely on audio inputs to control the lower part of the face (lips, jaw movement) [5,16,28] to high-quality facial images while maintaining control over expression, illumination, and pose. However, Sty-leRig fails to exploit 3DMM's full expression space, resulting in incorrect expression mappings for the final result (e.g. in-place head rotation, eye-blinking). Its visual quality is also limited by the face renderer used.\nOur work is highly motivated by StyleRig's approach of adopting parametric control over an image generator, but differs in the model architecture, semantic controllability, and a novel morphable model." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Our solution achieves an explicit rig-like control over the head pose of facial images generated from neural talking-head models. Based on a talking head generator [4, 27] that generates a facial image I w using latent code w, our approach focuses on providing additional pose editing through the usage of semantic parameters, while still allowing traditional pose image inputs. Prior to training, we prepare a landmark-parameter morphable model (LPMM), which is designed to control head pose information based on a set of control parameters (Sec. 3.1). Those parameters are linked to significant components in the facial landmark domain, calculated through a standard PCA decomposition [8].\nGiven a facial image I, a landmark-parameter regressor (LP-regressor) is used to acquire the respective parameter values for LPMM, which are then edited to convey the desired head pose (Sec. 3.2). The modified parameters are then sent to the landmark-parameter adaptor (LP-adaptor), and converted to a latent code for the selected talking head generator (Sec. 3.3). One practical use case would be to edit a face such that its eyes are closed, without altering other elements of the face. In such a case, the user can simply adjust the parameter values corresponding to the eyes, without using additional driving image inputs. " }, { "figure_ref": [ "fig_1" ], "heading": "Landmark-Parameter Morphable Model", "publication_ref": [ "b10", "b20", "b24", "b28", "b2" ], "table_ref": [], "text": "Our proposed model attempts parametric control on the facial landmark, which serves as a good basis of head pose for many talking head models [11,21,25,29]. Since facial landmarks L = (x 1 , y 1 , x 2 , y 2 , ....., x n , y n ) ∈ R 2n consist of n = 68 points that connect to different facial components (such as eyes, nose, mouth), we assume that through a linear combination of exemplar facial landmark components, it is possible to generate an arbitrary head pose, leading towards an abstract posespecific representation. Moreover, facial landmarks can be acquired from a wider range of 2D facial video data, unlike 3D facial scans, so we believe LPMM will result in a highly generalized model compared to 3DMM.\nTo extract such exemplar components, we perform Principal Component Analysis (PCA) [8] upon the facial landmark data [3,13]. Being a common technique for data compression, PCA performs an eigendecomposition operation over the data covariance matrix, extracting the most significant eigenvectors in the form of linearly independent components. Since the first principal components created from PCA explain the most variance of the data compared to the latter com-ponents, we assumed that these first components will be responsible for the head orientation (yaw, pitch, roll) movements, which tend to show the high magnitude of movement in the coordinate system while being orthogonal to one another. Other facial expressions, such as eye-blinking, have movements of lower magnitude, thus were assumed to be linked with the latter components.\nWe collected a large number of facial landmarks from talking-head video datasets [6,13]. Since LPMM should not include components that are linked to head translation movements, each video frame was preprocessed to ensure that the face was aligned in the center of a square frame. Through PCA, m principal components were calculated from the facial landmark dataset.\nWe define the morphable model as the set of facial landmarks, parameterized by the coefficients p. New arbitrary face landmarks can now be generated by the summation of average face landmark L and the linear combination of the parameters p and eigenvectors e ∈ R 2n . In this case, the maximum number of parameter m is 2n.\nL new = L + k i=1 p i e i .(1)\nAfter setting up the model, we discovered that the beginning components were indeed linked to head orientation movements, and the latter components were associated with other facial expressions, thus proving the initial assumption right (Figure 3). Visualization of the different PCA components can be found in Supp. Mat." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "LP-regressor", "publication_ref": [ "b18" ], "table_ref": [], "text": "As shown in Figure 2 (left), the landmark-parameter regressor (LP-regressor) can be seen as a function that estimates LPMM parameters from a given facial image. While LPMM is capable of generating arbitrary facial landmarks, it is required to estimate specific parameters to reconstruct a target landmark that successfully conveys the correct head pose.\nGiven an image I ∈ R H×W ×3 , LP-regressor F : R H×W ×3 → R k calculates the corresponding LPMM parameters p = [p 1 , p 2 , ....., p k ] up to a specific degree k (maximum of m), which are used to reconstruct the image's facial landmark LI . Since these parameters have different levels of significance in modeling facial landmarks, k functions as a regulation term between expressiveness and complexity. We model LP-regressor based on the MobilenetV2 architecture [19], and trained it using 1 -loss between the original facial landmark L I and the reconstructed landmark LI .\nThis model can be viewed as a modification of previous facial landmark detectors, but instead of detecting coordinates of face landmarks, ours focus on imprinting head pose information into the LPMM parameters for intuitive pose control. Also, facial landmark data are not used as input for inference, and only their parameter counterparts function as the input for our system.\nFrom experimentation, we found the performance at k = 40 is suitable for arbitrary facial landmark representation, and chose it as our default setting (Figure 4). For further implementation details, please refer to the experiments section. (Sec. 4)" }, { "figure_ref": [ "fig_0" ], "heading": "LP-adaptor", "publication_ref": [ "b0", "b16", "b17", "b21", "b28", "b21", "b21", "b22" ], "table_ref": [], "text": "Given a set of LPMM parameters, the landmarkparameter adaptor (LP-adaptor) generates latent codes for the pretrained talking head generator. Its main objective is to ensure that the parametric control over the facial landmark domain is also maintained for the generated face images from the neural talking head model. Previous latent-based facial manipulation methods [1,17,18,24] require finding new editing directions within the latent space, which is time-consuming and might lead to undesired pose distortion. On the other hand, LP-adaptor is capable of editing selected head pose, without distorting other facial attributes.\nWe denote pose encoder of the talking head model [4, 27] as E : R H×W ×3 → R w , which outputs an identityagnostic pose vector v ∈ R w for image I. And we denote its generator as G : R w → R H×W ×3 , which generates facial images with head pose from I. Figure 2, right shows the overall framework of LP-adaptor. Following StyleRig [22], we model LP-adaptor (denoted as D : R k → R w ) as a three-layer linear perceptron (MLP) with ELU activations for every intermediate layer.\nGiven an estimated parameter p from LP-regressor, the last layer of MLP outputs d, which is added to the pre-calculated average pose vector v to form the final estimated pose vector v = v + d. The objective of LPadaptor is to encode v such that it's mapping in the latent space of pose encoder E(I) is in the right location; where the original pose vector v is located.\nWe want the identity-specific features of output images to be consistent when controlling head pose using landmark parameters. Previous works [4,29] discussed that using facial landmarks on neural talking head tasks may induce identity-bleeding issues. However, in our settings, facial landmarks themselves are not used as input. Also, we used a fixed identity embedding vector during the training pipeline, so that the LP-adaptor can only focus on the pose latent space without requiring any complex methods for the identity-pose disentanglement issue (i.e cycle-consistent per-pixel editing loss [22]). And since the controllability is not limited towards a specific identity, it is not required for the LP-adaptor to be re-trained for a different human identity. Training LP-adaptor. \nL total = λ rgb L rgb + λ pose-reg L pose-reg .(2)\nPixel-Wise RGB loss. L rgb is the 1 -loss between the generated image I v = G(v) using v from LP-adaptor, and the generated image I v = G(v) using v from the original pose encoder. While we could design the regression loss directly in latent space, this has been shown to not be very effective [22,23].\nL rgb = G(D(p (I) )) -G(E(I)) 1 .(3)\nPose Regularization loss. In order to ensure the existence of a \"base pose\", we add an additional regularization loss. We enforce residual value d in v = v + d should be zero when LP-adaptor got a parameters of average pose image F(G(v)). Since the weights of F and E are frozen, this constraint enforces the mapping between the parametric and latent spaces.\nL pose-reg = D(F(G(v))) 1 .(4)\n4. Experiment" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b29" ], "table_ref": [], "text": "Two talking head video datasets were used for training and evaluation. We used the VoxCeleb1 dataset [13] for LPMM and the training pipeline. The dataset consists of YouTube videos, each containing a main speaker identity. For preprocessing, we used the S3FD facial landmark detector [30] for each video frame to check the perceptibility of a clear face. After collecting around 4.2M images, each image was cropped using the detected bounding boxes, ensuring the center alignment of the face inside the frame. We increased the bounding box size by 80% and resized to 256 × 256. When the increased bounding box size went over the original image boundary, we followed the padding policy of LPD [4].\nVoxCeleb2 [6] contains more identities and videos compared to VoxCeleb1. We collected the videos for 30 identities in VoxCeleb2's test set, and sampled 64 frames per video, which were then preprocessed following Vox-Celeb1's setting. VoxCeleb2 was used for evaluation, displaying the full potential of our approach.\nIn addition to VoxCeleb2, we used a separate dataset consisting of Korean celebrity videos and webtoon characters, dubbed as the Korceleb&Webtoon dataset." }, { "figure_ref": [], "heading": "Backbone generator", "publication_ref": [ "b26", "b26" ], "table_ref": [], "text": "Amongst the possible choices of latent-based neural talking head models that can be used with our method, we chose the two well-known models for our experiments: Latent Pose Descriptors (LPD) [4] and Latent Image Animator (LIA) [27] LPD. LPD uses a combination of identity embedding and pose embedding to generate a talking face. In practice, the output latent vector of LP-adaptor corresponds to the pose embedding vector d p ∈ R 256 of the original paper [4]. For LPD, we used a generator fine-tuned with Voxceleb2 or Korceleb&Webtoon dataset. LIA. LIA interprets motion as an orthogonal set of motion vectors and their corresponding magnitudes. In practice, the output latent vector of LP-adaptor corresponds to the pose magnitude vector A r→d ∈ R 20 of the original paper [27]. Since LIA is based on a one-shot setting, a representative identity image was chosen from the dataset as an identity source." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [ "b6" ], "table_ref": [ "tab_1" ], "text": "We evaluate the contributions related to the parameter choices we made in the training of our model. LP-regressor expressiveness. For reconstructed landmark accuracy, Normalized Mean Error (NME) was used for measurement. It is defined as\nNME(P, P ) = 1 N P N P i=1 p i -pi 2 d ,(5)\nwhere P and P denote the predicted and ground-truth coordinates of the respective landmarks, N P is the number of landmarks points, and d is the reference distance to normalize the absolute errors [7]. We use interocular distance as the normalizing factor. We evaluated the NME between the original facial landmark and the reconstructed landmark, for different values of LPregressor degree k. (Table 1) Here, k = 136 means we used all principal components of the LPMM model. We observed for k = 40, detailed facial expressions such as eye blinking were well expressed within the landmarks, while maintaining a compact parametric space. For our final solution, we use k = 40 for its expressiveness." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Rig-like control", "publication_ref": [], "table_ref": [], "text": "Interactive Parameter Control.\nSince LPMM is based on the linear combination of landmark components, each LPMM parameter is independent from one another, and can be manipulated through vector arithmetic. And because LP-adaptor enforces this property with the talking head model, the user can manipulate head pose through linear interpolation. We develop a user-friendly interface where a user can update different LPMM parameter values in the form of interactive sliders, which are fed into LP-adaptor and the neural talking head model to generate a portrait image with the userdefined pose applied, without requiring additional posedriving images. Figure 5 shows the different results of parametric control through the interface. The results display consistent results for different identity and pose information, proving that our model is capable of being adapted to arbitrary generators. Parametric control with additional driving image.\nSince the LP-adaptor's latent code output v can be mixed with the pre-trained talking head model's pose encoder output v, our method retains the property of previous talking-head models to use images as pose driving sources, while applying semantic parametric control at the same time. This can be done by sending the driving image through the LP-regressor and/or the talking-head model's pose encoder, followed by applying additional specific parameter control. In practice, the user can save pre-defined head pose parameters in the form of blendshapes, and use them directly upon a target facial identity, guaranteeing an efficient, intuitive pose control. Figure 1 and Figure 6 displays our results with both the driving input image and separate parametric control applied. As shown above, face animations e.g., blinking, surprised, can be reenacted through simple vector arithmetic of predefined parameters, such as p surprise . This allows users to perform intuitive pose editing without requiring additional driving image sources. In practice, an artist can pre-define different expressional blendshapes Figure 6. Results of image-based head pose edited identities, followed by semantic parameter control. Our method maintains the original inference method of talking head models, which extracts pose information from a driving image as a latent code, and applies it to a facial identity (a). Along with it, pre-defined semantic parameters can be added on to the produced latent code, editing head pose in a reasonable manner while maintaining the original identity (b).\nfor a neutral identity, so that it can be used at any time for different identities.\nBase pose visualization. To prove that our parametric system is capable of better pose controllability than the latent domain, we visualized the \"base\" faces for both domains. We define the parametric \"base\" face as the generated face when all parameter values are initialized to zero (p zero ), while its latent counterpart will be the generated face of either the mean latent code of the train distribution (v bar ), or the zero latent code (v zero ).\nFigure 8 visualizes the \"base\" face for different facial identities. It can be observed that while both v bar and v zero are biased towards a certain head pose and expression, p zero maintains a neutral, face-frontalized face, displaying that the parametric domain has a consistent starting point for pose manipulation. The existence of a robust base face also allows smooth pose interpolation to be done by a simple scalar multiplication to the pose parameters (Figure 7), unlike previous latent manipula-tion approaches [4] which requires two pose vectors to perform the interpolation." }, { "figure_ref": [ "fig_6" ], "heading": "Comparison with previous methods", "publication_ref": [ "b1", "b8", "b21" ], "table_ref": [], "text": "Since there is no publicly-available code base or checkpoints available for StyleRig, we trained and implemented StyleRig by ourselves, following the practices mentioned in the original paper. While implementing, we noticed that, unlike our method's pipeline, Sty-leRig's pipeline is based on two different data distributions; 3D facial scanning data for 3DMM [2], and photorealistic portrait images [9] for StyleGAN, which creates a discrepancy between the parametric and latent space domain. Figure 9 shows the comparison between StyleRig parametric control results and our results. We use pose edited results both from the original paper 1 and Figure 7. Each row shows the parametric interpolation results for head pose. Our system is capable of generating visually smooth and identity preserving interpolation results. from our StyleRig implementation. It can be observed that while StyleRig succeeds in editing the yaw angles of the image (first row), it fails to control the roll movements (second row), which was attributed to a bias introduced in StyleRig's training data [22]. Our method does not suffer from such biases and is capable of precise control over the head orientation. For expression editing, StyleRig often leads to incorrect expression mapping for certain expressions (third row). Compared to StyleRig, our model produces better editability towards portrait images, especially for extreme facial expressions." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [ "b21", "b21" ], "table_ref": [], "text": "We have presented a novel method to provide semantic head pose control over fixed neural talking head model. Unlike the previous approach on semantic pose control [22], our method utilizes talking head model as a back-bone generator. Since, it can bring the pose- [22]. The StyleRig results from the first row was created from our StyleRig implementations, while the two bottom row results were collected from the original paper. Our approach is on par with StyleRig for head orientation (blue), and produces a better pose transfer for in-place rotations and complex expressions (red). identity disentanglement power and does not require complex training pipeline. One limitation of our method is that to enable semantic control over facial expression, we might have to discover a combination of parameters to manipulate an intuitive expression, instead of controlling a single parameter value. We note, however, that head orientations can be frontalized, and for real-world applications our method does not require users to find such parameter values." } ]
Figure 1. LPMM allows user-friendly pose control over portrait images, by translating facial landmarks to the parametric domain. This enables a sequential, intuitive editing of facial expressions and head orientation, either without a driving source image (top row) or with a driving source image (bottom row).
LPMM : Intuitive Pose Control for Neural Talking-Head Model via Landmark-Parameter Morphable Model
[ { "figure_caption": "Figure 2 .2Figure 2. The training of our model is divided into two stages. (Left) The LP-regressor processes the input facial image to generate LPMM parameters, and is trained so that the reconstructed facial landmark matches the original. (Right) The LP-adaptor is used to transform LPMM parameters into the latent space of a pretrained talking head model's pose encoder. While training LP-adaptor, all weights other than LP-adaptor itself are frozen.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Interpolation parameters of LPMM. Each row shows that different parameter can control head pose independently. The average face landmark L in the middle column represents a expression-neutral, frontalized face.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. For LP-regressor, it is noted that the expressiveness of facial landmarks reconstructed from parameters change with differing values of k. The reconstructed landmarks cannot express eye-closing until k = 40, and the accuracy difference between k = 40 and k = 136 is minor.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 22right shows the overall training of LP-adaptor. The training loss consists of a pixel-wise RGB loss L rgb and a pose regularization loss L pose-reg where λ rgb and λ pose-reg are fixed weights for each losses. When training LP-adaptor, all other networks except LP-adaptor (i.e. F, E, G) are fixed.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Parametric head pose control examples for different target identities, without using driving source images. Different semantic facial expressions and head orientation information are saved in the form of LPMM parameters (first column). These parameters can be applied to different facial identities, editing images in a consistent manner.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Base pose visualization for both parametric space (pzero) and latent space (vzero, vbar). Compared to vzero, vbar, the face from pzero maintains a expression-neutral, frontalized face.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 .9Figure 9. Comparison to StyleRig[22]. The StyleRig results from the first row was created from our StyleRig implementations, while the two bottom row results were collected from the original paper. Our approach is on par with StyleRig for head orientation (blue), and produces a better pose transfer for in-place rotations and complex expressions (red).", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "For different values of LP-regressor degree k, we evaluated the NME between the original facial landmark and the reconstructed landmark.", "figure_data": "LP-regressor NMEEval datasetk=5 k=10 k=20 k=40 k=136Voxceleb24.33 2.892.382.292.24Korceleb&Webtoon 5.81 3.923.183.133.25", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" } ]
Kwangho Lee; Patrick Kwon; Myung Ki Lee; Namhyuk Ahn; Junsoo Lee; Naver Webtoon
[ { "authors": "Yuval Alaluf; Or Patashnik; Daniel Cohen-Or", "journal": "", "ref_id": "b0", "title": "Restyle: A residual-based stylegan encoder via iterative refinement", "year": "2021" }, { "authors": "Volker Blanz; Thomas Vetter", "journal": "", "ref_id": "b1", "title": "A morphable model for the synthesis of 3d faces", "year": "1999" }, { "authors": "Adrian Bulat; Georgios Tzimiropoulos", "journal": "", "ref_id": "b2", "title": "How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks)", "year": "2017" }, { "authors": "Egor Burkov; Igor Pasechnik; Artur Grigorev; Victor Lempitsky", "journal": "", "ref_id": "b3", "title": "Neural head reenactment with latent pose descriptors", "year": "2007" }, { "authors": "Lele Chen; Ross K Maddox; Zhiyao Duan; Chenliang Xu", "journal": "", "ref_id": "b4", "title": "Hierarchical cross-modal talking face generation with dynamic pixel-wise loss", "year": "2019" }, { "authors": "Son Joon; Arsha Chung; Andrew Nagrani; Zisserman", "journal": "", "ref_id": "b5", "title": "Voxceleb2: Deep speaker recognition", "year": "2018" }, { "authors": "Yangyu Huang; Hao Yang; Chong Li; Jongyoo Kim; Fangyun Wei", "journal": "", "ref_id": "b6", "title": "Adnet: Leveraging error-bias towards normal direction in face alignment", "year": "2021" }, { "authors": "Ian T Jolliffe", "journal": "", "ref_id": "b7", "title": "Principal component analysis", "year": "1986" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b8", "title": "A stylebased generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Tero Karras; Samuli Laine; Miika Aittala; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b9", "title": "Analyzing and improving the image quality of stylegan", "year": "2020" }, { "authors": "Hyeongwoo Kim; Pablo Garrido; Ayush Tewari; Weipeng Xu; Justus Thies; Matthias Nießner; Patrick Pérez; Christian Richardt; Michael Zollhöfer; Christian Theobalt", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b10", "title": "Deep video portraits", "year": "2018" }, { "authors": "Tianye Li; Timo Bolkart; J Michael; Hao Black; Javier Li; Romero", "journal": "ACM Trans. Graph", "ref_id": "b11", "title": "Learning a model of facial shape and expression from 4d scans", "year": "2017" }, { "authors": "A Nagrani; J S Chung; A Zisserman", "journal": "", "ref_id": "b12", "title": "Voxceleb: a large-scale speaker identification dataset", "year": "2017" }, { "authors": "Yuval Nirkin; Yosi Keller; Tal Hassner", "journal": "", "ref_id": "b13", "title": "Fsgan: Subject agnostic face swapping and reenactment", "year": "2019" }, { "authors": "Pascal Paysan; Reinhard Knothe; Brian Amberg; Sami Romdhani; Thomas Vetter", "journal": "", "ref_id": "b14", "title": "A 3d face model for pose and illumination invariant face recognition", "year": "2009" }, { "authors": "K R Prajwal; Rudrabha Mukhopadhyay; P Vinay; C V Namboodiri; Jawahar", "journal": "", "ref_id": "b15", "title": "A lip sync expert is all you need for speech to lip generation in the wild", "year": "2020" }, { "authors": "Elad Richardson; Yuval Alaluf; Or Patashnik; Yotam Nitzan; Yaniv Azar; Stav Shapiro; Daniel Cohen-Or", "journal": "", "ref_id": "b16", "title": "Encoding in style: a stylegan encoder for image-to-image translation", "year": "2021" }, { "authors": "Daniel Roich; Ron Mokady; H Amit; Daniel Bermano; Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b17", "title": "Pivotal tuning for latent-based editing of real images", "year": "2022" }, { "authors": "Mark Sandler; Andrew Howard; Menglong Zhu; Andrey Zhmoginov; Liang-Chieh Chen", "journal": "", "ref_id": "b18", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018-06" }, { "authors": "Aliaksandr Siarohin; S Stéphane Lathuilière; Elisa Tulyakov; N Ricci; Sebe", "journal": "", "ref_id": "b19", "title": "First order motion model for image animation", "year": "2019" }, { "authors": "Supasorn Suwajanakorn; Steven M Seitz; Ira Kemelmacher-Shlizerman", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b20", "title": "Synthesizing obama", "year": "2017" }, { "authors": "Ayush Tewari; Mohamed Elgharib; Gaurav Bharaj; Florian Bernard; Hans-Peter Seidel; Patrick Pérez; Michael Zollhofer; Christian Theobalt", "journal": "", "ref_id": "b21", "title": "Stylerig: Rigging stylegan for 3d control over portrait images", "year": "2020" }, { "authors": "Ayush Tewari; Michael Zollhofer; Hyeongwoo Kim; Pablo Garrido; Florian Bernard; Patrick Perez; Christian Theobalt", "journal": "", "ref_id": "b22", "title": "Mofa: Model-based deep convolutional face autoencoder for unsupervised monocular reconstruction", "year": "2017" }, { "authors": "Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b23", "title": "Designing an encoder for stylegan image manipulation", "year": "2021" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Guilin Zhu; Andrew Liu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "NeurIPS", "ref_id": "b24", "title": "Video-to-video synthesis", "year": "2018" }, { "authors": " Ting-Chun; Arun Wang; Ming-Yu Mallya; Liu", "journal": "", "ref_id": "b25", "title": "Oneshot free-view neural talking-head synthesis for video conferencing", "year": "2021" }, { "authors": "Yaohui Wang; Di Yang; Francois Bremond; Antitza Dantcheva", "journal": "ICLR", "ref_id": "b26", "title": "Latent image animator: Learning to animate images via latent space navigation", "year": "2021" }, { "authors": "Ran Yi; Zipeng Ye; Juyong Zhang; Hujun Bao; Yong-Jin Liu", "journal": "arXiv: Computer Vision and Pattern Recognition", "ref_id": "b27", "title": "Audio-driven talking face video generation with learning-based personalized head pose", "year": "2020" }, { "authors": "Egor Zakharov; Aliaksandra Shysheya; Egor Burkov; Victor Lempitsky", "journal": "", "ref_id": "b28", "title": "Few-shot adversarial learning of realistic neural talking head models", "year": "2019" }, { "authors": "Shifeng Zhang; Xiangyu Zhu; Zhen Lei; Hailin Shi; Xiaobo Wang; Stan Z Li", "journal": "", "ref_id": "b29", "title": "S3fd: Single shot scaleinvariant face detector", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 138.74, 320.93, 156.01, 30.32 ], "formula_id": "formula_0", "formula_text": "L new = L + k i=1 p i e i .(1)" }, { "formula_coordinates": [ 5, 110.4, 335.78, 184.35, 9.81 ], "formula_id": "formula_1", "formula_text": "L total = λ rgb L rgb + λ pose-reg L pose-reg .(2)" }, { "formula_coordinates": [ 5, 113.03, 436.52, 181.72, 11.72 ], "formula_id": "formula_2", "formula_text": "L rgb = G(D(p (I) )) -G(E(I)) 1 .(3)" }, { "formula_coordinates": [ 5, 126.97, 553.36, 167.79, 9.81 ], "formula_id": "formula_3", "formula_text": "L pose-reg = D(F(G(v))) 1 .(4)" }, { "formula_coordinates": [ 5, 356.22, 613.89, 183.78, 30.44 ], "formula_id": "formula_4", "formula_text": "NME(P, P ) = 1 N P N P i=1 p i -pi 2 d ,(5)" } ]
2023-07-06
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b0", "b3", "b4", "b0" ], "table_ref": [], "text": "Time is a natural concept that allows us to arrange events in a sorted way from the past to the future in an evenly distributed manner (such as days, seasons of the year or years). Data points are usually taken at successive equally spaced points in time. Based on this way of describing events, the concept of time series as a sequence of discrete data points evenly distributed arises naturally. From this perspective, time series can describe a wide range of natural and social phenomena that evolve in time. A few samples are climate and weather trends, seismic measures, stock prices, sales data, biomedical measurements, body movements or Internet traffic [1]. The study of time series involves identifying its statistical properties, including trends, seasonality, and autocorrelation, and using this information to build models for prediction and classification.\nAccording to [2], who reviewed the work of the most active researchers in data mining and machine learning for their opinions, the problem of sequential and time series data is one of the ten most challenging in data mining. Its most relevant applications are prediction and classification (supervised or unsupervised). As for time series data availability, the significant advances in data storage, collection and processing produced during the last decades have made available a large amount of data which is growing exponentially. Unfortunately, most of this data remains unlabelled, making it useless for some critical applications such as supervised learning.\nWith the ever-increasing growth rate of available data, we need scalable techniques to process it. Clustering serves as a solution for extracting valuable insights from unlabelled data, allowing a vast quantity of this data to be processed efficiently. Clustering can be defined as the task of grouping a set of objects into different subsets, or clusters, where the objects in each subset share certain common characteristics. Since it does not require human supervision or hand-labeling, clustering is classified within the domain of unsupervised learning techniques. This makes clustering particularly suited to keeping pace with the influx of available data.\nAlthough time series are usually considered a collection of data points sorted chronologically, they can also be regarded as a single object [3] and, therefore, subject to clustering. In scenarios involving significant volumes of time series data, clustering is especially helpful in discovering patterns. In the case of rare patterns, it facilitates anomaly and novelty detection, while for frequent patterns, it aids in prediction and recommendation tasks [1]. Examples of such applications include detecting web traffic anomalies in Informatics [4] and gene classification in Biology [5]. Time series clustering can also serve as a pre-processing technique for other algorithms such as rule discovery, indexing, or classification [1].\nDespite the crucial role of time series clustering across diverse fields, existing approaches often struggle with the complexity of high-dimensional, noisy real-world time series data. Furthermore, while automated feature extraction methods have shown success, they require more parameters, data, and longer training periods, as discussed in Section II. There is, therefore, a need for more efficient, accurate, and scalable methods for time series clustering.\nIn this paper, we make several contributions to the field of time series clustering. In Section III, we introduce R-Clustering, a novel time series clustering algorithm that uses convolutional architectures with static, randomly selected kernel parameters. This approach addresses the challenge of scalability and resource-intensive training, prevalent in current methods. Subsequently, in Section IV we provide a comprehensive evaluation of R-Clustering, benchmarking its performance against state-of-the-art methods through the use of the UCR archive and detailed statistical analyses. We contrast R-Clustering against eight other reference clustering algorithms across 72 diverse time series datasets. Remarkably, R-Clustering outperforms the other algorithms in 33 out of the 72 datasets, with the second-best performing algorithm leading in only 13 datasets. In addition, R-Clustering achieves the highest mean clustering accuracy and rank, and is also the fastest algorithm across the datasets evaluated. We further demonstrate its scalability for larger datasets. These findings highlight the superior accuracy and scalability of R-Clustering, emphasizing its potential for deployment in large-scale applications." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we will first review the most relevant methods for time series clustering. Then, our focus will shift to feature extraction methods applied to 2D and 1D data: this discussion will cover general applications of these methods and make a distinction between those methods that use learnable kernels and those that use static kernels. To conclude, we will explore the potential application of static kernel methods to time series clustering" }, { "figure_ref": [], "heading": "A. Different Approaches to Time Series Clustering", "publication_ref": [ "b0", "b5", "b1", "b6", "b7", "b8", "b9", "b10", "b0" ], "table_ref": [], "text": "Different approaches to the clustering of time series, according to [1], can be classified in three groups: model-based, shape-based and feature-based. In the model-based methods, the time series are adjusted to a parametric model; since the parameters do not have a time series structure, a universal clustering algorithm is applied. Some popular parametric models are autorregresive-moving-average (ARMA) models or the Hidden Markov Model. However, these techniques might not accurately model high-dimensional and noisy real-world timeseries [6]. Existing methods that may address the noise issue, such as wavelet analysis or filtering, often introduce lag in the processed data, reducing the prediction accuracy or the ability to learn significant features from the data [2].\nIn the shape-based approach, the clustering algorithm is applied directly to the time series data with no previous transformation. Unlike to the model-based approach, these methods employ a clustering method equipped with a similarity measure appropriate for time series. For instance, [7] introduce the k-shape algorithm with a shape-based approach. In this work, the authors suggest that feature-based strategies may be less optimal due to their domain dependence, requiring the modification of the algorithms for different datasets. A different approach in shape-based clustering involves utilizing a universal clustering algorithm, such as the widely used kmeans algorithm [8], along with a suitable distance measure that considers the unique characteristics of time series data, such as dynamic time warping (DTW), introduced by [9]. While DTW has been shown to perform better for time series clustering, it comes with the downside of having a time complexity of O(n 2 ), where n is the length of time series. Compared to DTW, the Euclidean distance, which is the distance usually used in the k-means algorithm [10], has a complexity of O(n) [11].\nFeature-based methods first extract relevant time series features and later apply a conventional clustering algorithm [1]. As we will see in the next section, feature-based algorithms have been proven quite successful for image clustering and classification. According to [12, p. 1798], the performance of machine learning algorithms relies strongly on the correct choice of feature representation." }, { "figure_ref": [], "heading": "B. Feature extraction", "publication_ref": [ "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b11", "b20", "b21", "b16", "b22", "b23", "b24", "b25", "b18", "b19", "b26", "b11", "b24", "b18", "b5", "b11", "b27", "b28", "b29", "b31" ], "table_ref": [], "text": "The incorporation of expert knowledge has been shown to improve machine learning performance, as demonstrated by previous research such as [12] and [13]. Traditionally, this has been achieved through the manual design of feature extractors based on the specific task. Examples of such strategy include Lowe's algorithm [14] for image feature extraction using handdesigned static filters to induce specific invariances in the transformed data. Also, it is well known that the discrete Gaussian kernel can be used to perform a wide range of filtering operations, such as low-pass filtering, high-pass filtering, and band-pass filtering [15]. In the context of digital signal processing, FIR (Finite Impulse Response) filters operate by applying a weight set, or \"coefficients,\" to a selection of input data to calculate an output point, essentially a discrete convolution operation [16]. It is important to remark, however, that this process requires manual calibration of the filter to selectively allow certain frequencies through while suppressing others.These techniques, while effective, present scalability limitations due to their dependency on human supervision for the design process. As data volumes continue to grow exponentially, there is an increasing need for automated feature extraction techniques that can efficiently handle large datasets without the extensive requirement for expert knowledge.\nIn some domains, such as the 2D shape of pixels in images or time series having a 1D structure, it is possible to learn better features using only a basic understanding of the input as an alternative to incorporating expert knowledge. In images and time series data, patterns are likely to reproduce at different positions, and neighbours are likely to have strong dependencies. The existence of such local correlations is the basis for applying a local feature extractor all over the input and transforming it into a feature map of a similar structure. A convolution operation usually performs this task with a convolutional kernel [12, p. 1820]. The use of local feature extractors has several benefits in machine learning. First, it allows for the efficient extraction of relevant features from large datasets without requiring expert knowledge. Second, it can improve the accuracy of machine learning models by capturing local correlations in the input data. Finally, it can reduce the dimensionality of the input data, making it easier to process and analyze.\nThe academic community has achieved substantial advances during the last decade applying this technique to the problems of image classification [17], time series forecasting [18] or time series classification [19] among others. In the field of time series clustering, [20] have developed a convolutional model with deep autoencoders.\n1) Feature extraction with learnable kernels: In most convolution algorithms, kernel weights are typically learnt during the training process, as described in [12]. Over the last few years, the academic community has been actively exploring the applications of such convolutional in tasks such as image classification and segmentation. This is illustrated by works like [21], [22], and [17]. These models use convolutional layers in neural networks to transform image data, refining and compressing the original input into a compact, high-level feature representation. The architecture then applies a linear classification algorithm on this condensed feature representation as the final step. Some authors have applied convolutional architectures to the problem of image clustering, as described in [23] and [24]. In both models, a network transforms input data into an enhanced feature representation, followed by a clustering algorithm. This dual optimization approach adjusts both the network weights and the clustering parameters simultaneously. The first model predicts input labels based on the clustering results, then the error in these predictions is backpropagated through the network to adjust the network parameters. The second model utilizes an autoencoder with a middle bottleneck. The network parameters are first optimized using the reconstruction loss. Then, the decoder section of the architecture is discarded and the output features from the bottleneck form the new data representation. This condensed data representation is then processed by a clustering algorithm.\nConvolutional neural networks (CNNs) have also become increasingly popular in the field of time series analysis due to their ability to capture local patterns and dependencies within the data. For example, [25] proposed a model that uses a combination of 1D and 2D CNNs to extract both temporal and spatial features from multivariate time series data. Another model, proposed by [26], uses a dilated convolutional neural network to capture long-term dependencies in time series data. More recently, [19] used deep convolutional neural networks for time series classification achieving state-of-the-art results. Despite the success of CNN-based models in time series analysis, there are still challenges that need to be addressed. One challenge is the selection of appropriate hyperparameters, such as the number of filters and the filter size, which can greatly affect the performance of the model.\nIn recent work, [20] proposed a novel clustering algorithm for time series data that builds upon previous image clustering techniques. The model leverages an autoencoder to obtain a reconstruction loss, which measures the difference between the original time series and its reconstructed version. Simultaneously, the model employs a clustering layer to predict the labels of the time series, which is used to compute a prediction loss. By combining both losses, the model jointly optimizes the parameters of the network and the clustering parameters. This approach allows for more accurate clustering of time series data, as it takes into account both the reconstruction error and the predicted labels.\nDeep learning faces considerable challenges due to the complexity of its models, which typically involve multiple layers and a significant number of parameters [27], [12]. This complexity results in practical issues, including the requirement for substantial computational resources and vast amounts of training data. There is also the potential for overfitting and the necessity for fine tuning to optimize the model's performance.\nWhen deep learning is applied to time series analysis, further complications arise. Time series data exhibit unique characteristics like temporal dependencies and seasonal patterns, requiring specialized treatment as indicated by [25] and [19]. In addition to the challenges associated with deep learning models, time series data also pose unique challenges due to their often high-dimensional and highly variable nature [6]. This can make it difficult to select appropriate hyperparameters and optimize the model's performance.\n2) Feature extraction with static random kernels: Convolutional models with static parameters offer a distinct approach to feature extraction. Instead of learning the weights of the convolutional filters during the training process, these models use fixed or static parameters, resulting in faster computation times and simpler architectures [12]. The work of [28] and [29] supports this approach, demonstrating that convolutional models with weights that are randomly selected, or random kernels, can successfully extract relevant features from image data. Additionally, [30] argue that choosing the right network design can sometimes be more important than whether the weights of the network are learned or random. They showed that convolutional models with random kernels are frequency selective and translation invariant, which are highly desirable properties when dealing with time series data.\n[31] provide evidence that convolutional architectures with random kernels are effective for time series analysis. The authors of the paper demonstrate that their proposed method, called ROCKET (Random Convolutional KErnel Transform), achieves state-of-the-art accuracy on several time series classification tasks while requiring significantly less computation than existing methods. The authors further improved the efficiency of their method by introducing MiniRocket [32], an algorithm that runs up to 75 times faster than ROCKET on larger datasets, while maintaining comparable accuracy Motivated by the success of random kernels applied to the problem of feature extraction of time series and image data, we propose a simple and fast architecture for time series clustering using random static kernels. To the best of our knowledge, this is the first attempt to use convolutional architectures with random weights for time series clustering. This approach eliminates the need for an input-reconstructing decoder or a classifier for parameter adjustments, commonly found in previous time series clustering works.Instead, the method applies the convolution operation with random filters over the time series input data to obtain an enhanced feature representation, which is then fed into a K-means clustering algorithm. By eliminating the need for a reconstruction loss or classification loss, our method is more efficient and easier to implement than existing methods for time series clustering. We have conducted several experiments to test the effectiveness and scalability of the algorithm, and our results show that it outperforms current state-of-the-art methods. As such, our research provides a substantial contribution to the field of time series clustering, offering a promising new avenue for further advancements." }, { "figure_ref": [], "heading": "III. METHODOLOGY", "publication_ref": [ "b32" ], "table_ref": [], "text": "This section explains the proposed clustering algorithm and the evaluation methods, including a statistical analysis. Regarding data, we use the UCR archive [33], that is a vital tool for time series researchers, with over one thousand papers employing it. At the time of writing this work, it consists of 128 datasets of different types: devices, ECGs, motion tracking, sensors, simulated data and spectrographs." }, { "figure_ref": [], "heading": "A. Clustering algorithm", "publication_ref": [ "b7", "b30", "b31", "b31", "b33", "b34", "b35", "b33", "b7", "b10", "b9" ], "table_ref": [], "text": "We introduce R-clustering, a new algorithm for time series clustering, which is composed of three stages of data processing elements connected in series. The first element is a feature extractor, comprised of a single layer of static convolutional kernels with random values. This extractor transforms the input time series into an enhanced data representation. The second processing element employs principal component analysis for dimensionality reduction, selecting combination of features that account for the majority of the variance in the input data. The last element is a K-means algorithm [8] that utilizes Euclidean distance, thereby providing the algorithm with its clustering capabilities. Next we describe each of this elements in detail.\n1) Feature extraction: For the feature extractor, we propose a modified version of the one used in Minirocket [32, p. 251] which is based on a previous algorithm called Rocket [31]. In particular, we employ randomly selected values for the bias and adjust the configuration of the hyperparameters to better suit the distinct challenge of clustering. We have conducted an optimization process of the hyperparameters (kernel length and number of kernels in this case). To avoid overfitting the UCR archive we have chosen a development set of 36 datasets, the same group of datasets used in the original algorithm (Rocket)\nOur feature extractor is composed of 500 kernels, each of length 9, optimized for clustering accuracy as the result of a hyperparameter search detailed in Section: I. The kernel weights are restricted to either 2 or -1. As [32] demonstrated, constraining the weight values does not significantly compromise accuracy, but it substantially enhances efficiency.\nDilations represent another significant parameter in the model. Dilation in 1D kernels refers to the expansion of the receptive field of a convolutional kernel in one dimension. This expansion is achieved by inserting gaps between the kernel elements, which increases the distance between the elements and allows the kernel to capture information from a larger area. This technique expands the receptive field of the kernel, allowing them to identify patterns at various scales and frequencies. Our model follows the configuration proposed by [32], which employs varying dilations adapted to the specific time series being processed, to ensure recognition of most potential periodicities and scales in the data. The number of dilations is a fixed function of the input length and padding.\nOther relevant configuration parameter is the selection of the bias values, which are chosen as follows: first each kernel is convolved with a time series selected at random from the dataset. Then, the algorithm draws the bias values randomly from the quantiles of the result of the convolution. This process ensures that the scale of the bias values aligns with that of the output.\nAfter the convolution of the input series with these kernels under the mentioned configuration, the proportion of positive values (PPV) of the resulting series is calculated. The transformed data consists of 500 features (the same as the number of kernels) with values between 0 and 1.\nA thorough examination of the original feature extractor stage reveals there are artificial autocorrelations, not from the time series data, produced by the implementation of the algorithm. This behaviour could affect the performance of the clustering stage of R-clustering, or of a future algorithm using this feature extractor, because it would likely detect these unnatural patterns. Also, these patterns could mask legit features of time series, obtaining misleading results. We have identified the origin of this issue in the selection of the bias values. For a time series X convolved with a kernel W PPV is computed as the proportion of positive values of (W * X-bias) where * denotes convolution. To determine the bias values, our original algorithm selects a training instance X and calculate its convolution with the specified dilation d and kernel: W d . The quantiles of the resulting convolution output are used as the bias values. In the implementation of the original algorithm, we identified manner in which the bias values where sorted produced the artificial autocorrelations. To rectify this, we have randomly permuted them, effectively removing the artificial autocorrelations. (see IV-A for a detailed description of the output of the feature extractor after and before the modification).\n2) Dimensionality reduction with Principal Component Analysis: The \"curse of dimensionality\" can potentially impact the performance of the K-means clustering in the third stage of our algorithm, particularly given our high-dimensional context with 500 features, as detailed in the previous subsection. In high-dimensional space, the distance between data points becomes less meaningful, and the clustering algorithm may struggle to identify meaningful clusters [34]. This is because the distance between any two points tends to become more uniform, leading to the loss of meaningful distance metrics. As the number of dimensions increases, the volume of the space increases exponentially, and the data becomes more sparse, making it difficult to identify meaningful clusters [35]. For these reasons, it is convenient to reduce the number of features to improve the performance of K-means clustering in a context of a high-dimensional space. In our particular case, due to the random nature of the kernel weights used in the convolutions with the input data, we expect that many components of the transformed data may not be significant. Hence, implementing a dimensionality reduction method can be beneficial in multiple ways.\nWe propose using Principal Component Analysis (PCA), a dimensionality reduction technique that can identify the crucial dimensions or combinations of dimensions that account for the majority of data variability. Given that our problem is unsupervised, PCA is specially apt: it focuses on the inherent statistical patterns within the data, independently of any evaluation algorithm. As to why not using PCA directly to the time series data and skipping the convolution transformation, PCA does not take into account the sequential ordering of data, therefore it may not fully capture the underlying structure of time series.\nOne challenge associated with implementing PCA is determining the optimal number of principal components to retain, which will define the final number of features. Common techniques for this include the elbow method and the Automatic Choice of Dimensionality for PCA [36]. The elbow method involves visualizing the explained variance as a function of the number of components, with the 'elbow' in the plot suggesting the optimal number. However, because this technique relies on visual interpretation, it may not be suitable for our automated algorithm. The second method, based on Bayesian PCA, employs a probabilistic model to estimate the optimal number of dimensions. This method, while powerful, might not always be applicable, particularly in the context of high-dimensional data like time series, where it may be challenging to satisfy the assumption of having more samples than features.\nWe therefore opt to determine the number components by analyzing the explained variance of the data introduced by each additional dimension until it is not very significant. We choose to consider increments of 1% as no significant. As experiments from [34] show, the number of selected dimension lies between 10 and 20 which are the dimensions beyond which the effect of the curse of dimensionality can produce instability of an algorithm based on the Euclidean distance. In accordance with this result, our experiments show that most of the times the number of dimensions selected with our method are between 10 and 20.\nIn summary, we incorporate an additional stage into our algorithm that employs Principal Component Analysis (PCA) to reduce the dimensionality of the features prior to the implementation of the K-means algorithm.\n3) K-means with Euclidean distance: The findings in section IV-A, demonstrating the absence of artificial autocorrelations in the output of the first stage, along with the dimensionality reduction via Principal Component Analysis (PCA) of the second stage, suggest that our algorithm's transformation considerably reduces the time series properties of the features following the first two stages. This reduction simplifies the problem, making it more amenable to traditional raw data algorithms which are typically less complex and less demanding in terms of computational resources compared to algorithms designed specifically for time series data, such as those using Dynamic Time Warping (DTW) or shape-based distances. Consequently, in the third stage of R-clustering, we adopt a well-established clustering technique: the K-means algorithm [8] with Euclidean distance. This combination is widely recognized and has been extensively tested within the scientific community for clustering problems [11], [10]. K-means partitions data into a number K (set in advance) of clusters by iteratively assigning each data point to the nearest mean center (centroid) of a cluster. After each new assignment, the centroids are recalculated. When the training process finishes, the resulting centroids can be used to classify new observations. To evaluate the nearest centroid, a distance metric must be defined. Using the Euclidean distance will result in a more efficient algorithm since it has a time complexity of O(n). In contrast, using DTW as a distance metric would result in a time complexity O(n 2 ). Figure 1 provides a schema of the R-clustering algorithm and its stages." }, { "figure_ref": [], "heading": "B. Evaluation method", "publication_ref": [ "b36", "b37", "b38", "b39", "b36", "b40", "b41", "b42", "b43", "b41", "b42", "b32", "b42" ], "table_ref": [], "text": "To the authors' knowledge, the only benchmark for time series clustering using the widely used UCR dataset is the one presented by [37]. This benchmark compares eight popular clustering methods that cover three categories of clustering algorithms (partitional, density-based, and hierarchical) and three distance measures (Euclidean, Dynamic time warping, and shape-based). Our evaluation of R-clustering's performance uses the same 112 datasets (36 development datasets and 76 validations datasets) from the UCR archive as the benchmark, with 16 out of the 128 total datasets omitted; 11 due to their variable lengths, and 5 because they comprise only one class. We ensure a fair and direct comparison by adhering to the same evaluation procedures defined in the benchmark study. The number of clusters for each dataset is known in advance since the UCR archive is labeled, and this number is used as an input for the clustering algorithms in the benchmark and the R-clustering algorithm. In case the number of clusters were not known, different methods exist to estimate them, such as the elbow method, but evaluating these methods is not part of the benchmark's paper or this paper.\nSeveral metrics are available for the evaluation of a clustering process, such as Rand Index (RI) [38], Adjusted Mutual Information [39] or Adjusted Rand Index (ARI) [40]. Among these, ARI is particularly advantageous because its output is independent of the number of clusters while not adjusted metrics consistently output higher values for higher number of clusters [37]. It is essentially an enhancement of the RI, adjusted to account for randomness. Additionally, [41] explicitly recommends ARI as a superior metric for evaluating clustering performance. Based on these reasons and for comparability with the benchmark, we use the Adjusted Rand Index (ARI) to evaluate R-clustering. This choice also ensures compatibility with the benchmark study, facilitating meaningful comparisons of our results.\nIn comparing the performance of various algorithms, we adhere to the methods used in the benchmark study. Specifically, we calculate the following across the same 112 datasets: the number of instances where an algorithm achieves the highest Adjusted Rand Index (ARI) score, denoted as the 'number of wins'; the mean ARI score; and the mean rank of all algorithms. Results in subsection IV-D indicate that R-clustering outperforms the other algorithms across all these measures, demonstrating its efficacy. We also conducted statistical tests to determine the significance of the results and considered any limitations or assumptions of the methods.\nThe problem of comparing multiple algorithms over multiple datasets in the context of machine learning has been treated by several authors [42], [43], [44]. Following their recommendations, we first compare the ranks of the algorithms as suggested by [42] and use the Friedman test to decide whether there are significant differences among them. If the test rejects the null hypothesis (\"there are no differences\"), we try to establish which algorithms are responsible for these differences. As [43] indicates, upon a rejection of the null hypothesis by the Friedman test, we should proceed with another test to find out which algorithms produce the differences, using pairwise comparisons. Following the recommendations of the authors of the UCR Dataset, [33], we choose the Wilcoxon sign-test for the pairwise comparisons between the R-clustering algorithm and the rest. As outlined in subsection Fig. 1: The figure illustrates the various steps involved in the R-clustering algorithm: 1)Initially, the input time series is convolved with 500 random kernels. Following this, the Positive Predictive Value (PPV) operation is applied to each the convolution results, generating 500 features with values spanning between 0 and 1. 2)The next phase involves applying Principal Component Analysis (PCA) for dimensionality reduction. This procedure results in a more manageable set of features, reducing the original 500 to between 10 and 20. Finally, the processed and dimensionality-reduced data are clustered using the K-means algorithm.\nIV-D, we initially conduct comparisons between all benchmark algorithms and R-clustering, employing the latter as a control classifier to identify any significant differences. The results indicate that R-Clustering's superior performance compared to the other algorithms is statistically significant.\nAdditionally, to enhance the insights provided by the benchmark study and following the suggestions from [43], we carry out a new experiment that involves pairwise comparisons among all possible combinations within the set comprising of the benchmark algorithms and R-Clustering." }, { "figure_ref": [], "heading": "C. Implementation and reproducibility", "publication_ref": [ "b44", "b6", "b45", "b18" ], "table_ref": [], "text": "We use Python 3.6 software package on Windows OS with 16GB RAM, Intel(R) Core(TM) i7-2600 CPU 3.40GHz processor for their implementation. In our study, we use a variety of reputable libraries that have been widely used and tested, therefore ensuring reliability. These include:\n• sktime [45], a standard Python library used for evaluating the computation time of the Agglomerative algorithm and for extracting data from the UCR dataset. • The code from [7], which we utilize to evaluate the computation time of the K-shape algorithm. • scikit-learn [46], employed for executing the K-means stage of R-clustering and for evaluating the adjusted rand index. • Functions provided in [19], used for calculating statistical results.\nWe make our code publicly available1 and base our results on a public dataset. This provides transparency and guaranteess full reproducibility and replicability of the paper following the best recommended practices in the academic community." }, { "figure_ref": [], "heading": "IV. RESULTS", "publication_ref": [ "b36" ], "table_ref": [], "text": "This section initiates with an investigation of the outputs generated during the feature extraction stage, as explicated in III-A1. We continue with a search for optimal hyperparameters by examining various algorithm configurations. Following this, we present the results of applying the R-clustering algorithm to 112 datasets extracted from the UCR archive and perform a statistical analysis with other algorithm evaluated in the benchmark study [37] using R-Clustering as a control classifier. Subsequently, we showcase the scalability results of R-Clustering. We conclude this section by conducting a comprehensive statistical comparison of all the benchmark algorithms, along with R-Clustering, thereby contrasting every single one against the rest." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "A. Analysis of the Feature Extraction Stage Output", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows a sample time series from the UCR archive, while figure 3a illustrates its transformation through the original feature extractor, highlighting evident autocorrelations. To analyze these autocorrelation properties, we employ the Ljung-Box test, whose null hypothesis is that no autocorrelations exist at a specified lag. As expected from observing figure 3 (left), the test rejects the null hypothesis for every lag at 95% confidence level, therefore we assume the presence of autocorrelations. To find out whether these periodic properties originate in the time series data, we introduce noisy data in the feature extractor, repeat the Ljung-Box test and still observe autocorrelations at every lag. This observation leads us to conclude that the original feature extractor introduces artificial autocorrelations, which are not inherent to the input time series. After modifying the algorithm as explained in Section III-A1 and introducing the same noise, we rerun the Ljung-Box test. In contrast to the previous findings, the test does not reject the null hypothesis at any lag. Thus, we assume the absence of autocorrelations, indicating that the modified feature extractor works as intended, that is producing noisy data from noisy inputs. For a sample transformation of a time series with the new updated feature extractor see Figure 3 (right) Fig. 2: Sample time-series from Fungi dataset included in the UCR archive. The dataset contains high-resolution melt curves of the rDNA internal transcribed spacer (ITS) region of 51 strains of fungal species. The figure shows the negative first derivative (-dF/dt) of the normalized melt curve of the ITS region of one of such species" }, { "figure_ref": [], "heading": "B. Search for Optimal Hyperparameters", "publication_ref": [ "b5" ], "table_ref": [ "tab_0" ], "text": "We explore several configurations of the algorithm and the effect on clustering accuracy. We focus on the main hyperparameters, specifically the kernel length and the number of kernels. For comparison, we set the number of kernels to range from 100 to 20,000 and kernel lengths to range from 7 to 13 and test various combinations across these entire ranges. We evaluated 20 combinations of hyperparameters on the development set of datasets. Among these, the combination of 500 kernels of length 9 yielded the highest number of wins (6), the best mean rank (8.12), and the second-highest mean accuracy (0.31), as shown in Table I. Based on these results, we selected the configuration of 500 kernels of length 9. Not only did this configuration obtains top positions in two of three categories, but its use of 500 kernels also made it faster than competitors utilizing 1000 or more kernels. " }, { "figure_ref": [], "heading": "C. Dataset-level assessment", "publication_ref": [ "b36", "b46" ], "table_ref": [], "text": "We evaluate R-clustering on the validation dataset from the UCR archive using the same procedures as in [37]. As the problem of finding the optimal partition of n data points into k clusters is an NP-hard problem with a non-convex loss function, we run the algorithm multiple times with different randomly initialized centroids to avoid local minima and enhance performance. Specifically, in line with the procedure adopted in the benchmark study, the algorithm is executed ten times, and the highest ARI score is chosen from all runs. The K-means algorithm requires the number of clusters to be specified in advance. While different methods, such as the elbow method [47], are available to estimate this number, evaluating these methods is not part of this benchmark. Given that we are using a labelled dataset in this experiment, the actual number of clusters is provided as input to the Rclustering algorithm." }, { "figure_ref": [], "heading": "D. Results of the R-clustering algorithm", "publication_ref": [ "b32", "b32", "b41", "b47", "b43" ], "table_ref": [ "tab_6" ], "text": "We compare the performance of R-clustering to the other algorithms of the benchmarks over the validation datasets under several perspectives using the ARI metric. We count the number of wins considering that ties do not sum and calculate the mean score and rank. R-clustering obtains:\n• the highest number of wins (33) In addition, we perform a statistical comparison of Rclustering with the rest of the algorithms, in line with the recommendations provided by [33] and [42]. These recommendations suggest comparing the rank of the classifiers on each dataset. Initially, we perform the Friedman test [48] at a 95% confidence level, which rejects the null hypothesis of no significant difference among all the algorithms. According to [44], upon rejecting the null hypothesis, it becomes neccesary to identify the significant differences among the algorithms. To accomplish this, we conduct a pairwise comparison of R-clustering versus the other classifiers using the Wilcoxon To wrap up this subsection, we incorporate the performance results of the R-Clustering algorithm without the PCA stage (See Table VI). This step is taken to validate and understand the contribution made by the PCA stage to the overall performance of the algorithm. In addition to being faster, R-Clustering outperforms R-Clustering without the PCA stage. However, it's worth noting that R-Clustering without PCA still achieves significant results and secures the second position across all three measured magnitudes. " }, { "figure_ref": [ "fig_1" ], "heading": "E. Computation Time and Scalability", "publication_ref": [ "b48" ], "table_ref": [], "text": "In this subsection, we compare the computation time and scalability of the R-Clustering algorithm with the Agglomerative algorithm and K-Shape, which are the second and third best performers based on winning counts. These two algorithms represent diverse approaches to time clustering, with Agglomerative employing a hierarchical strategy, and K-Shape utilizing a shape-based approach. The total computational time across all 112 datasets is 7 minutes for Rclustering (43 minutes for R-Clustering without PCA), 4 hours 32 minutes for K-shape, and 8 minutes for the Agglomerative algorithm. Nevertheless, it is important to highlight that the time complexity of the Agglomerative algorithm is O(n 2 ) [49]. This characteristic might pose computational challenges for larger datasets and may limit proper scalability, as the subsequent experiment will show.\nIn the scalability study, we use two recent datasets not included in the benchmark: DucksAndGeese and InsectSound. DucksAndGeese dataset consists of 100 time series across 5 classes, making it the longest dataset in the archive with a length of 236,784 points. The InsectSound dataset comprises 50,000 time series, each with a length of 600 points and spread across 5 classes. The results are depicted in figure 4, which demonstrates the scalability of R-clustering in terms of both time series length and size. R-clustering scales linearly with respect to two parameters: the length of the time series and the number (or size) of time series in the dataset. For smaller dataset sizes, Agglomerative algorithm performs the fastest, even for long time series. However, R-clustering outperforms the other algorithms when dealing with moderate to large datasets.\nDespite the fact that the training stage of the Agglomerative algorithm is faster for certain data sizes, it exhibits drawbacks in some applications. R-clustering, like other algorithms based on K-means, can classify new data points easily using the centroids calculated during the training process. This is accomplished by assigning the new instance to the class represented by the nearest centroid. In contrast, the Agglomerative algorithm does not generate any parameter that can be applied to new instances. Consequently, when using Agglomerative to classify new data, the entire training process must be repeated, incorporating both the training data and the new observation." }, { "figure_ref": [], "heading": "F. Statistical analysis of the benchmark", "publication_ref": [ "b42", "b41" ], "table_ref": [ "tab_6" ], "text": "To strengthen the results of the cited benchmark, in accordance with the recommendations provided by [43], we repeat the pairwise comparisons among each of the algorithms in the benchmark, not only with the newly presented method as a control classifier. The results are displayed in table VII, which indicates which pairs of algorithms exhibit a significant difference in performance regarding mean rank at a 95% confidence level. It is important to note that the threshold for alpha value is not fixed at 0.05, but it is adjusted according to the Holm correction to manage the family-wise error [42]. In this comparison, we notice that R-Clustering doesn't exhibit significant differences with certain algorithms as it did in the earlier comparison where it was the control classifier. The reason for this difference is the increased number of pairwise tests being conducted, which, in turn, diminishes the overall statistical power of the experiment.\nThe authors of the UCR dataset recommend showcasing these types of comparisons through a critical differences diagram, grouping together the algorithms which exhibit no significant difference. However, in our study, the diagram is not particularly insightful due to the large number of resulting groups. Therefore, instead of the diagram, we present the results in Table VII, which illustrates the significant differences between each pair of algorithms." }, { "figure_ref": [], "heading": "V. CONCLUSIONS AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "We have presented R-clustering, a clustering algorithm that incorporates static random convolutional kernels in conjuction with Principal Component Analysis (PCA). This algorithm transforms time series into a new data representation by first utilizing kernels to extract relevant features and then applying PCA for dimensionality reduction. The resultant embedding serves as an input for a K-means algorithm with Euclidean distance. We evaluated this new algorithm adhering to the procedures of a recent clustering benchmark study that utilizes the UCR archive -the largest public archive of labeled time-series datasets -and state-of-the-art clustering algorithms. Notably, R-clustering obtains the first place among all the evaluated algorithms across all datasets, using the same evaluation methods deployed in the reference study. Furthermore, we demonstrate the scalability of R-clustering concerning both time series length and dataset size, becoming the fastest algorithm for large datasets. Finally, in alignment with recent recommendations from the machine learning academic community, we strengthen the cited benchmark results with a pairwise statistical comparison of the included algorithms. This statistical analysis, coupled with the fact that the code used for generating the results in this paper is publicly available, should facilitate testing future time series clustering algorithms.\nThe effectiveness of random kernels in improving the clustering accuracy of time series has been demonstrated through the experimental results. This finding opens up several future research directions, such as adapting R-clustering for multivariate series, extending its use to other types of data like image clustering, and investigating the relationship between the number of kernels and performance. We anticipate that such a study could potentially suggest an optimal number of kernels depending on the time series length.\nThe excellent performance of the convolution operation with static random kernels has been demonstrated through experimental results. We, therefore, also encourage the academic community to engage in a more detailed analysis of the theoretical aspects of random kernel transformations. Progress in this direction could enhance our understanding of the process.\nIn conclusion, the R-clustering algorithm has shown promising results in clustering time series data. Its incorporation of static random convolutional kernels and PCA, along with its scalability and superior performance, make it a valuable tool for various applications. Future research and analysis in this field will contribute to the advancement of time series clustering algorithms and our understanding of random kernel transformations." } ]
Time series data, spanning applications ranging from climatology to finance to healthcare, presents significant challenges in data mining due to its size and complexity. One open issue lies in time series clustering, which is crucial for processing large volumes of unlabeled time series data and unlocking valuable insights. Traditional and modern analysis methods, however, often struggle with these complexities. To address these limitations, we introduce R-Clustering, a novel method that utilizes convolutional architectures with randomly selected parameters. Through extensive evaluations, R-Clustering demonstrates superior performance over existing methods in terms of clustering accuracy, computational efficiency and scalability. Empirical results obtained using the UCR archive demonstrate the effectiveness of our approach across diverse time series datasets. The findings highlight the significance of R-Clustering in various domains and applications, contributing to the advancement of time series data mining.
Time Series Clustering With Random Convolutional Kernels
[ { "figure_caption": "Fig. 3 :3Fig. 3: Transformed sample time-series from Fungi dataset (figure 2) with the original feature extractor from Minirocket and the modified feature extractor", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :4Fig. 4: Scalability comparison of R-Clustering and the second and third-best performing algorithms with respect to time series length (left) and the number of time series in the dataset (right).", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Results for different hyperparameters configurations across the development set. The first term represents the number of kernels and the second the kernels's length. For instance, '500-9' stands for a configuration with 500 kernels of length 9", "figure_data": "Algorithm Mean rank Mean accuracy Winning count500-98.120.31061000-118.170.299210000-98.610.3062500-118.920.29601000-99.080.314110000-79.140.30215000-119.180.30305000-99.240.305310000-119.620.30001000-79.830.29515000-79.860.29701000-1310.220.2861500-710.440.2841500-1310.680.28015000-1311.220.284010000-1311.290.2861100-912.070.2563100-1114.080.2421100-1314.610.2263100-715.600.2141", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "followed by Agglomerative(13) (see table II) • the best mean rank (3.47) followed by k-means-DTW (4.47) (see table III) • the best mean ARI score (0.324) followed by Agglomerative (0.276) (see tabgle IV).", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "number of wins for each algorithm over the validation datasets in terms of best ARI", "figure_data": "AlgorithmWinning countR-Clustering33Agglomerative (Euclidean)13K-shape10Density Peaks (DTW)5K-means (DTW)4K-means (Euclidean)3C-means (Euclidean)2K-medoids (Euclidean)1Density Peaks (Euclidean)1", "figure_id": "tab_2", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "Mean rank for R-Clustering and the rest of the algorithms considered in the benchmark", "figure_data": "AlgorithmMean rankR-Clustering3.47Agglomerative (Euclidean)4.47K-means (Euclidean)4.49K-means (DTW)4.60K-shape4.84C-means (Euclidean)5.20K-medoids (Euclidean)5.51Density Peaks (Euclidean)5.89Density Peaks (DTW)6.53", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Average ARI for each algorithm over the validation datasets", "figure_data": "AlgorithmAverage ARIR-Clustering0.324Agglomerative (Euclidean)0.276K-means (Euclidean)0.255K-means (DTW)0.248C-means (Euclidean)0.238K-medoids (Euclidean)0.232K-shape0.206Density Peaks (Euclidean)0.201Density Peaks (DTW)0.165signed-rank test at a 95% confidence level. This test in-cludes the Holm correction for the confidence level, whichadjusts for family-wise error (the chance of observing atleast one false positive in multiple comparisons). Table Vpresents the p-values from the Wilcoxon signed-rank testcomparing R-Clustering with each of the other algorithms,together with the adjusted alpha values. The statistical rankanalysis can be summarized as follows: R-clustering emergesas the best-performing algorithm with an average rank of 3.47as presented in Table III. The pairwise comparisons usingR-Clustering as control classifier indicate that R-clusteringpresents significant differences in terms of mean rank withall other algorithms.", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "Results of the Wilcoxon signed-rank test between R-Clustering and the rest of the algorithms in consideration. The left column indicates the algorithm R-Clustering is compared to, the second column indicates the p-value of the Wilcoxon signed-rank test and the third column the alpha value with the Holm correction at a 95% confidence level", "figure_data": "Algorithmp-valuealpha w/ Holm correctionDensity Peaks (DTW)0.0000010.006250Density Peaks (Euclidean)0.0000100.007143K-shape0.0000120.008333K-medoids (Euclidean)0.0000480.010000C-means (Euclidean)0.0006330.012500K-means (Euclidean)0.0012350.016667K-means (DTW)0.0040600.025000Agglomerative (Euclidean)0.0456670.050000", "figure_id": "tab_5", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "Results for R-Clustering, R-Clustering without PCA and the rest of the algorithms considered in the benchmark", "figure_data": "AlgorithmMean rank Mean accuracy Winning countR-Clustering4.000.32421R-Clustering W/O PCA4.070.29417Agglomerative (Euclidean)5.090.2768K-means (Euclidean)5.130.2553K-means (DTW)5.260.2483K-shape5.430.2069C-means (Euclidean)5.880.2382K-medoids (Euclidean)6.200.2321Density Peaks (Euclidean)6.660.2011Density Peaks (DTW)7.300.1655", "figure_id": "tab_6", "figure_label": "VI", "figure_type": "table" } ]
Jorge Marco-Blanco; Rubén Cuevas
[ { "authors": "S Aghabozorgi; A S Shirkhorshidi; T Y Wah", "journal": "Information Systems", "ref_id": "b0", "title": "Time-series clustering-a decade review", "year": "2015" }, { "authors": "Q Yang; X Wu", "journal": "International Journal of Information Technology & Decision Making", "ref_id": "b1", "title": "10 challenging problems in data mining research", "year": "2006" }, { "authors": "R P Kumar; P Nagabhushan", "journal": "", "ref_id": "b2", "title": "Time series as a point-a novel approach for time series cluster visualization", "year": "2006" }, { "authors": "A Lakhina; M Crovella; C Diot", "journal": "ACM SIGCOMM computer communication review", "ref_id": "b3", "title": "Mining anomalies using traffic feature distributions", "year": "2005" }, { "authors": "I C Mcdowell; D Manandhar; C M Vockley; A K Schmid; T E Reddy; B E Engelhardt", "journal": "PLoS computational biology", "ref_id": "b4", "title": "Clustering gene expression time series data using an infinite gaussian process mixture model", "year": "2018" }, { "authors": "M Längkvist; L Karlsson; A Loutfi", "journal": "Pattern Recognition Letters", "ref_id": "b5", "title": "A review of unsupervised feature learning and deep learning for time-series modeling", "year": "2014" }, { "authors": "J Paparrizos; L Gravano", "journal": "", "ref_id": "b6", "title": "k-shape: Efficient and accurate clustering of time series", "year": "2015" }, { "authors": "J Macqueen", "journal": "", "ref_id": "b7", "title": "Classification and analysis of multivariate observations", "year": "1967" }, { "authors": "D J Berndt; J Clifford", "journal": "", "ref_id": "b8", "title": "Using dynamic time warping to find patterns in time series", "year": "1994" }, { "authors": "A Likas; N Vlassis; J J Verbeek", "journal": "Pattern recognition", "ref_id": "b9", "title": "The global k-means clustering algorithm", "year": "2003" }, { "authors": "A K Jain; M N Murty; P J Flynn", "journal": "ACM computing surveys (CSUR)", "ref_id": "b10", "title": "Data clustering: a review", "year": "1999" }, { "authors": "Y Bengio; A Courville; P Vincent", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b11", "title": "Representation learning: A review and new perspectives", "year": "2013" }, { "authors": "F Sebastiani", "journal": "ACM computing surveys (CSUR)", "ref_id": "b12", "title": "Machine learning in automated text categorization", "year": "2002" }, { "authors": "D G Lowe", "journal": "Ieee", "ref_id": "b13", "title": "Object recognition from local scale-invariant features", "year": "1999" }, { "authors": "T Kailath", "journal": "Prentice-Hall", "ref_id": "b14", "title": "Linear systems", "year": "1980" }, { "authors": "J G Proakis; D G Manolakis", "journal": "", "ref_id": "b15", "title": "Digital signal processing: principles, algorithms, and applications", "year": "1996" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "K Greff; R K Srivastava; J Koutník; B R Steunebrink; J Schmidhuber", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b17", "title": "Lstm: A search space odyssey", "year": "2016" }, { "authors": "H Ismail Fawaz; G Forestier; J Weber; L Idoumghar; P.-A Muller", "journal": "Data mining and knowledge discovery", "ref_id": "b18", "title": "Deep learning for time series classification: a review", "year": "2019" }, { "authors": "Q Ma; J Zheng; S Li; G W Cottrell", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Learning representations for time series clustering", "year": "2019" }, { "authors": "D C Ciresan; U Meier; J Masci; L M Gambardella; J Schmidhuber", "journal": "", "ref_id": "b20", "title": "Flexible, high performance convolutional neural networks for image classification", "year": "2011" }, { "authors": "S Pereira; A Pinto; V Alves; C A Silva", "journal": "IEEE transactions on medical imaging", "ref_id": "b21", "title": "Brain tumor segmentation using convolutional neural networks in mri images", "year": "2016" }, { "authors": "M Caron; P Bojanowski; A Joulin; M Douze", "journal": "", "ref_id": "b22", "title": "Deep clustering for unsupervised learning of visual features", "year": "2018" }, { "authors": "J Xie; R Girshick; A Farhadi", "journal": "PMLR", "ref_id": "b23", "title": "Unsupervised deep embedding for clustering analysis", "year": "2016" }, { "authors": "Z Wang; W Yan; T Oates", "journal": "IEEE", "ref_id": "b24", "title": "Time series classification from scratch with deep neural networks: A strong baseline", "year": "2017" }, { "authors": "B Zhao; H Lu; S Chen; J Liu; D Wu", "journal": "Journal of Systems Engineering and Electronics", "ref_id": "b25", "title": "Convolutional neural networks for time series classification", "year": "2017" }, { "authors": "I Goodfellow; Y Bengio; A Courville", "journal": "MIT press", "ref_id": "b26", "title": "Deep learning", "year": "2016" }, { "authors": "G.-B Huang", "journal": "Cognitive Computation", "ref_id": "b27", "title": "An insight into extreme learning machines: random neurons, random features and kernels", "year": "2014" }, { "authors": "K Jarrett; K Kavukcuoglu; M Ranzato; Y Lecun", "journal": "IEEE", "ref_id": "b28", "title": "What is the best multi-stage architecture for object recognition?", "year": "2009" }, { "authors": "A M Saxe; P W Koh; Z Chen; M Bhand; B Suresh; A Y Ng", "journal": "", "ref_id": "b29", "title": "On random weights and unsupervised feature learning", "year": "2011" }, { "authors": "A Dempster; F Petitjean; G I Webb", "journal": "Data Mining and Knowledge Discovery", "ref_id": "b30", "title": "Rocket: exceptionally fast and accurate time series classification using random convolutional kernels", "year": "2020" }, { "authors": "A Dempster; D F Schmidt; G I Webb", "journal": "", "ref_id": "b31", "title": "Minirocket: A very fast (almost) deterministic transform for time series classification", "year": "2021" }, { "authors": "H A Dau; A Bagnall; K Kamgar; C.-C M Yeh; Y Zhu; S Gharghabi; C A Ratanamahatana; E Keogh", "journal": "IEEE/CAA Journal of Automatica Sinica", "ref_id": "b32", "title": "The ucr time series archive", "year": "2019" }, { "authors": "K Beyer; J Goldstein; R Ramakrishnan; U Shaft", "journal": "Springer", "ref_id": "b33", "title": "When is \"nearest neighbor\" meaningful?", "year": "1999" }, { "authors": "C C Aggarwal; A Hinneburg; D A Keim", "journal": "Springer", "ref_id": "b34", "title": "On the surprising behavior of distance metrics in high dimensional space", "year": "2001" }, { "authors": "T Minka", "journal": "Advances in neural information processing systems", "ref_id": "b35", "title": "Automatic choice of dimensionality for pca", "year": "2000" }, { "authors": "A Javed; B S Lee; D M Rizzo", "journal": "Machine Learning with Applications", "ref_id": "b36", "title": "A benchmark study on time series clustering", "year": "2020" }, { "authors": "W M Rand", "journal": "Journal of the American Statistical association", "ref_id": "b37", "title": "Objective criteria for the evaluation of clustering methods", "year": "1971" }, { "authors": "N X Vinh; J Epps; J Bailey", "journal": "", "ref_id": "b38", "title": "Information theoretic measures for clusterings comparison: is a correction for chance necessary?", "year": "2009" }, { "authors": "L Hubert; P Arabie", "journal": "Journal of classification", "ref_id": "b39", "title": "Comparing partitions", "year": "1985" }, { "authors": "D Steinley", "journal": "Psychological methods", "ref_id": "b40", "title": "Properties of the hubert-arable adjusted rand index", "year": "2004" }, { "authors": "J Demšar", "journal": "The Journal of Machine learning research", "ref_id": "b41", "title": "Statistical comparisons of classifiers over multiple data sets", "year": "2006" }, { "authors": "S Garcia; F Herrera", "journal": "Journal of machine learning research", "ref_id": "b42", "title": "An extension on\" statistical comparisons of classifiers over multiple data sets\" for all pairwise comparisons", "year": "2008" }, { "authors": "A Benavoli; G Corani; F Mangili", "journal": "The Journal of Machine Learning Research", "ref_id": "b43", "title": "Should we really use post-hoc tests based on mean-ranks?", "year": "2016" }, { "authors": "M Löning; A Bagnall; S Ganesh; V Kazakov; J Lines; F J Király", "journal": "", "ref_id": "b44", "title": "sktime: A unified interface for machine learning with time series", "year": "2019" }, { "authors": "F Pedregosa; G Varoquaux; A Gramfort; V Michel; B Thirion; O Grisel; M Blondel; P Prettenhofer; R Weiss; V Dubourg; J Vanderplas; A Passos; D Cournapeau; M Brucher; M Perrot; E Duchesnay", "journal": "Journal of Machine Learning Research", "ref_id": "b45", "title": "Scikit-learn: Machine learning in Python", "year": "2011" }, { "authors": "R L Thorndike", "journal": "", "ref_id": "b46", "title": "Who belongs in the family", "year": "1953" }, { "authors": "M Friedman", "journal": "Journal of the american statistical association", "ref_id": "b47", "title": "The use of ranks to avoid the assumption of normality implicit in the analysis of variance", "year": "1937" }, { "authors": "M J De Hoon; S Imoto; J Nolan; S Miyano", "journal": "Bioinformatics", "ref_id": "b48", "title": "Open source clustering software", "year": "2004" } ]
[]
2023-05-17
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3" ], "table_ref": [], "text": "There has been a recent increase in machine learning driven topology optimization approaches, particularly using neural networks for performing topology optimization. Both data-driven and online training based approaches have been explored. Data-driven approaches require large training database generation and a long training time. They perform instant optimal topology generation during inference time. Online training approaches use the neural network to represent the density field of a single to a small subset of designs for better parameterization. The online training approaches require similar or more time compared to conventional topology optimization approaches like SIMP (Solid Isotropic Material with Penalisation) [1,2]. We find that the results of the online training approaches, particularly the convergence speed, can be improved through insights derived from the mechanical aspects of the problem.\nMachine learning driven topology optimization approaches offer the advantage of being easily able to accommodate additional insights in the form of pre-computed fields. The usage of these fields has been explored in data-driven approaches such as TopologyGAN [3], which use physical fields such as von Mises stress and strain energy density for achieving better results. However, there has been no work incorporating these physical fields in the online training topology optimization setting. In this work, we further improve upon TOuNN (Topology Optimization using Neural Networks), an online training approach proposed by Chandrasekhar and Suresh [4], by adding a strain energy field in addition to the domain coordinates as a conditioning input to the neural network. We show that this improves the convergence speed and can give a better compliance. With the additional strain energy field as a conditioning input, the neural network not only learns a mapping function between the domain coordinates to the density field output but also between the strain energy field to the density field output. Ideally, if the conditioning field is the same as the converged topology, then the neural network only needs to learn a constant function which is the identity function. However, the converged topology is not known at the beginning of the optimization. Thus, the strain energy field is used as a good alternative since it can be computed through a single function call of Finite Element Analysis (FEA) Figure 1: The strain energy field is calculated at the beginning of the optimization based on the boundary condition. The strain energy conditioning field is fixed throughout the training. Domain coordinates and the strain energy value at each coordinate point is used as the input to the neural network. The neural network outputs density ρ at each coordinate point. By sampling coordinate point across the design domain, we obtain the density field. From the density field, we calculate the current volume fraction and the compliance from a FEA solver. The compliance and volume fraction is then formulated as a loss function which is used in back propagation of the training process until convergence.\nprior to the online training of the neural network. We verify the performance increase obtained with this additional conditioning input across parametric experiments with varying boundary conditions and volume fractions.\nThe code for running the experiments in this paper can be found at: https://github.com/HongRayChen/Hybrid-TopOpt" }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b4", "b0", "b1", "b5", "b6", "b7", "b8", "b8" ], "table_ref": [], "text": "Conventional topology optimization: Bendsøe and Kikuchi [5] introduced the homogenization approach for topology optiimization. The SIMP method [1,2] considers the relative material density in each element of the Finite Element (FE) mesh as design variables, allowing for a simpler interpretation and optimised designs with more clearly defined features. Other common approaches to topology optimization include the level-set method [6,7] and evolutionary algorithms [8].\nAll these methods use an iterative process to create a complex mapping from problem characteristics (supports, loads and objective function) to an optimised structure, where each iteration has an expensive FEA calculation involved. A more accurate and detailed solution can be obtained with greater number of elements in the FE mesh, however this increases the computational cost. Therefore, current developments within the field are strongly motivated by the desire to either limit the number of iterations needed to obtain an optimised structure or the computational cost of completing an iteration [9]. Recent advances in deep learning, particularly for image analysis tasks, have showed potential for removing the expensive FEA iterations required until the convergence of the topology in the conventional topology optimization approaches. Hence, various topology optimization approaches that utilize neural networks have been proposed. Woldseth et al. [9] provide an extensive overview on this topic." }, { "figure_ref": [], "heading": "Data-driven topology optimization:", "publication_ref": [ "b9", "b10", "b11", "b12", "b13", "b14", "b2", "b15", "b3", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b3" ], "table_ref": [], "text": "We refer to data-driven topology optimization methods as those that aim to learn a neural network model from a database of topology optimization results for instant prediction of the optimal topology. Many methods rely on Convolutional Neural Networks (CNN) for their capabilities to learn from a large set of image data. Banga et al. [10] used a 3D encoder-decoder CNN to generate 3D topology results and show that interpolating the final output using the 3D CNN from the initial iterations obtained from the 'TopOpt' [11] solver, offers a 40% reduction in time over the conventional approach of using the solver alone. Yu et al. [12] use a conditional generative adversarial network (cGAN) in addition to CNN based encoder-decoder network. However, the results indicate there sometimes there may be disconnections present in the predicted topology which may drastically affect the compliance values. Nakamura and Suzuki [13] improve on the results with their direct design network and with a larger dataset, however, disconnections are still observed in some solutions. Behzadi and Ilies ¸ [14] used deep transfer learning with CNN. Zheng et al. [15] used U-net CNN for 3D topology synthesis. Nie at al. [3] used various physical fields computed on the original, unoptimized material domain, as inputs to the generator of a cGAN and achieved a 3 times Figure 2: Two method is evaluated in terms of processing of the strain energy conditioning field. We used a gamma filter in (a-c) and a log filter in (d) Figure 3: For the gamma filtering of the conditioning field, we adjust the gamma based on the volume fraction target of the optimization reduction in mean square error as compared to a baseline cGAN. Mazé and Ahmed [16] show that diffusion models can outperform GANs for this task. They use regressor and classifier guidance to ensure that the generated structures are manufacturable and mechanical compliance has been minimized. All these data-driven approaches aim to reduce optimal topology prediction time but face difficulties in generalization. Though over the years there have been improvements on the generalization capability, suitable training dataset generation is not trivial, especially for the 3D domain, and satisfactory and reliable results have not been achieved yet for direct use in real-world problems.\nOnline training topology optimization: We refer to online training topology optimization methods as those which do not use any prior data, rather train a neural network in an self-supervised manner for learning the optimal density distribution/topology. Chandrasekhar and Suresh [4] explored a online approach where the density field is parameterized using a neural network. Fourier projection based neural network for length scale control [17] and application for multi-material topology optimization [18] has also been explored . Deng and To [19] propose topology optimization with Deep Representation Learning, with a similar concept of re-parametrization, and demonstrate the effectiveness of proposed method on minimum compliance and stress-constrained problems. Deng and To [20] also propose a neural network based method for level-set topology optimization, where the implicit function of level-set is described by a fully connected deep neural network. Zehnder et al. [21] effectively leverage neural representations in the context of mesh-free topology optimization and use multilayer perceptrons to parameterize both density and displacement fields. It enables self-supervised learning of continuous solution spaces for topology optimization problems. Mai et al. [22] develop a similar approach for optimum design of truss structures. Hoyer et al. [23] use CNNs for density parametrization and directly enforce the constraints in each iteration, reducing the loss function to compliance only. They observe that the CNN solutions are qualitatively different from the baselines and often involve simpler and more effective structures. Zhang at al. [24] adopt a similar strategy and show solutions for different optimization problems including stress-constrained problems and compliant mechanism design.\nGeneralization is not an issue with all these online training topology optimization methods. However, the computational time and cost is similar to traditional topology optimization approaches. An advantage offered is that the density representation is independent of the FE mesh and because of the analytical density-field representation, sharper structural boundaries can be obtained [4]. We show that by adding an initial condition field as an extra input, we can improve the convergence speed and get better results." }, { "figure_ref": [], "heading": "PROPOSED METHOD", "publication_ref": [ "b24" ], "table_ref": [], "text": "In our proposed method, the density distribution of the geometry is directly represented by the topology neural network. The strain energy field and the compliance used for backpropagation is calculated from an FE solver. The program is implemented in Python and backpropagation of the loss function into each module is handled by the machine learning package TensorFlow [25]." }, { "figure_ref": [], "heading": "Neural network", "publication_ref": [ "b16" ], "table_ref": [], "text": "The topology network T (X) (Figure 1), learns a density field in a different manner as compared to typical topology optimization which represents the density field as a finite element mesh. The topology neural network takes in domain coordinates x, y, as well as the strain energy value e at coordinate x, y. The strain energy value gets concatenated with the domain coordinates to form the input to the topology network, X = [x, y, e]. The domain coordinates are normalized between -0.5 to 0.5 for the longest edge. It outputs the density value ρ at each coordinate point. The domain coordinates represent the center of each element in the design domain. During topology optimization, a batch of domain coordinates that correspond to the mesh grid and the corresponding strain energy field is fed into the topology network. The output is then sent to the Finite Element Analysis (FEA) solver. The solver outputs the compliance which is combined with the volume fraction violation as a loss. The loss is then backpropagated to learn the weights of the topology network.\nFor the topology network design, we employed a simple architecture that resembles the function expression of f (x) = wsin(kx+b). Similar neural network architectures have been used to control the length scale of geometry in topology optimization [17]. The conditioned domain coordinates are multiplied with a kernel K. The kernel K regulates the frequency of the sine function. We add a constant value of 1 to break the sine function's rotation symmetry around the origin. We use a Sigmoid function to guarantee the output is between 0 and 1. The topology network can be formulated as follows:\nT (X) = σ(W sin(KX + 1))\nwhere: We can upsample the 3D coordinate input or only sample specific regions of the density field to manipulate the resolution of the discretized visualization. Due to the strain energy conditioning field computed from the finite element mesh grid, interpolation needs to be used to calculate the intermediate values when upsampling the domain coordinates.\nX: Domain coordinate input, X = (x" }, { "figure_ref": [], "heading": "Strain energy conditioning field", "publication_ref": [], "table_ref": [], "text": "The strain energy conditioning field is used to augment the domain coordinate input. We calculate the conditioning field from the initial homogeneous density domain. In topology optimization, for a 2D problem with n elements of four nodes each, the strain energy field E can be calculated as follows:\nE = (U e × S e ) • U e(2)\nwhere:\nU e : the displacement matrix, n × 8 S e : the element stiffness matrix, 8 × 8\nThe summation is along the axis containing the values for each element.\nIn most topology optimization implementations, the compliance is then calculated by summation of the above strain energy for all elements.\nThe strain energy field can vary greatly in range depending on the problem domain size, boundary condition, and geometry constraints. Therefore, normalization needs to be done to regulate the value range of the strain energy field. Otherwise, the range of the strain energy field will deviate from the normalized range of the domain coordinates. Furthermore, a simple normalization will not suffice as the high max value of the strain energy field reduces the amplitude of other relevant features and patterns (Figure 2 (a)). We explore gamma and logarithmic filtering to normalize the strain energy field. For the gamma filtering, we clip the strain energy field by using the 99th percentile, P 99 . After clipping, more details of the field E c can be seen (Figure 2 (b)). We also further adjust the feature of the strain energy field by using gamma correction. The gamma value is set to be the complement of the target volume fraction V * for the optimization (γ = 1-V * ). The effect of the gamma correction based on the volume fraction is illustrated in Figure 3. As the volume fraction increases, the edge feature in the strain energy field is more and more pronounced. Finally, after the gamma correction step, the strain energy field is normalized between 0 and 0.4 to obtain the processed field E p . The processing step on the gamma filtering of strain energy field can be summarized in the following equation:\nE c = min(E, P 99 )(3)\nE γ = 0.4 E c -min(E c ) max(E c ) -min(E c ) γ (4)\nFor the logarithmic filtering, we do not clip the value, instead, the log filter is directly applied to the strain energy field and then normalized between 0 and 0.4. We determine this range empirically to give the best results.\nE log = 0.4 log E -min(log E) max(log E) -min(log E)(5)" }, { "figure_ref": [], "heading": "Online topology optimization with neural network", "publication_ref": [ "b25", "b3" ], "table_ref": [], "text": "During optimization, the topology network outputs the density value at the center for each element. These density values are then sent to the finite element solver to calculate compliance based on the SIMP interpolation.\nThe finite element solver is treated as a black box within the neural network. It takes in the density of each element and outputs the compliance and the sensitivity for each element with respect to the compliance. Variables that are being optimized are the weights W and kernels K of the neural network. Adam [26] is used to train the neural network.\nThe constrained optimization problem needs to be transferred into unconstrained minimization problem for neural network. We adopt the loss function formulated by Chandrasekhar and Suresh [4] of compliance minimization and volume fraction constraint. The combined loss function is In the optimization, the target volume fraction V * is an equality constraint and ρ is the volume fraction of the current design. When α increased to infinity, the equality constraint is satisfied. We assign a maximum value of 100 for α with initial value of 1 and gradually increase α every iteration. c is the current compliance and c 0 is the initial compliance calculated on the design domain with the uniform volume fraction V * .\nL = c c 0 + α( ρ V * -1) 2(6)" }, { "figure_ref": [], "heading": "RESULTS AND DISCUSSIONS", "publication_ref": [], "table_ref": [], "text": "The possible combinations of boundary conditions, problem size, and configurations is enormous. It is impossible for us to cover all. To demonstrate the effectiveness of our proposed approach, we explore both a beam problem and a parametric study in 2D. In the beam problem, we showcase the convergence of the network's output and the convergence history. In the parametric study, problems across different boundary conditions and volume fractions are explored. We report the compliance value where subscript FENN represents Finite Element (FE) compliance solver with Neural Network (NN) as topology representation, and FENNCF as neural topology optimization with strain energy Conditioning Field (CF). For these two experiments, the problem size is 40×20 pixels. All experiments are run on a PC with i7-12700K as processor, 32 GB of RAM, and Nvidia RTX3080 GPU." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Beam example", "publication_ref": [], "table_ref": [], "text": "Our first experiment is the beam example. The left side of the domain is fixed and a downward point load is on the center-right side. The boundary condition illustration and the strain energy conditioning field are shown in Figure 4.\nThe target volume fraction is 0.3. We run the online topology optimization for a total of 1000 epochs.\nThe convergence history plot is illustrated in Figure 4 (c). We observe that by epoch 50, with the strain energy conditioning field, the network's compliance takes over the lead and maintains lower compliance all the way to the end of the training epochs. We also show the density field snapshot and the corresponding compliance during training in Figure 4 (b). Analyzing the geometry of the neural network with the conditioning field, we observe that there is a subtle difference compared to without the conditioning field. The neural network with conditioning field shares greater similarities to the strain energy field where the top and bottom edges are shorter. We can also observe that most of the geometry convergence happens between 0 and 400 epochs. Between 400 to 1000 epochs, the geometry remained relatively unchanged. The only change being a darker tone of red, showing the density values get pushed closer towards 1. In both of the examples, the final volume fraction is within 1% error of the given target volume fraction. Therefore, we do not include the volume fraction convergence plot." }, { "figure_ref": [ "fig_4", "fig_5", "fig_5", "fig_5", "fig_4", "fig_4" ], "heading": "Parametric study", "publication_ref": [ "b26", "b3", "b26" ], "table_ref": [], "text": "We set up a parametric study to analyze the effectiveness of the gamma and log filter of the conditioning field. The boundary condition setup is illustrated in Figure 5 (a). The bottom right loading point is varied across the region highlighted in green which accounts for 50 load conditions. We also vary the target volume fraction between 0.2 to 0.5 with an increment of 0.1. In total, this sums up to 200 total combinations. In the previous beam example, we observe that geometries do not change significantly after 400 epochs, therefore we limit the total epochs for the parametric study to 400 epochs.\nThe parametric study result is summarized in Figure 6. In Figure 6 (a), we sort with respect to the compliance of topology optimization without conditioning field and show the compliance from both methods. We observe that the overall conditioning field converged at lower compliance. The improvement of the conditioning field is more significant when the compliance is higher. The higher compliance occurs when the volume fraction is low. To visualize the convergence speed increase, Figure 6 shows the percentage improvement with the conditioning field. The percentage improvement is calculated by identifying the epoch at which the conditioning field reaches a lower compliance compared to the final compliance of the optimization without the conditioning field. The average performance increase with gamma filter is 37.6% and with log filter is 44.7%. With both filters, the performance increase is more pronounced with lower volume fraction examples. The log filter has a better overall performance increase across all solutions compared to gamma filter.\nWe compare our result against the result of \"88-lines\" by Andreassen et. al. [27] with a filtering radius of 1.5 to accommodate the problem size. We observe that when the compliance is low, FENN performed slightly better than SIMP. This is also consistent with the result reported by Chandrasekhar and Suresh [4]. For problems with relatively higher compliance, we observe that FENN with conditioning field can in some cases converge to a lower compliance than \"88-lines\". We note that in general, the Matlab code [27] takes around 0.2 to 1.5s to run whereas FENN and FENN with either conditioning field takes around 10s. However, a definite time comparison is difficult to establish as \"88-lines\" runs on Matlab whereas FENN runs on Python. In \"88-lines\" the optimizer is optimality criteria whereas FENN rely on Adam with a learning rate of 0.002.\nWe also observe that within the 200 examples with gamma filter, there are four cases where the conditioning field does not improve convergence speed. When plotting out example results in Figure 5, the examples with the load on the right bottom edge have lower performance increase with the conditioning field. On the other hand the examples with the load close to the center have a greater performance increase and a bigger gap in compliance. Our hypothesis is that the conditioning field approach performs best when the topology is complex. The complexity in geometry can occur based on the volume fraction constraint or the configuration of the boundary conditions. As the volume fraction decrease, thinner members are required which increase complexity of the structure. Whereas the geometries in Figure 5 (b) showed that for the same volume fraction, the length scale of the part is also dependent on the boundary condition." }, { "figure_ref": [], "heading": "Additional examples", "publication_ref": [], "table_ref": [], "text": "In Figure 7, we demonstrate the improvements resulting from the conditioning field on 4 complex boundary conditions in 2D. Cases 2, 3 and 4 in Figure 7 have obstacle regions (passive elements). Furthermore, in Figure 8, we analyze the impact of increasing the problem resolution (i.e. the FE mesh size) for the boundary conditions of case 1 in Figure 7, and observe similar improvements. We also show the improvements seen for a 3D problem in Figure 9. [28](standard 3d topology optimization code using SIMP). c) Using a neural network for density parametrization. d) Using a neural network for density parametrization and additional initial strain energy input with log filtering. We observe that FENN and FENN-logCF choose to create a shell around both side which gives an illusion that the volume fraction is higher. However, the volume fraction is also very close to the target volume fraction of 0.3 (both converged to 0.3003 specifically)." }, { "figure_ref": [], "heading": "LIMITATIONS AND FUTURE WORK", "publication_ref": [ "b2", "b26", "b3", "b28" ], "table_ref": [], "text": "We exploit the ability of neural networks as a universal function approximator to learn the additional mapping from the strain energy conditioning field to the density field output. Currently, the improvement with the conditioning field is not stable across all possible boundary condition configurations. More tuning and testing is required. Another aspect is that the current conditioning field remains fixed during optimization. This is due to the neural network's inability to encode temporal features. The strain energy field changes throughout the optimization, without the ability to capture the temporal feature of the changing strain energy field. As such, the neural network has difficulty providing stable optimization results.\nThis work also demonstrates promising results using a conditioning field for online neural topology optimization. The strain energy field may not be the best conditioning field out there and future work may focus on trying out different combinations of conditioning fields similar to TopologyGAN [3]. This conditioning field approach may demonstrate great synergy with the existing data-driven approach. Using the output of data-driven topology optimization as the conditioning field, online optimization can exploit a conditioning field that is much closer to the final solution. This reduces the complexity of the mapping function for which the neural network needs to learn. Since most data-driven approaches lack the guarantee of compliance minimization, online optimization can serve as the final post-processing step to connect disconnected edges and truly minimize the compliance.\nIn this work, we also compare our result against SIMP using \"88-lines\" [27]. However, it may be not possible to determine which one is definitively better or worse. As each program is tuned for different platforms and the possible combinations of problem configuration is endless. Covering all possible problem configurations to reach a conclusion may not be possible. There are exciting possibilities with neural network-based topology optimization, for example, since the design density field is represented by a continuous function, one can infinitely upsample the result to obtain very crisp boundaries [4]. We can also use the same neural network architecture with physics-informed neural networks to conduct mesh-free topology optimization without a FE solver [29] to name a few." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "We have proposed a novel approach for improving neural network based topology optimization using a conditioning field. Our method involves using a topology neural network that is trained on a case-by-case basis to represent the geometry for a single topology optimization problem. By incorporating the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network, we have demonstrated faster convergence speed can be achieved. Our results suggest that the efficacy of neural network based topology optimization can be further improved using a prior initial field on the unoptimized domain. We believe that our proposed conditioning field initialization approach could have broad applications in the field of topology optimization, particularly for problems that involve complex geometries." } ]
We propose conditioning field initialization for neural network based topology optimization. In this work, we focus on (1) improving upon existing neural network based topology optimization, (2) demonstrating that by using a prior initial field on the unoptimized domain, the efficiency of neural network based topology optimization can be further improved. Our approach consists of a topology neural network that is trained on a case by case basis to represent the geometry for a single topology optimization problem. It takes in domain coordinates as input to represent the density at each coordinate where the topology is represented by a continuous density field. The displacement is solved through a finite element solver. We employ the strain energy field calculated on the initial design domain as an additional conditioning field input to the neural network throughout the optimization. The addition of the strain energy field input improves the convergence speed compared to standalone neural network based topology optimization.
TOPOLOGY OPTIMIZATION USING NEURAL NETWORKS WITH CONDITIONING FIELD INITIALIZATION FOR IMPROVED EFFICIENCY A PREPRINT
[ { "figure_caption": ", y, e) σ: Sigmoid activation function K: Trainable frequency kernels, initialized in [-25, 25] W: Trainable weights, initialized to 0", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) Beam boundary condition (b) Density field snapshot at 50, 150, 250, 400, and 1000 epoch (c) Convergence history", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparing the convergence history for a beam example for with and without strain energy conditioning field. The result presented is using the gamma filtering. For FENN-logCF took 22.5s while FENN took 22.1s.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Boundary conditions and some sample topology optimization results with 0.3 volume fraction within the parametric study examples", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparing the final compliance and the speed of convergence for parametric study examples for with gamma and log filter and without the conditioning field. We also run the same problem configuration with \"88-lines\" by Andreassen et al. [27] denoted by the legend \"SIMP\" in the figure.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :Figure 8 :Figure 9 :789Figure7: Four additional test cases across varying boundary conditions and passive elements, all using 0.2 target volume fraction. Each example is 60×60 in resolution and takes around 30 seconds to run with no significant difference between with and without conditioning field. Log filtered conditioning field demonstrates good convergence speed increase.", "figure_data": "", "figure_id": "fig_6", "figure_label": "789", "figure_type": "figure" } ]
Hongrui Chen; Aditya Joglekar Levent; Burak Kara
[ { "authors": " Martin P Bendsøe", "journal": "Structural optimization", "ref_id": "b0", "title": "Optimal shape design as a material distribution problem", "year": "1989" }, { "authors": "M Zhou; Rozvany", "journal": "Computer methods in applied mechanics and engineering", "ref_id": "b1", "title": "The coc algorithm, part ii: Topological, geometrical and generalized shape optimization", "year": "1991" }, { "authors": "Zhenguo Nie; Tong Lin; Haoliang Jiang; Levent Burak; Kara ", "journal": "Journal of Mechanical Design", "ref_id": "b2", "title": "Topologygan: Topology optimization using generative adversarial networks based on physical fields over the initial domain", "year": "2021" }, { "authors": "Aaditya Chandrasekhar; Krishnan Suresh", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b3", "title": "Tounn: Topology optimization using neural networks", "year": "2021" }, { "authors": " Mp Bens0e; Kikuchi", "journal": "Meths. Appl. Mechs. Engng", "ref_id": "b4", "title": "Generating optimal topologies in structural design using a homogenization method, comp", "year": "1988" }, { "authors": "Grégoire Allaire; Anca-Maria Franc ¸ois Jouve; Toader", "journal": "Comptes Rendus Mathematique", "ref_id": "b5", "title": "A level-set method for shape optimization", "year": "2002" }, { "authors": "Y Wangm; X M Wang", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b6", "title": "Guo d ma level set method for structural topology optimizations", "year": "2003" }, { "authors": "Mike Xie; Grant P Steven; Y M Xie; Steven", "journal": "Springer", "ref_id": "b7", "title": "Basic evolutionary structural optimization", "year": "1997" }, { "authors": "Niels Rebekka V Woldseth; Andreas Aage; Ole Baerentzen; Sigmund", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b8", "title": "On the use of artificial neural networks in topology optimisation", "year": "2022" }, { "authors": "Saurabh Banga; Harsh Gehani; Sanket Bhilare; Sagar Patel; Levent Kara", "journal": "", "ref_id": "b9", "title": "3d topology optimization using convolutional neural networks", "year": "2018" }, { "authors": "Niels Aage; Erik Andreassen; Boyan Stefanov; Lazarov ", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b10", "title": "Topology optimization using petsc: An easy-to-use, fully parallel, open source topology optimization framework", "year": "2015" }, { "authors": "Yonggyun Yu; Taeil Hur; Jaeho Jung; In Gwun Jang", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b11", "title": "Deep learning for determining a near-optimal topological design without any iteration", "year": "2019" }, { "authors": "Keigo Nakamura; Yoshiro Suzuki", "journal": "", "ref_id": "b12", "title": "Deep learning-based topological optimization for representing a userspecified design area", "year": "2020" }, { "authors": "Mohammad Mahdi; Behzadi ; Horea T Ilies; ¸ ", "journal": "Computer-Aided Design", "ref_id": "b13", "title": "Real-time topology optimization in 3d via deep transfer learning", "year": "2021" }, { "authors": "Shuai Zheng; Zhenzhen He; Honglei Liu", "journal": "Thin-Walled Structures", "ref_id": "b14", "title": "Generating three-dimensional structural topologies via a u-net convolutional neural network", "year": "2021" }, { "authors": "Mazé Franc; Faez Ahmed", "journal": "", "ref_id": "b15", "title": "Diffusion models beat gans on topology optimization", "year": "" }, { "authors": "Aaditya Chandrasekhar; Krishnan Suresh", "journal": "", "ref_id": "b16", "title": "Length scale control in topology optimization using fourier enhanced neural networks", "year": "2021" }, { "authors": "Aaditya Chandrasekhar; Krishnan Suresh", "journal": "CAD Computer Aided Design", "ref_id": "b17", "title": "Multi-material topology optimization using neural networks", "year": "2021" }, { "authors": "Hao Deng; Albert C To", "journal": "Computational Mechanics", "ref_id": "b18", "title": "Topology optimization based on deep representation learning (drl) for compliance and stress-constrained design", "year": "2020" }, { "authors": "Hao Deng; Albert C To", "journal": "Journal of Mechanical Design", "ref_id": "b19", "title": "A parametric level set method for topology optimization based on deep neural network", "year": "2021" }, { "authors": "Jonas Zehnder; Yue Li; Stelian Coros; Bernhard Thomaszewski", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b20", "title": "Ntopo: Mesh-free topology optimization using implicit neural representations", "year": "2021" }, { "authors": " Hau T Mai; D Dai; Joowon Mai; Jaewook Kang; Jaehong Lee; Lee", "journal": "Engineering with Computers", "ref_id": "b21", "title": "Physics-informed neural energy-force network: a unified solver-free numerical simulation for structural optimization", "year": "2023" }, { "authors": "Stephan Hoyer; Jascha Sohl-Dickstein; Sam Greydanus", "journal": "", "ref_id": "b22", "title": "Neural reparameterization improves structural optimization", "year": "2019" }, { "authors": "Zeyu Zhang; Yu Li; Weien Zhou; Xiaoqian Chen; Wen Yao; Yong Zhao", "journal": "Computer Methods in Applied Mechanics and Engineering", "ref_id": "b23", "title": "Tonr: An exploration for a novel way combining neural network with topology optimization", "year": "2021" }, { "authors": "Martín Abadi; Ashish Agarwal; Paul Barham; Eugene Brevdo; Zhifeng Chen; Craig Citro; Greg S Corrado; Andy Davis; Jeffrey Dean; Matthieu Devin; Sanjay Ghemawat; Ian Goodfellow; Andrew Harp; Geoffrey Irving; Michael Isard; Yangqing Jia; Rafal Jozefowicz; Lukasz Kaiser; Manjunath Kudlur; Josh Levenberg; Dan Mane; Rajat Monga; Sherry Moore; Derek Murray; Chris Olah; Mike Schuster; Jonathon Shlens; Benoit Steiner; Ilya Sutskever; Kunal Talwar; Paul Tucker; Vincent Vanhoucke; Vijay Vasudevan; Fernanda Viegas; Oriol Vinyals; Pete Warden; Martin Wattenberg; Martin Wicke; Yuan Yu; Xiaoqiang Zheng", "journal": "", "ref_id": "b24", "title": "Tensorflow: Large-scale machine learning on heterogeneous distributed systems", "year": "2016" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b25", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Erik Andreassen; Anders Clausen; Mattias Schevenels; S Boyan; Ole Lazarov; Sigmund", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b26", "title": "Efficient topology optimization in matlab using 88 lines of code", "year": "2011" }, { "authors": "Kai Liu; Andrés Tovar", "journal": "Structural and Multidisciplinary Optimization", "ref_id": "b27", "title": "An efficient 3d topology optimization code written in matlab", "year": "2014" }, { "authors": "Aditya Joglekar; Hongrui Chen; Levent Burak; Kara ", "journal": "", "ref_id": "b28", "title": "Dmf-tonn: Direct mesh-free topology optimization using neural networks", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 72, 314.68, 146.35, 9.03 ], "formula_id": "formula_1", "formula_text": "X: Domain coordinate input, X = (x" }, { "formula_coordinates": [ 4, 256.96, 498.13, 283.04, 9.72 ], "formula_id": "formula_2", "formula_text": "E = (U e × S e ) • U e(2)" }, { "formula_coordinates": [ 5, 268.1, 432.81, 271.9, 9.72 ], "formula_id": "formula_3", "formula_text": "E c = min(E, P 99 )(3)" }, { "formula_coordinates": [ 5, 233.25, 466.05, 306.75, 23.44 ], "formula_id": "formula_4", "formula_text": "E γ = 0.4 E c -min(E c ) max(E c ) -min(E c ) γ (4)" }, { "formula_coordinates": [ 5, 229.13, 539.82, 310.87, 22.53 ], "formula_id": "formula_5", "formula_text": "E log = 0.4 log E -min(log E) max(log E) -min(log E)(5)" }, { "formula_coordinates": [ 5, 258.9, 703.11, 281.1, 23.23 ], "formula_id": "formula_6", "formula_text": "L = c c 0 + α( ρ V * -1) 2(6)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b10", "b40", "b32", "b30", "b5", "b31", "b37", "b4", "b6", "b20", "b4", "b30", "b20" ], "table_ref": [], "text": "Fonts with various styles play an important role in content display and distribution. Excellent font design is timeconsuming and labor-intensive. Recent machine learning innovations have made font generation possible, but how to automatically generate high-quality vector fonts remains a task of practical importance in the artistic and computer graphics and vision communities.\nBenefiting from the development of image generation techniques, mainstream font synthesis methods [2,12,24,41,42] could generate pixelated glyph images. Despite the promising quality, images of glyphs incur aliasing artifacts on edges when discretely sampled, and thus are not competent for high-quality rendering or printing at arbitrary resolutions. To alleviate this problem, some methods [7,34] adopt coordinate-based neural networks to model a glyph as a contiguous neural field, which have also shown great potential in modeling 3D geometry and scenes [7,30,32]. Although glyphs represented by the implicit field can be rendered at arbitrary resolutions, it is hard to preserve details in high-frequency regions such as edges and corners, not to mention the high computational costs as the network needs to be evaluated for every pixel. Researchers have made much effort to directly synthesize vector fonts [4, 27,33,39] in recent years, with the main difficulty lying in finding a representation of vector graphics that can be encoded or decoded effectively in a deep learning framework. One typical approach represents a vector shape as a sequence of drawing commands and adopts sequence modeling techniques such as recurrent networks and transformers. The drawbacks are twofold: (1) Modeling command sequences can be much harder than images. There are infinitely many command sequences that correspond to the same-looking shape, which brings ambiguities in learning and makes it hard to construct an effective manifold for valid glyph shapes. (2) Groundtruth drawing commands are often required to provide sufficient supervision for high-quality modeling and synthesis.\nTo overcome these challenges, we first propose a dualpart vector font representation, where each glyph shape is the union of a fixed number of dual parts. Each dual part is formed by the subtraction of a \"positive\" and a \"negative\" geometric primitive. While there are many choices for the geometric primitives [6,8,25], we adopt closed Bézier paths for their great representational ability. They are also widely supported in digital font formats which makes it easy to convert our representation to these formats for practical use. We reduce the problem of predicting complicated drawing command sequences to predicting multiple basic primitives. From this perspective, both manifold learning and latent space interpolation become more feasible.\nBased on the dual-part representation, we introduce Du-alVector, a method to learn such a representation for highquality font modeling and synthesis from only glyph images without any vector supervision. A straightforward way to achieve this is to directly optimize the parameters of the Bézier curves with differentiable rendering techniques for vector graphics [22]. However, this approach easily gets stuck in the local minima as valuable gradients are only defined at shape intersections. Taking inspiration from implicit field training for 2D and 3D shapes [6,25,32], we supervise the occupancy value derived from the analytical expression of the Bézier curves and adopt an initialization strategy based on unsigned distance field (UDF) to provide dense gradients across the entire pixel space. For local detail fidelity, we also train a glyph image generation model and devise a subsequent contour refinement step to align the contour of the vector shape with that of the image by differentiable rendering [22]. We compare our approach with state-of-the-art methods in font modeling and generation and demonstrate the superior quality of our vector font outputs. Our main contributions are:\n• A new dual-part font representation based on boolean operations of Bézier paths, which enables efficient shape modeling and unsupervised manifold learning.\n• A method named DualVector that models both the dual-part and pixelated representation, and introduces a contour refinement step to obtain vector fonts with richer details as well as a UDF initialization strategy for better convergence.\n• DualVector achieves state-of-the-art quality in font modeling and generation, with outputs that can be easily converted to common digital font formats." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Glyph Image Generation", "publication_ref": [ "b8", "b12", "b14", "b10", "b13", "b38", "b0", "b21", "b40", "b0", "b36" ], "table_ref": [], "text": "Benefiting from the development of image generation [10,14,16], either black-white [12,15,37,40] or artistic glyph image generation [2,23,41,42] is well explored in the past decade. MC-GAN [2] synthesized ornamented glyphs for capital letters in an end-to-end manner from a small subset of the same style. Attr2Font [38] generated visually pleasing results according to style attributes and content references. Most of these methods follow the disentanglement of style and content codes, and so does the image generation branch in DualVector. But these glyph image generation efforts do not step outside the 2D pixelated representation which sets an upper bound for rendering quality, efficiency, and practicality." }, { "figure_ref": [], "heading": "Image Rasterization and Vectorization", "publication_ref": [ "b29", "b1", "b15", "b33", "b16", "b42", "b3", "b34", "b20", "b7", "b20" ], "table_ref": [], "text": "Rasterization and vectorization are dual problems in image processing. Previous efforts on rasterization have typically focused on anti-aliasing [9,11,31] and highperformance rendering on modern graphics hardware [3,17,21,26]. Traditional image vectorization approaches rely on region segmentation, line extraction, and contour tracing for gray-scale images [35], pixel arts [18], animations [44] and general images [1, 5,19,36,45,46]. Recently, researchers have turned to bridging the raster and vector domains [22,29], enabling existing well-established algorithms for raster images to be also applied to vector representations. DiffVG [22] is a differentiable rendering technique for vector graphics enabling the optimization of vector primitive parameters based on raster criteria. In our work, it is employed to align the vector and pixel representation in the contour refinement step." }, { "figure_ref": [], "heading": "Vector Font Generation", "publication_ref": [ "b5", "b32", "b26", "b31", "b20", "b37" ], "table_ref": [], "text": "SVG-VAE [27] was the first attempt to build a sequential generative model on SVG fonts. DeepSVG [4] developed a hierarchical transformer-based generative model for complex SVG icons generation and interpolation.Exploiting implicit neural representation, multi-implicits [34] modeled fonts with a permutation-invariant set of learned implicit functions which can preserve font features, such as smooth edges and sharp corners, and generate good interpolation results. Liu et al. [25] proposed a primitive-based representation, which views glyph shapes as the combination of primitives enclosed by implicit curves. Implicit representationbased methods can convert the zero-level set of their output fields to contour representation with the 2D marching cube [28] algorithm, although it produces multiple segmented lines and lacks accuracy and efficiency in rendering and editing. Im2Vec [33] can synthesize complex vector graphics with varying topologies without vector super- vision. It rendered layered Bézier paths predicted from image features using DiffVG [22] and trains the system in an auto-encoding manner. DeepVecFont [39] adopted a dualmodality learning framework that utilizes both sequence and image features for high-quality vector font synthesis.\n(P i , Q i )} N i=1 {P i -Q i } N i=1 O/ Ô L ref ine (a) Vector Branch (c) Contour Refinement (b) Image Branch Subtract Union P 1 Q 1 P 2 Q 2\nBut the results it produces may have the wrong topology, including self-intersection or invalid paths. In contrast to these methods, our approach does not require vector supervision and is very efficient in vector representation, with high quality glyph generation." }, { "figure_ref": [ "fig_1" ], "heading": "DualVector", "publication_ref": [], "table_ref": [], "text": "In this section, we first illustrate the proposed dual-part representation for fonts (Sec. 3.1) and then introduce the components of DualVector for training this representation from glyph images (Secs. 3.2 to 3.4) and how the model can be used for font reconstruction and generation (Sec. 3.5). In DualVector, we adopt a joint representation of vector and image for fonts, as shown in Fig. 2. The latent space is shared by both modalities, and we associate each glyph with a latent code z ∈ R d produced by the task-specific encoding process. The latent code z is fed into the two decoding branches to obtain a dual-part representation and an image representation respectively. The output of the vector branch is further processed by a contour refinement step to generate the final contour representation under the image guidance." }, { "figure_ref": [], "heading": "Dual-Part Representation", "publication_ref": [ "b20", "b29", "b32" ], "table_ref": [], "text": "In our dual-part representation, we consider the closed parametric Bézier paths as our basic primitives for several \n{B i,j (x i,2j-1 , x i,2j , x i,2j+1 )} M j=1 (x i,2M +1 = x i,1\n). We introduce the occupancy field of a closed path P i as\nO Pi (x) = 1, x ∈ P i 0, x / ∈ P i(2)\nThe entire glyph shape g could simply be denoted as the union of all the paths ∪ i P i . We normalize the coordinate range of the glyph to [-1, 1] 2 . Thus given any point x on the canvas [-1, 1] 2 , we can determine whether it is inside the represented glyph shape by a maximum operation, leading to the occupancy field of g:\nO(x) = max i O Pi (x) = 1, x ∈ ∪ i P i 0, x / ∈ ∪ i P i(3)\nHowever, we find that the above representation can model fonts with convex curves on the contour but struggles to reconstruct shapes with holes. Therefore, in practice, we pair each path P i with a negative path Q i , and term this pair (P i , Q i ) as a \"dual part\". In this way, g is represented as\n∪ i (P i -Q i ).\nThe occupancy field O is then derived as:\nO(x) = max i [min(O Pi (x), 1-O Qi (x))] = 1, x ∈ g 0, x / ∈ g (4)\nEven though O is the analytical occupancy field of g, it is not differentiable w.r.t. the parameters of the paths. Therefore, in order to apply gradient descent to learn such a representation, we calculate the approximate occupancy field Ô from the signed distance field (SDF) of g. Since the distance d(p; B) from any point p to a Bézier curve B can be derived analytically, the SDF of a path can be calculated differentiably w.r.t. its control points as follows:\ns Pi (x) = [2O Pi (x) -1] min j d(x; B i,j )(5)\nFollowing previous vector graphics rasterization techniques [22,31,34], we approximate O by analytical pixel prefiltering with a parabolic kernel α:\nÔPi (x) = α(s Pi (x)) Ô = max i [min( ÔPi , 1 -ÔQi )](6)" }, { "figure_ref": [], "heading": "Vector Branch", "publication_ref": [], "table_ref": [], "text": "The vector branch takes a latent code z and outputs a set of dual-parts that represent a vector glyph. The path decoder D P directly predicts the control points of the positive paths {P i } N i=1 and the negative ones\n{Q i } N i=1 from z: {x i,j } = D P (z), 1 ≤ i ≤ 2N, 1 ≤ j ≤ 2M(7)\nwhere {x i,j } N i=1 are control points for {P i } N i=1 and {x i,j } 2N i=N +1 are control points for {Q i } N i=1 ." }, { "figure_ref": [], "heading": "Image Branch", "publication_ref": [], "table_ref": [], "text": "To combine with the advantages of image generation approaches, we train an auxiliary image branch that maps z to a pixelated image I of the glyph it represents. Here we adopt a CNN-based decoder D I :\nI = D I (z)(8)\nwhere I is a gray-scale image in shape H × W . It provides detailed shape guidance to refine the vector shape contour." }, { "figure_ref": [], "heading": "Contour Refinement", "publication_ref": [ "b18", "b20" ], "table_ref": [], "text": "Although the dual-part representation produced by the vector branch can already be rendered, there still exists a gap between it and modern contour representations for fonts. Therefore, we first convert it to the contour of the glyph ∂O with an off-the-shelf tool Paper.js [20] which can perform arbitrary boolean operations on SVG paths. Here the contour ∂O is a set of K closed Bézier paths. ∂O = {C 1 , C 2 , ..., C K } (9) where K denotes the number of independent paths. Each individual part or hole increases K by 1. C i is a closed Bézier path composed of l i quadratic Bézier segments { Bi,j (x i,2j-1 , xi,2j , xi,2j+1 )} li j=1 .\nAfter this conversion, we exploit the image generation results to further refine ∂O with differentiable rendering techniques [22]. I could naturally serve as a pixelated version of the occupancy field O. During the refinement, we devise several strategies to improve the quality and efficiency of our representation: Subdivision To improve the representational ability, we split long Bézier curves (with a length greater than 10% of the canvas width) at their midpoints.\nSimplification To improve the efficiency of the contour representation, we replace the Bézier curves with a line segment connecting its two endpoints if they are close enough. For a Bézier curve (a, b, c), we simply define how faithfully the segment ac could represent the curve by the angle between ⃗ ba and ⃗ bc. Also, we examine if there exist two adjacent Bézier segments that could be combined as a single one. Since all quadratic Bézier curves are parts of a parabola curve, we derive the parameters of the corresponding parabola curves from the control points and combine the two curves if their parameters are close enough. We also aggregate the endpoints of lines and curves that are too short.\nPruning Prior to refinement, some Bézier paths on the contour may enclose extremely small regions which will confuse the refinement process and produce redundant curves that eventually converge to a single point. We trim those closed paths C i with areas under a certain threshold.\nMore implementation details of these strategies can be found in the supplemental material." }, { "figure_ref": [], "heading": "Implementation and Training", "publication_ref": [], "table_ref": [], "text": "In this subsection, we will dive into how we utilize the proposed representation in both the font reconstruction and generation tasks." }, { "figure_ref": [], "heading": "Optimization Objectives", "publication_ref": [ "b41", "b14", "b20" ], "table_ref": [], "text": "Optimization objectives can be divided into those at training time and those at inference time. The training-time objectives are task-related.\nFont reconstruction. For the font reconstruction task, the input to our system is a gray-scale image of a glyph and the output is the corresponding vector representation. The pixelated image I g of the glyph g is first passed through an image encoder E to obtain the latent code z: z = E(I g ) (10) The latent code is then decoded by our vector branch and image branch into dual-parts {(P i , Q i )} N i=1 and a pixelated output I. We optimize the trainable modules E, D I and D P simultaneously with the following loss functions: L = λ P L P + λ I L I (11) where we adopt an extra perceptual loss [43]:\nL P = E x∼[-1,1] 2 [|| Ô(x) -I g (x)|| 1 ] L I = ||I -I g || 2 + LPIPS(I, I g )(12)\nFont generation. The input to the font generation task is multiple style reference images {R i } N R i=1 along with the corresponding character label T . The number of references N R may vary during training to allow the model to accept a variable number of references during inference. The style references are first mapped to their features {f i } N R i=1 with an image encoder E f . We then apply stacked self-attention layers (SA) to aggregate these features to a style latent code z style with a variational loss to enable font sampling. Character labels are mapped to vector representations by a learnable label embedding layer and then fused with z style to yield the latent code z T of the target glyph. Then z T is fed to the decoders to generate the glyph, the same as the reconstruction task. The whole process can be described as follows:\nE f (R i ) µ, σ = SA(f 1 , f 2 , ..., f N R ) z style = µ + ϵσ, ϵ ∼ N (0, 1) z T = FFN([z style , T ])(13)\nTo ease training, we use the pre-trained E from the reconstruction task to encode ground truth output g T and train the encoder with the following guidance:\nL latent = ||z T -E(g T )|| 2(14)\nTo enable font sampling, we adopt a Kullback-Leibler divergence loss [16]:\nL kl = KL(N (µ, σ 2 )||N (0, 1)) (15) Next, the encoder is fine-tuned along with the pre-trained D I and D P from the reconstruction task using the loss in 11 with L kl . Refinement Both the reconstruction and generation tasks share the same contour refinement process. Given ∂O and I, the refinement process produces the optimized glyph contour in SVG format by inference-time optimization. In the optimization, we utilize DiffVG [22], denoted as DR, to render the contour in a differentiable manner. The control points of O are refined to minimize the following loss function:\nL ref ine = L ras + λ reg L reg (16\n)\nwhere L render is the photometric loss between the rasterized image and the image branch output. L ras = ||DR(∂O) -I|| 1 (17) The regularization term limits the overall length of O.\nL reg = K i=1 len(C i )(18)" }, { "figure_ref": [], "heading": "Unsigned Distance Field Warm-up", "publication_ref": [], "table_ref": [], "text": "Although the vector branch is end-to-end trainable, the gradients only appear near the boundaries. Therefore, to overcome the vanishing gradient problem, we add an extra loss based on the unsigned distance field (UDF) of the vector parts to warm up the system training.\nIn general, for two closed curves c 1 , c 2 , we cannot obtain the exact SDF of the shape after arbitrary boolean operations on the curves by point-wise operations on their respective SDFs. For the union of two SDFs, as an approximation, we have\ns c1∪c2 (x) = min(s c1 (x), s c2 (x)), x / ∈ c 1 ∪ c 2 ≤ min(s c1 (x), s c2 (x)), x ∈ c 1 ∪ c 2 (19)\nThe min(•) operation gives a correct SDF outside the shape but an inaccurate SDF inside the shape. Similarly, the max(•) gives a correct SDF inside and an inaccurate SDF outside for the intersection of two SDFs. Therefore, it is hard to calculate the accurate SDF s O for the glyph. Fortunately, we only aim to provide a coarse initialization for the dual parts, preventing the optimization process from falling into local minima early in the training. We find that using an approximate unsigned distance field works well to achieve this goal. Here we define the UDF u as u(x) = max(0, s(x)) (20) where min(•) gives a correct global UDF for unions of shapes. The approximate UDF ûg is computed as\nûg = min i [max(u Pi , u Qi )](21)\nwhere u Pi (x) = max(s Pi (x), 0)\nu Qi (x) = max(-s Qi (x), 0)(22)\nWe apply the following losses to warm up the training process for the font reconstruction task:\nL = λ P L P + λ I L I + λ u L u L u = E x∼[-1,1] 2 [||û g (x) -u g (x)|| 1 ](23)\nwhere the ground truth UDF u g is derived from the pixel image g." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b5", "b37" ], "table_ref": [], "text": "We use the SVG-Fonts dataset from SVG-VAE [27] for all the experiments. Following DeepVecFont [39], we sample a subset of the dataset with 8035 fonts for training and 1425 fonts for evaluation. The resolution of input glyph images is set to 128 × 128. Please refer to the supplementary material for implementation details." }, { "figure_ref": [ "fig_3" ], "heading": "Font Reconstruction", "publication_ref": [ "b31", "b32", "b31", "b32", "b31", "b31", "b32" ], "table_ref": [], "text": "In this experiment, we investigate how faithfully different methods can reconstruct the input glyph images with vector outputs, which can also be regarded as font vectorization. Im2Vec [33] and our method directly produce vector representations from glyph images, while for Multi-Implicits [34], we adopt the 2D marching cube algorithm and extract the contours. As can be seen from a qualitative comparison in Fig. 3, our representation could faithfully reconstruct global shapes as well as local details from the pixelated input. Im2Vec [33] could only preserve coarse shapes and suffers from wrong topologies as shown in the \"A\" example as it is easy to fall into local optimum without the global gradient. Multi-Implicits [34] results in finer details than Im2Vec [33] but still exhibit unsmooth edges and rounded corners. To quantitatively evaluate how accurate the vector outputs are, we calculate three metrics, namely SSIM, L1, and s-IoU, between the rendered images and the ground truth images under different resolutions. Tab. 1 shows that our method surpasses the two alternatives by a clear margin on all metrics. Using parametric curves as primitives, our method, along with Im2Vec [33], maintains a stable L1 error across all resolutions for the smooth edges, while for Multi-Implicits [34] the L1 error gets larger as the resolution increases, resulting from the high-frequency noises at the extracted boundaries." }, { "figure_ref": [], "heading": "Font Generation", "publication_ref": [ "b37", "b32", "b31", "b32", "b32", "b31", "b36", "b31", "b36", "b32", "b37", "b31", "b32", "b36", "b32", "b36", "b31", "b32", "b37", "b32", "b31", "b37" ], "table_ref": [], "text": "To evaluate the generation ability of our method, we adopt the few-shot font generation setting, where a subset of glyphs with the same style is given and models are trained to produce a complete set of glyphs that are consistent with the input glyphs in style. This task is also dubbed \"font complement\" or \"font style transfer\" in other works. We compare our method with several popular methods in font generation: DeepVecFont [39], Multi-Implicits [34],\nTable 1. Quantitative comparison with Im2Vec [33] and Multi-Implicits [34] on three image-level metrics at different resolutions for the font reconstruction task. The gray scale is normalized to [0,1]. s-IoU, from [34], measures the overlap. 0.8957 0.0504 0.6851\nIm2Vec [33] and Attr2Font [38]. For a fair comparison, we use the same four characters, 'A', 'B', 'a', and 'b' as style references for each font and evaluate image-level metrics on the generated characters. Since Im2Vec [33] is an image vectorization method, we apply it to the images generated by Attr2Font [38]. Multi-Implicits [34] follows an autodecoder architecture, so we freeze the decoder and find the optimal latent vector for a font by minimizing the losses between the given style references and the predicted ones using gradient descent. As illustrated in Fig. 4, DeepVecFont [39] may generate characters with wrong topologies due to the incorrect initial contour prediction, such as 'A', 'L', and 'Z'. The fixed topology in the optimization process makes it difficult to produce the necessary details, which are particularly symbolic in serif fonts. In addition, unconstrained contour generation may also produce self-intersections, like the 'g' and 'r', which are not easy to fix in subsequent processing. Im2Vec [33] tends to produce rounded corners and is only able to produce the coarse style, lacking details. Multi-Implicits [34] generates vector results from implicit signed distance fields (SDF) through a 2D Marching cube algorithm, leading to free-form contours. Therefore the edges are not smooth enough and have unpleasing noises, such as the 'g' and 'x' in the figure. Attr2Font [38] can generate generally satisfactory font images, but it cannot maintain high-quality rendering when scaling.\nCompared with the above methods, our method is capable of not only grasping the overall style of the input references but also generating new glyphs with clear boundaries and sharp edges, achieving optimal visual quality. The efficient dual-part representation along with the differentiable occupancy supervision establishes a link between pixelated images and vector graphics, allowing the unsupervised synthesis of vector fonts with aligned shapes and correct topologies. The refinement step allows the composition of the contour to be freely changed and thus ensures Figure 4. Font generation results of Attr2Font, Im2Vec, Multi-Implicits, DeepVecFont, and DualVector. The input style references are marked with red boxes. We trace the zero-level sets of SDFs in a piece-wise linear way to get SVGs from Multi-Implicits [34]. Typical failure cases are marked in blue dashed boxes. Some details are zoomed-in in the leftmost column. Table 2. Quantitative comparison of image-level metrics of Attr2Font [38], Im2Vec [33], Multi-Implicits [34], DeepVecFont [39] and DualVector in the font complement experiment. Tab. 3. Multi-Implicits [34] uses a large number of short line segments to represent the boundary due to the marching cube algorithm. Im2Vec [33] represents a shape with 4 parts each enclosed by 20 cubic Bézier curves, lacking flexibility and details. Our method performs boolean operations on the quadratic Bézier curves from dual parts, achieving comparable compactness with human-designed fonts and outputs from methods with vector supervision [39]." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "To demonstrate the effectiveness of several key design choices in our method, we conduct ablation studies on the font reconstruction task. We investigate the following degraded variants: (a) replace the union of dual parts with the union of all the paths; (b) train without the UDF warm-up process; and (c) remove the refining strategy during refinement. We show the comparison results of a representative We also experimentally investigate the effect of different (N, M )s on the representation capacity. We choose several settings of (N, M ) and train only the vector branch (i.e. no contour refinement) with the same number of epochs to reconstruct the input. Tab. 5 shows that M = 4 is optimal and the reconstruction accuracy of the vector branch increases as N increases. The accuracy does not gain too much changing N from 6 to 8. So considering the time cost for training and inference, we select (N, M ) = (6, 4) in the above experiments. " }, { "figure_ref": [], "heading": "Font Sampling and Interpolation", "publication_ref": [], "table_ref": [], "text": "In the font generation task, the style latent code z style is trained with a variational loss L kl , enabling generating fonts of new styles by sampling z style from N (0, 1). We show in Fig. 1 " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b37", "b11" ], "table_ref": [], "text": "The contour refinement step requires gradient descent during inference and therefore imposes a large time overhead, similar to DeepVecFont [39]. The process could potentially be accomplished by forwarding inferences of some generation models, such as the diffusion models [13]. Another limitation is that DualVector only focuses on the synthesis of glyphs on a fixed-size canvas, without considering the kerning between them, making the spacing between characters less natural. This may be solved by postprocessing the SVG with some automatic kerning methods or by learning through a data-driven approach." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present DualVector as a new representation and generation method for vector fonts. DualVector models a glyph as dual parts and introduces a contour refinement step to add pleasing details to the contours of the dual parts. Our method is able to generate vector fonts without vector supervision, which can be directly converted to common digital font formats for content presentation and distribution. Comparisons with different methods of font reconstruction and generation tasks demonstrate that our approach produces fonts with the best quality among all the alternatives. In the future, more plausible initialization and supervision could be exploited to generalize this representation to more complex fonts (e.g. Chinese fonts). We hope this research contributes to the typeface design industry and provide inspiration for broader works on the understanding and generation of 2D man-made content such as cartoon characters and icons." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement This work was supported by the Natural Science Foundation of China (Project Number 61832016), and Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology." } ]
The five boxing wizards jump quickly. The five boxing wizards jump quickly. The five boxing wizards jump quickly. The five boxing wizards jump quickly. The five boxing wizards jump quickly. Figure 1. High-quality vector fonts (copyable on electronic devices) synthesized by DualVector (left), with smoothly-interpolated styles (right). Both sentences are pangrams. Please zoom in for details.
DualVector: Unsupervised Vector Font Synthesis with Dual-Part Representation
[ { "figure_caption": "Figure 2 .2Figure 2. DualVector contains the following components: (a) a vector branch that maps the latent code z to several closed Bézier paths which are further gathered to form a global shape of the glyph; (b) an image branch that generates pixelated images with faithful details; (c) a contour refinement step that obtains the contour via boolean operations on dual-parts and optimizes it with the image guidance at inference time. Both the vector and the image branch are trained using the glyph images. Dual parts are distinguished by their colors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "reasons: (1) Bézier paths are powerful enough to approximate various shapes; (2) The contour of a closed shape produced by boolean operations on Bézier paths is still a set of Bézier paths, i.e., the closure property; (3) Bézier paths are widely used in modern font design, making the learned representation highly applicable. For convenience, we use a triplet of control points (a, b, c) to denote a quadratic Bézier curve B(•; a, b, c) determined by the control points: B(t; a, b, c) = (1 -t) 2 a + (1 -t)tb + t 2 ct ∈ [0, 1] (1) Since a glyph often consists of a number of strokes with similar appearances, we propose to represent each glyph g with N closed Bézier paths {P i } N i=1 . Each path P i is defined by M end-to-end connected quadratic Bézier curves", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Comparison of font reconstruction quality. We show the input glyph image row by row, our reconstructed image I from the image branch, dual-parts O from the vector branch, and our final result after contour refinement, as well as vector outputs from alternative methods.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. The contours of an \"M\" reconstructed by our method, compared with three degraded variants.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "case in Fig. 5 .5Without the dual-part representation, variant (a) cannot reconstruct concave regions very well, as they can be hard to represent by the union of simply closed paths. Without UDF initialization, variant (b) produces contours with useless curves that are hard to be eliminated during optimization. Without the refining strategy, variant (c) may generate inaccurate edges with zigzags. Quantitative results in Tab. 4 also verify the quality improvements brought by the three design choices.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "(left) and Fig. 6 multiple styles of fonts sampled. Please refer to our supplementary material for more examples. Also, benefiting from our dual-part representation, we can perform smooth interpolation between arbitrary font styles as demonstrated in Fig. 1 (right).", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. New fonts with style codes sampled in N (0, 1).", "figure_data": "", "figure_id": "fig_7", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "CommandMLQCTotal↓Ours 1.428.2111.06020.69DeepVecFont [39] 1.378.2309.6819.28Multi-Implicits [34] 6.16 1446.53001452.69Im2Vec [33]4008084Human-Designed 1.389.1708.5119.06", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative results of reconstruction accuracy for different model variants in ablation studies. All the output vector glyphs are rendered at 512 × 512 to compute the L1 error.", "figure_data": "L1↓#Lines↓ # Curves↓(a) w/o dual parts0.014312.2314.06(b) w/o warm-up0.014611.1913.12(c) w/o refining strategy 0.0148-22.07Full Model0.01368.9711.81", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Different reconstruction settings and metrics. The imagebased metrics use a resolution of 512 × 512.", "figure_data": "N M SSIM↑L1↓s-IOU↑240.9262 0.0282 0.8338440.9292 0.0259 0.8464640.9303 0.0250 0.8512840.9306 0.0247 0.8525630.9293 0.0256 0.8480650.9300 0.0252 0.8498", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Ying-Tian Liu; Zhifei Zhang; Yuan-Chen Guo; Matthew Fisher; Zhaowen Wang; Song-Hai Zhang
[ { "authors": "Samaneh Azadi; Matthew Fisher; Vladimir Kim; Zhaowen Wang; Eli Shechtman; Trevor Darrell", "journal": "", "ref_id": "b0", "title": "Multi-content gan for few-shot font style transfer", "year": "2018" }, { "authors": "Vineet Batra; Mark J Kilgard; Harish Kumar; Tristan Lorach", "journal": "ACM Trans. Graph", "ref_id": "b1", "title": "Accelerating vector graphics rendering using the graphics hardware pipeline", "year": "2015-07" }, { "authors": "Alexandre Carlier; Martin Danelljan; Alexandre Alahi; Radu Timofte", "journal": "", "ref_id": "b2", "title": "Deepsvg: A hierarchical generative network for vector graphics animation", "year": "2020-12-06" }, { "authors": "", "journal": "Inc. Vector magic", "ref_id": "b3", "title": "", "year": "" }, { "authors": "Zhiqin Chen; Andrea Tagliasacchi; Hao Zhang", "journal": "IEEE", "ref_id": "b4", "title": "Bspnet: Generating compact meshes via binary space partitioning", "year": "2020" }, { "authors": "Zhiqin Chen; Hao Zhang", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b5", "title": "Learning implicit fields for generative shape modeling", "year": "2019" }, { "authors": "Boyang Deng; Kyle Genova; Soroosh Yazdani; Sofien Bouaziz; Geoffrey E Hinton; Andrea Tagliasacchi", "journal": "IEEE", "ref_id": "b6", "title": "Cvxnet: Learnable convex decomposition", "year": "2020" }, { "authors": "A E Fabris; A R Forrest", "journal": "ACM Press/Addison-Wesley Publishing Co", "ref_id": "b7", "title": "Antialiasing of curves by discrete pre-filtering", "year": "1997" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron C Courville; Yoshua Bengio", "journal": "", "ref_id": "b8", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Chris Green", "journal": "", "ref_id": "b9", "title": "Improved alpha-tested magnification for vector textures and special effects", "year": "2007" }, { "authors": "Hideaki Hayashi; Kohtaro Abe; Seiichi Uchida", "journal": "Knowl. Based Syst", "ref_id": "b10", "title": "Glyphgan: Style-consistent font generation based on generative adversarial networks", "year": "2019" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Phillip Isola; Jun-Yan Zhu; Tinghui Zhou; Alexei A Efros", "journal": "", "ref_id": "b12", "title": "Image-to-image translation with conditional adversarial networks", "year": "2017" }, { "authors": "Yue Jiang; Zhouhui Lian; Yingmin Tang; Jianguo Xiao", "journal": "ACM", "ref_id": "b13", "title": "Dcfont: an end-to-end deep chinese font generation system", "year": "2017" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b14", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Yoshiyuki Kokojima; Kaoru Sugita; Takahiro Saito; Takashi Takemoto", "journal": "Association for Computing Machinery", "ref_id": "b15", "title": "Resolution independent rendering of deformable vector objects using graphics hardware", "year": "2006" }, { "authors": "Johannes Kopf; Dani Lischinski", "journal": "ACM Transactions on Graphics (Proceedings of SIGGRAPH", "ref_id": "b16", "title": "Depixelizing pixel art", "year": "2011" }, { "authors": "Gregory Lecot; Bruno Levy", "journal": "Eurographics Association", "ref_id": "b17", "title": "Ardeco: Automatic region detection and conversion", "year": "2006" }, { "authors": "Jürg Lehni; Jonathan Puckey", "journal": "", "ref_id": "b18", "title": "The swiss army knife of vector graphics scripting", "year": "2021" }, { "authors": "Rui Li; Qiming Hou; Kun Zhou", "journal": "ACM Trans. Graph", "ref_id": "b19", "title": "Efficient gpu path rendering using scanline rasterization", "year": "2016-11" }, { "authors": "Tzu-Mao Li; Michal Lukác; Michaël Gharbi; Jonathan Ragan-Kelley", "journal": "ACM Trans. Graph", "ref_id": "b20", "title": "Differentiable vector graphics rasterization for editing and learning", "year": "2005" }, { "authors": "Xiang Li; Lei Wu; Xu Chen; Lei Meng; Xiangxu Meng", "journal": "IEEE", "ref_id": "b21", "title": "Dse-net: Artistic font image synthesis via disentangled style encoding", "year": "2022" }, { "authors": "Xianming Lin; Jie Li; Hualin Zeng; Rongrong Ji", "journal": "Multim. Tools Appl", "ref_id": "b22", "title": "Font generation based on least squares conditional generative adversarial nets", "year": "2019" }, { "authors": "Ying-Tian Liu; Yuan-Chen Guo; Yi-Xiao Li; Chen Wang; Song-Hai Zhang", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b23", "title": "Learning implicit glyph shape representation", "year": "2022" }, { "authors": "Charles Loop; Jim Blinn", "journal": "", "ref_id": "b24", "title": "Resolution independent curve rendering using programmable graphics hardware", "year": "2005" }, { "authors": "Gontijo Raphael; David Lopes; Douglas Ha; Jonathon Eck; Shlens", "journal": "IEEE", "ref_id": "b25", "title": "A learned representation for scalable vector graphics", "year": "2019-11-02" }, { "authors": "William E Lorensen; Harvey E Cline", "journal": "SIG-GRAPH Comput. Graph", "ref_id": "b26", "title": "Marching cubes: A high resolution 3d surface construction algorithm", "year": "1987-08" }, { "authors": "Xu Ma; Yuqian Zhou; Xingqian Xu; Bin Sun; Valerii Filev; Nikita Orlov; Yun Fu; Humphrey Shi", "journal": "", "ref_id": "b27", "title": "Towards layerwise image vectorization", "year": "2022" }, { "authors": "Ben Mildenhall; P Pratul; Matthew Srinivasan; Jonathan T Tancik; Ravi Barron; Ren Ramamoorthi; Ng", "journal": "Springer", "ref_id": "b28", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "Diego Nehab; Hugues Hoppe", "journal": "ACM Trans. Graph", "ref_id": "b29", "title": "Random-access rendering of general vector graphics", "year": "2008" }, { "authors": "Jeong Joon Park; Peter Florence; Julian Straub; Richard A Newcombe; Steven Lovegrove", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b30", "title": "Deepsdf: Learning continuous signed distance functions for shape representation", "year": "2019" }, { "authors": "Pradyumna Reddy; Michaël Gharbi; Michal Lukác; Niloy J Mitra", "journal": "Computer Vision Foundation / IEEE", "ref_id": "b31", "title": "Im2vec: Synthesizing vector graphics without vector supervision", "year": "2021" }, { "authors": "Pradyumna Reddy; Zhifei Zhang; Zhaowen Wang; Matthew Fisher; Hailin Jin; Niloy J Mitra", "journal": "", "ref_id": "b32", "title": "A multi-implicit neural representation for fonts", "year": "2007" }, { "authors": "Peter Selinger", "journal": "", "ref_id": "b33", "title": "Potrace: a polygon-based tracing algorithm", "year": "2003" }, { "authors": "Jian Sun; Lin Liang; Fang Wen; Heung-Yeung Shum", "journal": "ACM Trans. Graph", "ref_id": "b34", "title": "Image vectorization using optimized gradient meshes", "year": "2007-07" }, { "authors": "Yuchen Tian", "journal": "", "ref_id": "b35", "title": "zi2zi: Master chinese calligraphy with conditional adversarial networks", "year": "2017" }, { "authors": "Yizhi Wang; Yue Gao; Zhouhui Lian", "journal": "ACM Trans. Graph", "ref_id": "b36", "title": "Attribute2font: Creating fonts you want from attributes", "year": "2020-07" }, { "authors": "Yizhi Wang; Zhouhui Lian", "journal": "ACM Trans. Graph", "ref_id": "b37", "title": "Deepvecfont: synthesizing high-quality vector fonts via dual-modality learning", "year": "2008" }, { "authors": "Yankun Xi; Guoli Yan; Jing Hua; Zichun Zhong", "journal": "ACM", "ref_id": "b38", "title": "Jointfontgan: Joint geometry-content GAN for font generation via few-shot learning", "year": "2020" }, { "authors": "Shuai Yang; Jiaying Liu; Wenjing Wang; Zongming Guo", "journal": "", "ref_id": "b39", "title": "Tet-gan: Text effects transfer via stylization and destylization", "year": "2019" }, { "authors": "Gao Yue; Guo Yuan; Lian Zhouhui; Tang Yingmin; Xiao Jianguo", "journal": "ACM Trans. Graph", "ref_id": "b40", "title": "Artistic glyph image synthesis via one-stage fewshot learning", "year": "2019" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "Computer Vision Foundation / IEEE Computer Society", "ref_id": "b41", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018-06-18" }, { "authors": "Song-Hai Zhang; Tao Chen; Yi-Fei Zhang; Shi-Min; Ralph R Hu; Martin", "journal": "IEEE Transactions on Visualization and Computer Graphics", "ref_id": "b42", "title": "Vectorizing cartoon animations", "year": "2009" }, { "authors": "Shuang Zhao; Frédo Durand; Changxi Zheng", "journal": "IEEE Trans. Vis. Comput. Graph", "ref_id": "b43", "title": "Inverse diffusion curves using shape optimization", "year": "2018" }, { "authors": "Haikuan Zhu; Juan Cao; Yanyang Xiao; Zhonggui Chen; Zichun Zhong; Yongjie Jessica Zhang", "journal": "ACM Trans. Graph", "ref_id": "b44", "title": "Tcb-spline-based image vectorization", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 123.09, 90.21, 326.56, 143.06 ], "formula_id": "formula_0", "formula_text": "(P i , Q i )} N i=1 {P i -Q i } N i=1 O/ Ô L ref ine (a) Vector Branch (c) Contour Refinement (b) Image Branch Subtract Union P 1 Q 1 P 2 Q 2" }, { "formula_coordinates": [ 3, 308.86, 504.69, 211.28, 12.32 ], "formula_id": "formula_1", "formula_text": "{B i,j (x i,2j-1 , x i,2j , x i,2j+1 )} M j=1 (x i,2M +1 = x i,1" }, { "formula_coordinates": [ 3, 372.46, 530.71, 172.65, 21.64 ], "formula_id": "formula_2", "formula_text": "O Pi (x) = 1, x ∈ P i 0, x / ∈ P i(2)" }, { "formula_coordinates": [ 3, 339.48, 627.33, 205.64, 21.64 ], "formula_id": "formula_3", "formula_text": "O(x) = max i O Pi (x) = 1, x ∈ ∪ i P i 0, x / ∈ ∪ i P i(3)" }, { "formula_coordinates": [ 4, 50.11, 75.16, 53.28, 9.65 ], "formula_id": "formula_4", "formula_text": "∪ i (P i -Q i )." }, { "formula_coordinates": [ 4, 50.11, 87.98, 236.25, 32.32 ], "formula_id": "formula_5", "formula_text": "O(x) = max i [min(O Pi (x), 1-O Qi (x))] = 1, x ∈ g 0, x / ∈ g (4)" }, { "formula_coordinates": [ 4, 88.02, 221.02, 198.34, 14.43 ], "formula_id": "formula_6", "formula_text": "s Pi (x) = [2O Pi (x) -1] min j d(x; B i,j )(5)" }, { "formula_coordinates": [ 4, 107.95, 273.41, 178.42, 33.96 ], "formula_id": "formula_7", "formula_text": "ÔPi (x) = α(s Pi (x)) Ô = max i [min( ÔPi , 1 -ÔQi )](6)" }, { "formula_coordinates": [ 4, 80.41, 373.17, 205.95, 24.18 ], "formula_id": "formula_8", "formula_text": "{Q i } N i=1 from z: {x i,j } = D P (z), 1 ≤ i ≤ 2N, 1 ≤ j ≤ 2M(7)" }, { "formula_coordinates": [ 4, 145.98, 503.6, 140.39, 9.65 ], "formula_id": "formula_9", "formula_text": "I = D I (z)(8)" }, { "formula_coordinates": [ 5, 93.04, 124.08, 193.32, 27.11 ], "formula_id": "formula_10", "formula_text": "L P = E x∼[-1,1] 2 [|| Ô(x) -I g (x)|| 1 ] L I = ||I -I g || 2 + LPIPS(I, I g )(12)" }, { "formula_coordinates": [ 5, 109.7, 335.45, 176.66, 54.48 ], "formula_id": "formula_11", "formula_text": "E f (R i ) µ, σ = SA(f 1 , f 2 , ..., f N R ) z style = µ + ϵσ, ϵ ∼ N (0, 1) z T = FFN([z style , T ])(13)" }, { "formula_coordinates": [ 5, 115.33, 433.73, 171.03, 9.65 ], "formula_id": "formula_12", "formula_text": "L latent = ||z T -E(g T )|| 2(14)" }, { "formula_coordinates": [ 5, 111.53, 619.05, 170.68, 9.65 ], "formula_id": "formula_13", "formula_text": "L ref ine = L ras + λ reg L reg (16" }, { "formula_coordinates": [ 5, 282.21, 619.37, 4.15, 8.64 ], "formula_id": "formula_14", "formula_text": ")" }, { "formula_coordinates": [ 5, 128.79, 686.01, 157.57, 30.32 ], "formula_id": "formula_15", "formula_text": "L reg = K i=1 len(C i )(18)" }, { "formula_coordinates": [ 5, 320.34, 213.76, 224.78, 32.32 ], "formula_id": "formula_16", "formula_text": "s c1∪c2 (x) = min(s c1 (x), s c2 (x)), x / ∈ c 1 ∪ c 2 ≤ min(s c1 (x), s c2 (x)), x ∈ c 1 ∪ c 2 (19)" }, { "formula_coordinates": [ 5, 374.35, 407.49, 170.76, 14.43 ], "formula_id": "formula_17", "formula_text": "ûg = min i [max(u Pi , u Qi )](21)" }, { "formula_coordinates": [ 5, 369.19, 442.86, 175.92, 16.7 ], "formula_id": "formula_18", "formula_text": "u Qi (x) = max(-s Qi (x), 0)(22)" }, { "formula_coordinates": [ 5, 351.31, 488.79, 193.81, 24.9 ], "formula_id": "formula_19", "formula_text": "L = λ P L P + λ I L I + λ u L u L u = E x∼[-1,1] 2 [||û g (x) -u g (x)|| 1 ](23)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b2", "b9", "b20", "b25" ], "table_ref": [], "text": "A NOMALY detection is a fundamental and widely appli- cable data mining and machine learning task, whose aim is to isolate samples in a dataset that are suspected of being generated by a distribution different from the rest of the data.\nDepending on the composition of training and test sets, anomaly detection settings can be classified as unsupervised, semi-supervised, and supervised. In the supervised setting training data are labeled as normal and abnormal and and the goal is to build a classifier. The difference with standard classification problems there is posed by the fact that abnormal data form a rare class. In the semi-supervised setting, the training set is composed by both labelled and unlabelled data. A special case of this setting is the oneclass classification when we have a training set composed only by normal class items. In the unsupervised setting the goal is to detect outliers in an input dataset by assigning a score or anomaly degree to each object.\nFor each of these scenarios many techniques have been proposed, and recently deep learning based ones are witnessing great attention due to their effectiveness, among them, those employing Autoencoders are largely used. Roughly speaking, Autoencoders are neural networks that aim at reconstructing the input data after a dimensionality reduction step and anomalies are data with high reconstruction error. In [3], [10], [21], [26] detailed descriptions of such settings and related techniques are provided.\nIn this work we present a deep learning based technique for semi-supervised anomaly detection particularly suitable in the context in which also a few known anomalies are available. The idea at its basis is to output a reconstructed version of the input data through an innovative strategy to enlarge the difference between the reconstruction error of anomalies and normal items by instructing the model to put known anomalies outside of the domain description of the normal data.\nSpecifically we propose the AE-SAD algorithm (for Semi-supervised Anomaly Detection through Auto-Encoders), based on a novel training procedure and a new loss function that exploits the information of the labelled anomalies in order to allow Autoencoders to learn to poorly reconstruct them. We note that due to the strategy pursued by standard Autoencoders, which is based on a loss aiming at minimizing the training data reconstruction error such as the mean squared error function, if they are not provided just with normal data during the training they would learn to correctly reconstruct also the anomalous examples and this may cause a worsening of their generalization capabilities. Thus, the best strategy to exploit Autoencoders for semi-supervised anomaly detection, that is when anomalies are available, consists simply in disregarding anomalous examples. In this way the knowledge about the anomalies would not be exploited. Conversely, by means of the here proposed loss, the Autoencoder is able to take advantage of the information about the known anomalies and use it to increase the contrast between the reconstruction error associated with normal examples and those associated with both known and unknown anomalies, thus enhancing anomaly detection performances.\nThe experiments show that this new procedure achieves better performances than the standard Autoencoder approach and the main deep learning techniques for both unsupervised and semi-supervised anomaly detection. More-over it shows better generalization on anomalies generated with a distribution different from the one of the anomalies in the training set and robustness to normal data pollution.\nThe main contributions of this work are listed next:\n•\nWe introduce AE-SAD, a novel approach to train Autoencoders that applies to the semi-supervised anomaly detection setting and exploits the presence of labeled anomalies in the training set;\n• We show that AE-SAD is able to obtain excellent performances even with few anomalous examples in the training set;\n• We perform a sensitivity analysis confirming that the selection of the right values for the hyperparameters of AE-SAD is not critic and the method reaches good results in a relatively small number of epochs;\n•\nWe apply AE-SAD on tabular and image datasets, and on both types of data it outperforms the state-ofthe-art competitors;\n• Finally, we show that our method behaves better than competitors also in two particularly relevant scenarios, that are the one in which anomalies in the test set belong to classes different from the ones in the training set and the one in which the training set is contaminated by some mislabeled anomalies.\nThe paper is organized as follows. In Section 2 the main related works are described. In Section 3 the method AE-SAD is described and a comparative example with standard Autoencoders is provided in order to highlight the motivations behind our approach. In Section 4 the experimental campaign is described and the results are presented. Finally, Section 5 concludes the paper." }, { "figure_ref": [], "heading": "RELATED WORKS", "publication_ref": [ "b32", "b33", "b7", "b10", "b11", "b12", "b13", "b34", "b35", "b36", "b37", "b8", "b38", "b0", "b1", "b2", "b9", "b3", "b4", "b20", "b25", "b5", "b6", "b5", "b21", "b22", "b23", "b24", "b14", "b15", "b16", "b18", "b26", "b40", "b26", "b17", "b18", "b19", "b40", "b26", "b18" ], "table_ref": [], "text": "Several classical data mining and machine learning approaches have been proposed to detect anomalies, namely, statistical-based [33], [34], distance and density-based [8], [11], [12], [13], [14], [35], reverse nearest neighbor-based [36], [37], [38], isolation-based [9], angle-based [39], SVM-based [1], [2], and many others [3], [10].\nIn last years, the main focus has been on deep learningbased methods [4], [5], [21], [26]. Traditional deep learning methods for anomaly detection belong to the family of reconstruction error-based methods employing Autoencoders. Reconstruction error-based anomaly detection [6], [7] consists in training a Autoencoder to reconstruct a set of examples and then to detect as anomalies those inputs that show a sufficiently large reconstruction error. This approach is justified by the fact that, since the reconstruction process includes a dimensionality reduction step (the encoder) followed by a step mapping back representations in the compressed space (also called the latent space) to examples in the original space (the decoder), regularities should be better compressed and, hopefully, better reconstructed [6].\nBesides reconstruction error approaches, recently some methods [22], [23], [24], [25] based on Generative Adversarial Networks have been introduced. Basically they consist in a training procedure where a generative network learns to produce artificial anomalies as realistic as possible, and a discriminative network that assigns an anomaly score to each item.\nIn this work we focus on the family of reconstruction error-based methods. In this context a well known weakness of Autoencoders is that they suffer from the fact that after the training they become able to well reconstruct also anomalies, thus worsening the effectiveness of the reconstruction error as anomaly score. In order to reduce the impact of this phenomenon, methods that isolate anomalies by taking into account the mapping of the data into the latent space together with their reconstruction error have been introduced [15], [16], [17]. Specifically these methods are tailored for the unsupervised and One-Class settings, but they do not straightly apply to the semi-supervised one, in that they cannot exploit labeled examples.\nIn last years, some approaches [19], [27], [41] based on deep neural networks have been proposed to specifically address the task of Semi-Supervised Anomaly Detection.\nIn [27] is introduced A 3 , a neural architecture that combines three different networks that work together to identify anomalies: a target network that address the task of feature extractor, an anomaly network that generates synthetic anomalies and an alarm network that discriminate between normal and anomalous data. In particular the anomaly network has a generative task and is used to balance the training set with artificial anomalous examples.\nIn [18] is presented Deep-SVDD a method that tries to mimic, by means of deep learning architectures, the strategy pursued by different traditional one-class classification approaches, like SVDD and OC-SVM, consisting in transforming normal data so to enclose them in a minimum volume hyper-sphere. Differently form traditional reconstruction error based-approaches, these approaches can be trained to leave labeled anomalous outside the hyper-sphere representing the region of normality.\nAlthough the original method was specifically designed for One-Class classification, in [19] it has been modified in order to address the semi-supervised task. This is done by means of a loss function that minimizes the distance of normal points from a fixed center and maximizes the distance of anomalous ones from it; this approach is called Deep-SAD. This kind of approach turns out to be so effective to be applied also to the unsupervised setting, indeed in [20] the neural architecture of Deep-SAD is trained together with an Autoencoder with a strategy that mixes reconstruction error and distance from an hyper-sphere.\nIn [41] is introduced DevNet which represents one of the seminal Deep Learning approaches specific for Semi-Supervised Anomaly Detection. Next, the authors of [27] and [19] showed that their methods outperform DevNet." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "In this section the technique AE-SAD is presented. It is designed for anomaly detection and, in particular, it is focused on semi-supervised anomaly detection, namely on a setting where the training set mainly contains samples of one normal class and very few examples of anomalous data. The aim is to exploit the information coming from this set to train a system able to detect anomalies in a test set. Specifically, let X = {x 1 , . . . , x n } be the training set and for each i ∈ [1 . . . n], let y i ∈ {0, 1} be the label of x i , with y i = 0 if x i belongs to the normal class and y i = 1 if x i is anomalous; w.l.o.g. we assume that X ⊆ [0, 1] d which is always possible to obtain by normalizing the data.\nThe proposed technique is based on an Autoencoder (AE), a special type of neural network whose aim is to reconstruct in output the data x received in input and, thus, its loss can be computed as the mean squared error between x and x:\nL(x) = ||x -x|| 2 2 .\n(1)\nIn the unsupervised scenario, Autoencoder-based anomaly detection consists in training an AE to reconstruct a set of data and then to detect as anomalies those samples showing largest reconstruction error, which is defined as the value assumed by Equation (1) after the training phase and plays the role of anomaly score. This approach is justified by the observation that, since the reconstruction process includes a dimensionality reduction step (encoding) followed by a step mapping back representations in the latent space to examples in the original space (decoding), regularities should be better compressed and, hopefully, better reconstructed. Unfortunately, deep non-linear architectures are able to perform high dimensionality reduction while keeping reconstruction error low and often they generalize so well that they can also well reconstruct anomalies.\nTo alleviate this problem and, thus, to improve the performance of the AE exploiting the presence of anomalous examples in the training set, we propose the following novel formulation of the loss:\nL F (x) = (1 -y) • ||x -x|| 2 + λ • y • ||F (x) -x|| 2 ,(2)\nwhere F : [0, 1] d → [0, 1] d , and λ is an hyperparameter that controls the weight of the anomalies, in relation to the normal items, during the training. When x is a normal item the contribution it brings to the loss is ||x -x|| 2 which means that the reconstruction x is forced to be similar to x as in the standard approach. Conversely, if x is an anomaly, the contribution brought to the loss is ||F (x) -x|| 2 which means that in this case\nx is forced to be similar to F (x). Hence, the idea is that, by exploiting Equation (2) to evaluate the loss during the training process, normal data x are likely to be mapped to x which is as similar as possible to x and anomalous data x are likely to be mapped to F (x) which is substantially different from x.\nIn this way, basically, the Autoencoder is trained to reconstruct in output the anomalies in the worst possible way and at the same time to maintain a good reconstruction of the normal items.\nMoreover, differently from the standard approach, the proposed technique does not employ the same function both for training the system and for computing the anomaly score. Indeed, Equation ( 2) is exploited to compute the loss during the training process and Equation ( 1) is exploited to compute the anomaly score since it accounts for the reconstruction error and then it is likely to evaluate anomalies as data incorrectly reconstructed being F (x) likely to be substantially different from x.\nIn this work we use F (x) = 1 -x which corresponds, in the domain of the images, to the negative image of x." }, { "figure_ref": [ "fig_2", "fig_3", "fig_2", "fig_2", "fig_3" ], "heading": "Comparative behavior with standard Autoencoder", "publication_ref": [], "table_ref": [], "text": "In many real life situations, in general the anomalies that can arise in a evaluation phase may be generated from a distribution different from the anomalies used for the training.\nIn order to investigate the behaviour of our method in such a situation, we build a set-up in which, for each of the considered datasets, we select for the training phase all the items of a certain class as inliers, and a fixed number of examples from only some of the other classes as anomalies; the test set, on the contrary, is composed by all the classes of the selected dataset. In particular in the test set there will be In particular some unseen classes, such as 7, are reconstructed in reverse because apparently the Autoencoder judges them more similar to the anomalies used for the training than to the normal class; other classes, such e.g. as 2 and 4, present a reconstruction more similar to the original image (the background remains black) but from which it is more difficult to determine the digit that it represents.\nIn any case all these reconstructions are very worse than the one obtained in a semi-supervised setting (Figure 1(b)), and this leads to better performances of the reconstruction error as anomaly score even in this setting where the Autoencoder is trained with only a portion of the anomalies.\nFigure 2 reports some quantitative results associated with the above experiment. In particular, Figures 1(a) and (b) report the boxplot of the test reconstruction error associated with the examples of the classes 0 to 9 for the standard AE and for AE-SAD. These plots highlight the main effect of our strategy over the classical use of an AE, that is reconstruction errors of anomalous classes are greatly amplified (consider that the y-scale is logarithmic) and the overlapping with the normal class is reduced. For example, the class 1, that is essentially indistinguishable from the normal data by the AE, is almost completely separated by our method. Clearly AE-SAD takes advantage of the fact that the class 1 is an anomalous class seen during training. However, we can observe that a similar behavior is exhibited also by some 1 Comparison between the AUC of AE and AE-SAD on each anomalous class for the example of Figures 1 and2.\nTable 1 reports the AUC obtained on a test set consisting in unseen examples from the normal and each anomalous class of AE and AE-SAD, and the increase in AUC achieved by AE-SAD, respectively. Specifically, the first four rows concern classes seen during training, while the remaining five rows classes unseen during training. As a main result, we can observe that all the anomalous classes, both seen and unseen, take advantage of the AE-SAD strategy. As for the unseen classes, the increase in AUC is always above +0.05 and is practically close the maximum achievable, since all the final AUC values are around 0.99. As for the unseen classes, two of them achieve sensible AUC improvements, namely class 4 and 7, while for the remaining classes there is a gain, even if less marked.\nIn conclusion, notably, all the seen anomalous classes, even those which are practically indistinguishable from the normal one by standard AE, become almost completely separated using our method. Moreover, also unseen anomalous may achieve large accuracy improvements either since they are reversely reconstructed or their associated reconstruction error get worse due to a more confused reconstruction." }, { "figure_ref": [], "heading": "EXPERIMENTAL RESULTS", "publication_ref": [], "table_ref": [], "text": "In this section we describe experiments conducted to study the behavior of the proposed method. In particular, we start by describing the experimental settings (Section 4.1) and, subsequently, we report the experimental evaluation of AE-SAD driven by three main goals, namely\n• studying how parameters affect its behavior,\n• comparing its performance with existing algorithms,\n• analyzing its effectiveness in the scenario of polluted data.\nIn detail, sections are organized as follows. In Sections 4.2 and 4.3 we describe the sensitivity of the method to the main parameters of the algorithm, namely the number of known anomalies, the regularization term and the number of training epochs. In Section 4.4 we compare AE-SAD with existing methods in the context where they are trained on a data set containing normal data and few known anomalies and tested on a data set with normal data and anomalies belonging to both seen and unseen anomalous classes. In Section 4.5 we test AE-SAD in the challenging scenario, often arising in real life applications, where the training set is polluted by mislabeled anomalies. Specifically, the methods are trained with a set containing normal data, data labeled as anomalous, and anomalous data incorrectly labeled as normal. Furthermore, we report, in Section 4.6, the behavior of the proposed technique when known anomalies belong to a set of classes." }, { "figure_ref": [], "heading": "Experimental settings", "publication_ref": [ "b1", "b26", "b18", "b17", "b39", "b28", "b29", "b30" ], "table_ref": [], "text": "In general, given a multi-labeled dataset consisting of a training set and of a test set, we select some of the original classes to form the normal class of our problem and some other classes as known anomalous classes. The remaining classes form the unknown anomalous classes. Then, the training set for our method consists of all the training examples of the normal class, labeled as 0, and of s randomly picked training examples of the anomalies, labeled as 1. Our test set coincides with the original test set and, hence, it comprises all the kinds of anomalies, both known and unknown.\nIn the rest of the section, we consider different rules for selecting normal and known anomalous classes in order to take into account different scenarios. We will refer to them later as one-vs-one, one-vs-many, one-vs-all, and many-vsmany, depending on the number of original classes used to form both the normal and the known anomalous classes. Moreover, sometimes we consider also the case in which the normal data is polluted by injecting a certain percentage of mislabeled known anomalous examples (these examples are wrongly labeled as 0).\nWhen an unsupervised method is employed to detect the anomalies in the test set, since it does not take advantage of the labels associated with training data, we remove the examples labeled as 1 from the training set to avoid it incorporates them in the normal behavior. Indeed, in this case the method works as a semi supervised classifier and, hence, it expects to be trained only on normal data.\nBecause of fluctuations due to the random selection of the s known anomalies, we run the methods on 10 different versions of the same kind of training sets. Results of each experiment are obtained as the mean on the different runs.\nThe competitors considered were the baseline Autoencoder employing the same architecture of AE-SAD but with the classical reconstruction error loss, the One-Class SVM (OC-SVM) which is a classical statistical learning-based oneclass classification approach [2], the A 3 method [27] which is deep learning semi-supervised anomaly detection approach exploiting the analysis of the activation values in its target network, and Deep-SAD [19] which is a deep learning semi-supervised anomaly detection approach extending the Deep-SVDD [18] strategy based on mapping normal data within an hyper-sphere. As for the datasets, depending on the experiment, we consider the tabular real-world anomaly detection specifc datasets belonging to the ODDS library [40], as well as the MNIST [29], E-MNIST [30], Fashion-MNIST [31], and CIFAR-10 [32] image datasets.\nAs for the architecture of AE-SAD, we employ an Autoencoder consisting of two layers for the encoder, a latent space of dimension 32, and two layers symmetrical to the ones of the encoder for the decoder. We use dense layers on the MNIST and Fashion MNIST datasets and convolutional layers on CIFAR-10. As for the other methods, we employ the parameters suggested in their respective papers." }, { "figure_ref": [ "fig_5" ], "heading": "Sensitivity analysis to the parameters λ and s", "publication_ref": [], "table_ref": [], "text": "In this section, we intend to determine the impact of the regularization parameter λ on the results of the method and the behavior for different numbers of labeled anomalies. To this aim, we first consider the MNIST dataset and vary both the number s of labeled anomalies in {8, 16, 32, 64, 128} and the value of the parameter λ.\nAs for the latter parameter, since it weights the contribution to the loss coming from the labeled anomalies, it makes sense to relate it to the number s of anomalies seen during training. Thus, let α be a value in [0, 1]. We denote by λ(α) the following value for the regularization parameter\nλ(α) = 1 + α n s -1 ,\nvarying between 1 (for α = 0) and n/s (for α = 1). Since the weight of each is 1 in our loss, for α = 0 each single anomalous example weights as a single inlier, while for α = 1 each single anomalous example weights as n/s inliers and, thus, globally inliers and outliers contribute half of the total loss in terms of weights. For α > 1 outliers weight globally more than inliers. In our experiments, we varied λ in λ ∈ {λ 0 = λ(0) = 1, λ 1 = λ(α 1 ), . . . , λ 5 = λ(α 5 )}, where α 1 , . . . , α 5 are 5 log-spaced values between 0.1 and 2.\nAs for the datasets, in this experiment we consider the one-vs-one setting. Consider a dataset containing m classes. We fix in turn one of the m classes as the normal one and then obtain m -1 different training sets by selecting one the remaining classes as the known anomalous. Since each of the considered datasets consists of m = 10 classes and since Since we consider 6 different values for λ and 5 different values for s, the total number of runs is in this case 27,000. Specifically each curve reports the average AUC of AE-SAD when the normal class is held fixed. All the curves show a similar behavior. As for the regularization parameter λ, it can be concluded that its selection does not represent a critical choice. For λ > 1 some improvements can be appreciated, but the method appears to be not too much sensitive to the variation of the regularization parameter independently from the number of labeled anomalies. This behavior can be justified by the fact that, even if anomalies are few in comparison to normal examples and, moreover, even if anomalies would globally weight less than inliers in the loss function, the reconstruction error of a badly reconstructed anomaly (that is not reversely reconstructed as required by our loss) is potentially much larger than the reconstruction error associated with a single inlier. Indeed, if the network learned to output the original reconstruction of even a single anomaly, the associated loss contribution would be very large due to the requirement to minimize the distance to the target reverse reconstruction function. Informally, we can say that AE-SAD cannot afford to make a mistake in reconstructing anomalies. From the above analysis, as for the parameter λ in the following, if not otherwise stated, we hold fixed λ to the value λ 1 = λ(0.1).\nFigure 4 shows the sensitivity to the parameter s on Fashion-MNIST. The AUC of the baseline AE is also reported for comparison. The latter AUC is constant since AE in an unsupervised method and do not uses the labeled anomalies. These curves confirm the behavior already observed on MNIST. Summarizing, as for the number of la- beled anomalies s, AE-SAD performs remarkably well even when only a few anomalies are available. The same reason provided above can be used to justify this additional result. As for the parameter s in the following, if not otherwise stated, we set s = 8 by default to considered the challenging scenario in which a limited number of labeled anomalous examples is available." }, { "figure_ref": [ "fig_7", "fig_7", "fig_7" ], "heading": "Sensitivity analysis to the number of epochs", "publication_ref": [], "table_ref": [], "text": "The number training epochs is an important hyperparameter for all methods that employ neural networks.\nIn the anomaly detection context the choice of this hyperparameter might become critical since the value of the loss function of a standard AE is not indicative of the quality as anomaly detector. Indeed, while a low MSE value on the training set means that training examples are well reconstructed, this does not necessarily implies that anomalies are better identified. Differently, with our method we introduce a novel loss and a novel strategy to train an Autoencoder that aims at enlarging the difference between the reconstruction error of the anomalies and the ones of the normal items.\nThus in order to compare the behaviour of AE-SAD with standard Autoencoders as the number of epochs grows, we consider the ODDS datasets and we train both the models for 2500 epochs and at each epoch we compute the AUC on the test set and the value of the relative loss function on the training set. Since the ODDS datasets are not partitioned into a training and a test set by design, we split each of them by randomly selecting the 60% of the normal items to be part of the training set and keeping the rest for the test set. As for the anomalies, we randomly pick a number of them such that they represent the 5% of the training set, while the remaining anomalies are used in the test set. For AE-SAD, we fix the parameter λ by following the rule obtained in previous section.\nThe results reported in Figure 5 show that in general AE-SAD outperforms the standard AE in terms of AUC at each epoch. Moreover the trend of the AUC of our method is much more regular and almost always increasing. This can be justified because after a certain number of epochs the standard AE learns to reconstruct also the anomalies, while by using the loss (2) we always increase the contrast between the reconstruction of the two classes during all the training process. It can be concluded that, while the number of epochs is a critical hyperparameter for standard AE, this is not the case for AE-SAD. In our case it is not hard to select a good value for this hyperparameter, since the more epochs we consider, the higher the expected performances.\nLegends of Figure 5 report between brackets the AUC value in the epoch in which the lowest value of the loss is reached. As we can see, while for AE-SAD a low value of the loss always implies an AUC value close to the maximum, the same fact does not hold for standard AE. Indeed, for what concerns the latter method, often the maximum AUC value is obtained when the value of the loss is still high. This happens because, the standard AE is able to generalize to classes not included in the training set that could share similarities with anomalies, while the same effect is mitigated in AE-SAD by the nature of the loss.\nFinally from Figure 5, we can deduce that in general AE-SAD achieves good AUC values even after a relatively small number of epochs, thus the training is not too slow." }, { "figure_ref": [], "heading": "Comparison with other methods", "publication_ref": [], "table_ref": [], "text": "In this section, we compare AE-SAD with the baseline AE, OC-SVM, A 0 -1.00 1.00 1.00 0.88 1.00 1.00 1.00 1.00 1.00 0.99 1 0.92 -1.00 0.52 1.00 1.00 0.72 1.00 1.00 0.78 0.88 2 1.00 1.00 -1.00 1.00 1.00 0.98 0.95 1.00 1.00 0.99 3 1.00 1.00 1.00 -1.00 1.00 1.00 1.00 1.00 1.00 1.00 4 0.98 1.00 0.78 1.00 -0.88 1.00 0.92 1.00 0.85 0.93 5 0.98 1.00 1.00 0.98 1.00 -1.00 1.00 1.00 1.00 0.99 6 0.98 0.75 0.95 0.85 1.00 0.88 -1.00 0.78 0.85 0.89 7 0.98 0.90 0.82 1.00 0.70 0.80 0.70 -1.00 0.65 0.84 8 0.95 1.00 1.00 0.95 1.00 1.00 1.00 1.00 -1.00 0.99 9 0.70 0.78 0.70 0.48 0.55 0.40 0.70 0.48 0.45 -0.58 µ 0.94 0.94 0.92 0.86 0.90 0.88 0.90 0.93 0.91 0.90 0.91" }, { "figure_ref": [ "fig_8" ], "heading": "TABLE 4 Probability of winning of AE-SAD for each pair of normal and", "publication_ref": [ "b26" ], "table_ref": [], "text": "anomalous classes in the one-vs-one setting.\nconsidered methods on 5 different runs for each dataset and we compute the mean and the standard deviation of the AUC. The results, reported in the Table 2 show that our method almost always achieves better performances than competitors, and sometimes this improvements are relevant. We also consider for the comparison with the competitors the MNIST, Fashion-MNIST and CIFAR-10 datasets, for which we implement the one-vs-one setting already illustrated in Section 4.2. This setting simulates the scenario in which the detector has limited knowledge of the expected anomalies and, as such, it is a notable scenarios to deal with.\nTable 3 reports the relative number of wins for each pair of methods on the MNIST, Fashion MNIST, and CIFAR-10 datasets. Specifically, each entry of the table is the probability that the method located on the corresponding row scores an AUC greater than that of the method located on the corresponding column. In the considered setting AE-SAD shows a larger probability of winning over all the competitors on all the datasets. Deep-SAD is the runner-up in terms of number of wins, while A 3 does not perform sufficiently in this setting despite being a semi-supervised method. In Section 4.6, we compare AE-SAD and A 3 on a setting where competitor performance are shown in [27] to be strong across various configurations. known anomalous classes, by reporting the relative number of wins of AE-SAD on the competitors. Specifically, each entry of the table is the probability that the AUC scored by AE-SAD is greater than the AUC scored by another method. For almost all the combinations AE-SAD is preferable to a randomly selected method of the pool.\nTo provide details also on the comparison of the methods on the each single dataset, Figure 7 reports the scatter plot of the AUC of AE-SAD (on the y-axis) vs the AUC of each competitor (on the x-axis) associated with each of the above experiments. Specifically, blue +-marks are relative to the experiments described in this section, while red ×-marks are relative experiments reported in the next section concerning polluted normal data. The plots provide a more complete picture of the relative performances of our method. " }, { "figure_ref": [ "fig_8" ], "heading": "Normal data polluted", "publication_ref": [], "table_ref": [], "text": "In real scenarios it may happen that the training set is polluted by anomalies that are mislabeled and thus appear to the model as normal items. E.g. consider the cases in which the normal data is collected by exploiting semi-automatic or not completely reliable procedures and/or limited resources are available for cleaning the possibly huge normal data collection, and an human analyst points out to the system a small number of selected anomalies in order to improve its performances. In this section we investigate the above scenario. We pollute the datasets by injecting a certain percentage of mislabeled known anomalous examples (these examples are wrongly labeled as 0). The percentage level of pollution quantifies the number of mislabeled known anomalies as a percentage of the number of true inliers in the dataset. That is to say, a 5% level of pollution means that in the training set there are 0.05n mislabeled known anomalies (i.e., labeled 0) and n inliers (also labeled 0).\nBecause of the shape of the loss (2) the Autoencoder is trained to well reconstruct the mislabeled anomalies, it is important to select a value of the hyper-parameter λ that forces the model to give more relevance to the (bad) reconstruction of the anomalies labeled 1. In Figure 6 we show the results obtained for different values of λ. The number s of correctly labeled known anomalies is set to s = 100. The red lines refer to the 2% percentage level of pollution, while the blue lines to the 4% of pollution. We can observe that increasing values of λ tend to mitigate the effect of the pollution and to obtain better results. Despite this, the choice of a good value for λ is not critical, since all the values of λ considered guarantee good performances, though the largest ones are generally associated with the best scores. In the sequel of this subsection we set λ to about n/s which corresponds approximately to λ 4 .\nIn Table 5 we compare our method with other anomaly detection algorithms for various percentages of pollution. The one-vs-three setting is considered here to increase the diversity of the pollution. The number of correctly labeled known anomalies is s = 150 (50 for each of the three known anomalous classes). AE-SAD improves on the baseline AE and performs better also in this setting.\nIn order to make the results more robust, we next build up two systematic experiments. In each one, we add to the inliers in the training set 100 randomly selected elements for each anomalous class. Moreover, we assume that only 2 examples per anomalous class are correctly labeled 1.\nThe first experiment concerns the one-vs-all scenario in which we consider all the examples of a selected class as inliers and all the other classes as known anomalous ones. These datasets are extremely polluted, being their pollution percentage about 15%, and anomalies have large diversity.\nThe results, reported in Table 6, show that for each class our method achieves good performances being almost always the best method. Even in those few cases in which it is not the best method, it is always the second ranked with an AUC value very close to the one of the first.\nThe second experiment considers the one-vs-many scenario in which we have three known anomalous classes. Deep-SAD 0 Specifically, to make experiments easily reproducible we fix in turn the normal class and partition the remaining nine classes into three groups consisting of three consecutive classes. The pollution percentage in this case is about the 5%. Table 7 reports the result of this experiment, which confirm the generalization ability of AE-SAD in the polluted scenario. In Figure 7, red ×-marks compare the AUCs on polluted data of experiments in Tables 6 and7." }, { "figure_ref": [], "heading": "Behavior in the A 3 experimental setting", "publication_ref": [ "b26", "b26", "b26", "b8", "b27", "b25", "b26" ], "table_ref": [ "tab_6" ], "text": "In this section, in order to compare AE-SAD and A 3 on a setting where competitor performance are shown in [27] to be strong across various configurations, we consider precisely the one proposed by the A 3 authors where they show how their method well surpasses the unsupervised baseline methods, and on most experiments also the semisupervised baseline methods. In these experiments both normal examples and anomalies belong to a set of classes, thus can be characterized as a many-vs-many scenario, and consider the there employed image data, namely MNIST and E-MNIST. The composition of the datasets, as specified in [27], are reported in Table 8 and the results have been obtained by using the same experimental setting. In this experiments, authors [27] compare their method, namely A 3 , with standard AE, Isolation Forest [9], DAGMM [28] and DevNet [26]. Since A 3 overcomes other competitors, we report only A 3 results and refer the interested readers to Table 3 of [27] for the performances of other methods.\nIt can be noted that performances of A 3 and AE-SAD are comparable when the set of anomalies in the test set belong to the same classes of the known anomalies, while AE-SAD outperforms A 3 in all datasets and experiments when the test set contains anomalies in classes not represented in the training set. This confirms the ability of AE-SAD in detecting anomalies belonging to completely unknown classes, an important requirement of anomaly detection systems." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this work, we deal with the semi-supervised anomaly detection problem in which a limited number of anomalous examples is available and introduce the AE-SAD algorithm, based on a new loss function that exploits the information of the labelled anomalies and allows Autoencoders to learn to reconstruct them poorly. This strategy increases the contrast between the reconstruction error of normal examples and that associated with both known and unknown anomalies, thus enhancing anomaly detection performances.\nWe focus on classical Autoencoders and show that our proposed strategy is able to reach state-of-the-art performances even when built on very simple architectures. We also prove that AE-SAD is very effective in some relevant anomaly detection scenarios, namely the one in which anomalies in the test set are different from the ones in the training set and the one in which the training set is polluted by some mislabeled examples.\nAs a future work we intend to inject the here presented strategy in other reconstruction-based architectures that permit to organize their latent space to guarantee continuity and completeness, such as VAEs and GANs, to possibly obtain even improved detection performances." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Fabrizio Angiulli is a full professor of computer science at DIMES, University of Calabria, Italy. His research interests include data mining, machine learning, and artificial intelligence, with a focus on anomaly detection, large and high-dimensional data analysis, and explainable learning. He has authored more than one hundred papers appearing in premier journals and conferences. He regularly serves on the program committee of several conferences and, as an associate editor of AI Communications. " } ]
Reconstruction error-based neural architectures constitute a classical deep learning approach to anomaly detection which has shown great performances. It consists in training an Autoencoder to reconstruct a set of examples deemed to represent the normality and then to point out as anomalies those data that show a sufficiently large reconstruction error. Unfortunately, these architectures often become able to well reconstruct also the anomalies in the data. This phenomenon is more evident when there are anomalies in the training set. In particular when these anomalies are labeled, a setting called semi-supervised, the best way to train Autoencoders is to ignore anomalies and minimize the reconstruction error on normal data. The goal of this work is to investigate approaches to allow reconstruction error-based architectures to instruct the model to put known anomalies outside of the domain description of the normal data. Specifically, our strategy exploits a limited number of anomalous examples to increase the contrast between the reconstruction error associated with normal examples and those associated with both known and unknown anomalies, thus enhancing anomaly detection performances. The experiments show that this new procedure achieves better performances than the standard Autoencoder approach and the main deep learning techniques for semi-supervised anomaly detection.
Reconstruction Error-based Anomaly Detection with Few Outlying Examples
[ { "figure_caption": "•examples from the normal class,• anomalies belonging to classes that have been used for the training phase,• anomalies belonging to classes that have not been used for the training, and thus are unknown for the model.In Figure1(a) are reported the original and the reconstructed images from the training and the test set obtained in a setting where the class 8 of the dataset MNIST is considered as normal and the classes 1, 3, 5, and 9 are the anomalous classes used for the training. There are 100 labeled anomalies in the training set. As we expected, in the training set the normal examples are well reconstructed in a similar way as what happens for the standard AE training setting, while the anomalies are reconstructed as the negative of the input images; for what concerns the test set we can see that the normal examples are again well reconstructed, while the anomalies must be distinguished in two different types: while the classes 1, 3, 5, 9, that have been used in the training, are reconstructed in a way very similar to the negative image, the remaining classes show a more confused reconstruction.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(a) AE-SAD. (b) Standard AE.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. Original and reconstructed images of training and test set. In this example the class 8 is normal and the class 1, 3, 5, 9 are the anomalous classes used in the training.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. In this example the class 8 is normal and the class 1, 3, 5, 9 are the anomalous classes used in the training.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Sensitivity to the regularization parameter λ.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Sensitivity to the number of labeled anomalous examples s.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 33Figure 3 reports the results of the experiment on MNIST.Since we consider 6 different values for λ and 5 different values for s, the total number of runs is in this case 27,000. Specifically each curve reports the average AUC of AE-SAD when the normal class is held fixed. All the curves show a similar behavior. As for the regularization parameter λ, it can be concluded that its selection does not represent a critical choice. For λ > 1 some improvements can be appreciated, but the method appears to be not too much sensitive to the variation of the regularization parameter independently from the number of labeled anomalies. This behavior can be justified by the fact that, even if anomalies are few in comparison to normal examples and, moreover, even if anomalies would globally weight less than inliers in the loss function, the reconstruction error of a badly", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Sensitivity to the number of epochs. For both standard Autoencoder and AE-SAD is reported in the legend the AUC value relative to the epoch in which the lowest value of the loss has been observed.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Comparison of the AUCs of AE-SAD and competitors on both clean (blue +-marks) and polluted (red ×-marks) data.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "3 , and Deep-SAD. First, we test them on the ODDS datasets splitting them into training and test set", "figure_data": "ODDS Datasets AE-SAD AE OC-SVM annthyroid (6) .990±.001 .692±.104 .585±.003 .689±.084 .662±.040 Dataset (d) A 3 Deep-SAD arrhythmia (274) .796±.054 .789±.011 .808±.019 .692±.041 .751±.036 breastw (9) .992±.003 .995±.002 .994±.002 .995±.002 .977±.008 cardio (21) .994±.002 .857±.078 .954±.003 .904±.025 .785±.071N0 1 2 3AMNIST 4 5 -1.00 1.00 1.00 0.98 1.00 1.00 0.95 1.00 0.98 0.99 0 1 2 3 6 7 8 9 µ 0.98 -0.90 0.95 1.00 0.98 0.95 1.00 0.95 1.00 0.97 0.92 1.00 -0.95 0.92 0.98 0.88 0.95 1.00 0.92 0.95 0.90 1.00 0.95 -0.90 0.95 1.00 0.98 1.00 0.98 0.96glass (9).974±.014 .664±.114 .527±.182 .639±.098 .880±.07640.82 0.98 1.00 1.00 -0.88 0.70 0.95 0.98 0.88 0.91ionosphere (33) .971±.01 .910±.009 .918±.006 .668±.014 .953±.009 letter (32) .908±.035 .802±.023 .546±.016 .539±.016 .768±.021 lympho (18) .986±.011 .993±.007 .940±.043 .977±.037 .860±.158 mammography (6) .916±.029 .874±.021 .880±.019 .835±.047 .759±.106 musk (166) 1.00±.000 1.00±.000 1.00±.000 1.00±.000 1.00±.000 optdigits (64) 1.00±.000 .975±.007 .718±.027 .906±.028 .843±.0515 6 7 8 9 µ0.88 1.00 0.98 1.00 0.92 -0.98 1.00 1.00 1.00 0.97 0.85 1.00 0.95 0.98 0.88 0.95 -0.90 1.00 1.00 0.94 0.95 1.00 0.95 0.98 0.98 1.00 0.98 -0.98 1.00 0.98 0.72 0.98 0.80 0.75 0.75 0.78 0.75 0.72 -0.80 0.78 0.88 1.00 1.00 0.98 0.95 0.98 0.92 1.00 0.92 -0.96 0.88 0.99 0.95 0.95 0.92 0.94 0.91 0.94 0.98 0.95 0.94pendigits (16) 1.00±.000 .747±.083 .983±.006 .983±.009 .964±.023Fashion MNISTpima (8) satellite (36) satimage-2 (36) .999±.001 .998±.002 .997±.002 .982±.004 .968±.012 .733±.022 .594±.033 .698±.013 .634±.049 .657±.024 .907±.025 .812±.014 .728±.004 .802±.003 .827±.030 shuttle (9) .808±.353 .914±.138 .993±.000 .977±.007 .984±.015 speech (400) .479±.024 .463±.025 .465±.023 .535±.050 .478±.034 thyroid (6) .987±.011 .922±.072 .897±.038 .776±.123 .885±.089 vertebral (6) .635±.138 .444±.078 .522±.063 .351±.033 .558±.037N0 1 2 3 4 5A0 -0.95 0.98 0.92 0.95 0.95 0.98 0.85 0.90 0.85 0.92 1 2 3 4 5 6 7 8 9 µ 1.00 -1.00 0.98 1.00 0.98 1.00 1.00 1.00 0.95 0.99 1.00 0.88 -0.98 0.88 0.95 0.95 0.90 0.88 0.92 0.92 0.95 0.98 0.92 -0.90 0.92 0.95 0.88 0.92 0.82 0.92 0.98 1.00 0.85 1.00 -0.90 0.90 0.78 0.82 0.80 0.89 0.88 0.80 0.88 0.92 0.80 -0.80 1.00 0.82 0.92 0.87vowels (12).975±.021 .904±.026 .816±.051 .528±.061 .937±.05560.98 0.98 0.88 1.00 0.95 1.00 -0.95 1.00 0.98 0.97wbc (30) wine (13) Comparison with competitors in the ODDS datasets. .970±.023 .964±.020 .953±.02 .878±.052 .891±.065 .994±.004 .829±.121 .998±.001 .893±.080 .817±.142 TABLE 27 8 9 µ0.98 0.90 0.92 0.92 0.98 1.00 0.98 -0.92 0.98 0.95 0.82 0.75 0.85 0.80 0.82 0.80 0.78 0.82 -0.80 0.81 0.68 0.82 0.62 0.72 0.72 0.72 0.85 0.98 0.75 -0.76 0.92 0.89 0.88 0.92 0.89 0.91 0.91 0.91 0.89 0.89 0.90CIFAR-10NA0123456789µMNISTA E -S A DO C -S V MA EA 3D ee p -S A DmeanAE-SAD-1.000.911.000.850.94OC-SVM0.00-0.000.860.050.23AE0.091.00-1.000.700.70A 30.000.140.00-0.020.04Deep-SAD0.150.950.300.98-0.59mean0.060.740.280.940.38-Fashion MNISTA E -S A DO C -S V MA EA 3D ee p -S A DmeanAE-SAD-1.000.980.980.640.90OC-SVM0.00-0.010.900.110.25AE0.020.99-0.970.380.59A 30.020.100.03-0.030.05Deep-SAD0.360.890.620.97-0.51mean0.110.650.370.890.26-CIFAR-10A E -S A DO C -S V MA EA 3D ee p -S A DmeanAE-SAD-0.971.000.860.810.91OC-SVM0.03-0.480.390.290.30AE0.000.52-0.540.420.37A 30.140.610.46-0.480.42Deep-SAD0.190.710.580.52-0.50mean0.090.700.630.480.50-TABLE 3Probability of winning for each pair of methods in the one-vs-oneexperimental setting.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "details the result for each pair of normal and Fig. 6. Sensitivity analysis of the regularization parameter λ for different levels of pollution.", "figure_data": "MNIST 3vs[4,5,6]Method0.25%0.5%Pollution 2.5%5%12.5%AE-SAD .", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Standard AE .944±.005 .942±.004 .919±.006 .904±.005 .864±.011 OC-SVM .871±.000 .870±.000 .863±.000 .854±.000 .821±.001 A 3 .622±.037 .675±.035 .725±.099 .653±.085 .603±.077 Deep-SAD .958±.009 .957±.004 .948±.003 .944±.009 .911±.032", "figure_data": "Fashion MNIST 3vs[7,8,9]Method0.25%0.5%Pollution 2.5%5%12.5%AE-SAD .946±.002 .944±.003 .945±.004 .942±.004 .939±.002Standard AE .917±.001 .908±.002 .890±.003 .880±.003 .858±.002OC-SVM .902±.000 .901±.000 .898±.000 .895±.000 .880±.000A 3.877±.008 .875±.005 .861±.013 .852±.013 .838±.015Deep-SAD .925±.015 .913±.005 .904±.016 .905±.013 .901±.009TABLE 5Comparison with competitors for various percentage levels of pollution.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "885±.015 .911±.001 .770±.018 .910±.037 1 .996±.001 .996±.001 .986±.001 .983±.038 .983±.011 2 .913±.020 .767±.024 .729±.002 .676±.018 .869±.033 3 .929±.015 .821±.010 .820±.001 .730±.013 .861±.045 4 .940±.009 .854±.013 .876±.001 .794±.066 .880±.044 5 .915±.014 .807±.017 .697±.002 .484±.110 .827±.058 6 .958±.012 .877±.023 .863±.001 .882±.083 .925±.033 7 .956±.023 .907±.005 .895±.001 .879±.057 .900±.031 8 Deep-SAD T-shirt/top .911±.071 .781±.007 .859±.001 .692±.053 .831±.061 Trouser .980±.008 .958±.003 .957±.001 .949±.009 .957±.026 Pullover .877±.039 .729±.016 .843±.001 .576±.103 .801±.072 Dress .920±.028 .807±.009 .879±.001 .857±.024 .904±.021 Coat .847±.041 .770±.039 .853±.001 .762±.064 .843±.049 Sandal .908±.018 .633±.025 .807±.001 .841±.037 .864±.015 Shirt .796±.037 .701±.039 .778±.001 .452±.078 .760±.030 Sneaker .982±.004 .927±.010 .975±.001 .970±.004 .931±.058 Bag .940±.019 .541±.009 .755±.001 .607±.051 .878±.052 Ankle boot .971±.012 .807±.023 .952±.001 .966±.018 .937±.068 TABLE 6 Comparison with competitors in the one-vs-all polluted setting.", "figure_data": "MNISTClassAE-SADAEOC-SVMA 3Deep-SAD0.980±.006 ..843±.018 .750±.016 .788±.001 .663±.069 .866±.0289.942±.016 .897±.008 .884±.001 .829±.066 .922±.010Fashion MNISTClassAE-SADAEOC-SVMA 3", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results in the many-vs-many experimental setting described in the A 3 paper[27].", "figure_data": "DataNormalTrain Anomaly Anomaly TestAE-SAD AUCA 3 AUC0, . . . , 56, 76, 7.98±.00.99±.00MNIST4, . . . , 9 0, . . . , 50, 1 6, 70,1 6, 7, 8, 91.0±.00 .92±.011.0±.00 .88±.034, . . . , 90, 10, 1, 2, 3.96±.01.92±.020, . . . , 5A, . . . , EA, . . . , E.99±.00.99±.01E-MNIST0, . . . , 5A, . . . , EA, . . . , E, V, . . . , Z.98±.00.96±.010, . . . , 5V, . . . , ZV, . . . , Z.99±.00.99±.000, . . . , 5V, . . . , ZA, . . . , E, V, . . . , Z.98±.00.95±.02", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" } ]
Fabrizio Angiulli; Fabio Fassetti; Luca Ferragina
[ { "authors": "D Tax; R Duin", "journal": "", "ref_id": "b0", "title": "Support Vector Data Description", "year": "2004" }, { "authors": "B Sch Ölkopf; J Platt; J Shawe-Taylor; A Smola; R Williamson", "journal": "Neural Computation", "ref_id": "b1", "title": "Estimating the Support of a High-Dimensional Distribution", "year": "2001" }, { "authors": "C Aggarwal", "journal": "Springer", "ref_id": "b2", "title": "Outlier Analysis", "year": "2013" }, { "authors": "I Goodfellow; Y Bengio; A Courville", "journal": "MIT Press", "ref_id": "b3", "title": "Deep Learning", "year": "2016" }, { "authors": "R Chalapathy; S Chawla", "journal": "", "ref_id": "b4", "title": "Deep Learning for Anomaly Detection: A Survey", "year": "2019" }, { "authors": "S Hawkins; H He; G Williams; R Baxter", "journal": "Int. Conf", "ref_id": "b5", "title": "Outlier Detection Using Replicator Neural Networks", "year": "2002" }, { "authors": "J An; S Cho", "journal": "", "ref_id": "b6", "title": "Variational autoencoder based anomaly detection using reconstruction probability", "year": "2015" }, { "authors": "F Angiulli", "journal": "ACM (TKDD)", "ref_id": "b7", "title": "CFOF: A Concentration Free Measure for Anomaly Detection", "year": "2020" }, { "authors": "F Liu; K Ting; Z Zhou", "journal": "ACM (TKDD)", "ref_id": "b8", "title": "Isolation-Based Anomaly Detection", "year": "2012" }, { "authors": "V Chandola; A Banerjee; V Kumar", "journal": "ACM Comput. Surv", "ref_id": "b9", "title": "Anomaly detection: A survey", "year": "2009" }, { "authors": "F Angiulli; F Fassetti", "journal": "ACM (TKDD)", "ref_id": "b10", "title": "DOLPHIN: an Efficient Algorithm for Mining Distance-Based Outliers in Very Large Datasets", "year": "2009" }, { "authors": "F Angiulli; C Pizzuti", "journal": "", "ref_id": "b11", "title": "Fast Outlier Detection in Large High-Dimensional Data Sets", "year": "2002" }, { "authors": "M Breunig; H Kriegel; R Ng; J Sander; Lof", "journal": "SIGMOD)", "ref_id": "b12", "title": "Identifying Density-based Local Outliers", "year": "2000" }, { "authors": "E Knorr; R Ng; V Tucakov", "journal": "VLDB Journal", "ref_id": "b13", "title": "Distance-Based Outlier: algorithms and applications", "year": "2000" }, { "authors": "F Angiulli; F Fassetti; L Ferragina", "journal": "", "ref_id": "b14", "title": "Improving Deep Unsupervised Anomaly Detection by Exploiting VAE Latent Space Distribution", "year": "2020" }, { "authors": "F Angiulli; F Fassetti; L Ferragina", "journal": "Machine Learning", "ref_id": "b15", "title": "LatentOut: an unsupervised deep anomaly detection approach exploiting latent space distribution", "year": "2022" }, { "authors": "F Angiulli; F Fassetti; L Ferragina", "journal": "", "ref_id": "b16", "title": "Detecting Anomalies with LatentOut: Novel Scores, Architectures, and Settings", "year": "2022" }, { "authors": "L Ruff; N Deecke; L Siddiqui; S Vandermeulen; R Binder; A ; E Kloft; M ", "journal": "", "ref_id": "b17", "title": "Deep One-Class Classification", "year": "2018" }, { "authors": "L Ruff; R Vandermeulen; N Binder; A ; E ; K Kloft; M ", "journal": "", "ref_id": "b18", "title": "Deep Semi-Supervised Anomaly Detection. 8th ICLR 2020", "year": "2020" }, { "authors": "F Angiulli; F Fassetti; L Ferragina; R Spada", "journal": "", "ref_id": "b19", "title": "Cooperative Deep Unsupervised Anomaly Detection", "year": "2022" }, { "authors": "L Ruff; J Kauffmann; R Vandermeulen; G Montavon; W Samek; M Kloft; T Dietterich; K ", "journal": "", "ref_id": "b20", "title": "A Unifying Review of Deep and Shallow Anomaly Detection", "year": "2021" }, { "authors": "T Schlegl; P Seeb Öck; S Waldstein; U Schmidt-Erfurth; G Langs", "journal": "IPMI", "ref_id": "b21", "title": "Unsupervised Anomaly Detection with Generative Adversarial Networks to Guide Marker Discovery", "year": "2017" }, { "authors": "Y Liu; Z Li; C Zhou; Y Jiang; J Sun; M Wang; X He", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b22", "title": "Generative Adversarial Active Learning for Unsupervised Outlier Detection", "year": "2020" }, { "authors": "S Akcay; A Atapour-Abarghouei; T Breckon; Ganomaly", "journal": "", "ref_id": "b23", "title": "Semi-Supervised Anomaly Detection via Adversarial Training", "year": "2018" }, { "authors": "T Schlegl; P Seeb Öck; S Waldstein; G Langs; U Schmidt-Erfurth; F-Anogan", "journal": "Medical Image Analysis", "ref_id": "b24", "title": "Fast Unsupervised Anomaly Detection with Generative Adversarial Networks", "year": "2019" }, { "authors": "G Pang; C Shen; L Cao; A Hengel", "journal": "ACM Comput. Surv", "ref_id": "b25", "title": "Deep Learning for Anomaly Detection: A Review", "year": "2021" }, { "authors": "P Sperl; J Schulze; K ", "journal": "", "ref_id": "b26", "title": "A 3 : Activation Anomaly Analysis", "year": "2020" }, { "authors": "B Zong; Q Song; M Min; W Cheng; C Lumezanu; D Cho; H Chen", "journal": "ICLR", "ref_id": "b27", "title": "Deep Autoencoding Gaussian Mixture Model for Unsupervised Anomaly Detection", "year": "2018" }, { "authors": "Y Lecun; C Cortes", "journal": "", "ref_id": "b28", "title": "MNIST handwritten digit database", "year": "2010" }, { "authors": "G Cohen; S Afshar; J Tapson; A Schaik; Em", "journal": "", "ref_id": "b29", "title": "NIST: an extension of MNIST to handwritten letters", "year": "2017" }, { "authors": "H Xiao; K Rasul; R Vollgraf", "journal": "", "ref_id": "b30", "title": "Fashion-MNIST: a Novel Image Dataset for Benchmarking Machine Learning Algorithms", "year": "2017" }, { "authors": "A Krizhevsky; V Nair; G Hinton", "journal": "", "ref_id": "b31", "title": "CIFAR-10", "year": "" }, { "authors": "L Davies; U Gather", "journal": "Journal Of The American Statistical Association", "ref_id": "b32", "title": "The identification of multiple outliers", "year": "1993" }, { "authors": "V Barnett; T Lewis", "journal": "John Wiley & Sons", "ref_id": "b33", "title": "Outliers in Statistical Data", "year": "1994" }, { "authors": "W Jin; A Tung; J Han", "journal": "ACM (KDD)", "ref_id": "b34", "title": "Mining Top-n Local Outliers in Large Databases", "year": "2001" }, { "authors": "V Hautamäki; I Kärkkäinen; P Fränti", "journal": "", "ref_id": "b35", "title": "Outlier Detection Using k-Nearest Neighbour Graph", "year": "2004" }, { "authors": "M Radovanović; A Nanopoulos; M Ivanović", "journal": "IEEE (TKDE)", "ref_id": "b36", "title": "Reverse Nearest Neighbors in Unsupervised Distance-Based Outlier Detection", "year": "2015" }, { "authors": "F Angiulli", "journal": "ECMLPKDD", "ref_id": "b37", "title": "Concentration Free Outlier Detection", "year": "2017" }, { "authors": "H Kriegel; M Schubert; A Zimek", "journal": "", "ref_id": "b38", "title": "Angle-based outlier detection in high-dimensional data", "year": "2008" }, { "authors": "S Rayana", "journal": "", "ref_id": "b39", "title": "ODDS Library", "year": "2017" }, { "authors": "G Pang; C Shen; A Hengel", "journal": "", "ref_id": "b40", "title": "Deep Anomaly Detection with Deviation Networks", "year": "2019" } ]
[ { "formula_coordinates": [ 3, 134.91, 142.96, 78.19, 14.11 ], "formula_id": "formula_0", "formula_text": "L(x) = ||x -x|| 2 2 ." }, { "formula_coordinates": [ 3, 60.07, 389.75, 239.93, 12.62 ], "formula_id": "formula_1", "formula_text": "L F (x) = (1 -y) • ||x -x|| 2 + λ • y • ||F (x) -x|| 2 ,(2)" }, { "formula_coordinates": [ 5, 388.11, 548.54, 99.79, 22.31 ], "formula_id": "formula_2", "formula_text": "λ(α) = 1 + α n s -1 ," } ]
10.1145/3581783.3611970
2023-08-04
[ { "figure_ref": [], "heading": "Unaligned", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Noise", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "RGB", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Multimodal learning in practice", "publication_ref": [], "table_ref": [], "text": "Modality X" }, { "figure_ref": [], "heading": "Mining Semantics", "publication_ref": [ "b50", "b55", "b59", "b60", "b4", "b17", "b34", "b44", "b7", "b8", "b43", "b60", "b77" ], "table_ref": [], "text": "Variation Consistency Noise Our solution Figure 1: We consider the multimodal learning scenario in practice. Due to environmental factors and calibration bias, the complementary modality may be biased and not wellaligned with the RGB camera, making the fusion process challenging in a real-world setting. To deal with this practical issue, we propose a robust segmentation pipeline by mining the cross-modal semantics. Our method leads to preserved multimodal consistency, while pulling the modality-specific features in opposite directions to maximize the joint entropy, making our fusion design efficient and robust.\n(SOTA) calibration methods cannot ensure a perfect alignment across sensors, making multimodal tasks challenging.\nIn the literature, there have been various fusion methods proposed for effectively merging multimodal clues [51,56,60,61]. Many of them assume that the multi-sensor clues are heterogeneous, and can be directly merged to maximize the joint entropy [5,18]. However, this design may have limitations, as it can also consider misleading noise as useful clues, leading to biased predictions. Therefore, a method that can effectively and robustly segment objects with any complementary clues is highly demanded.\nIn this work, we propose a novel approach for exploring the relationship between multi-sensor inputs by mining the cross-modal semantics, as shown in Figure 1. Our motivation stems from the observation that multimodal features, despite their modal specificity, inherently contain shared representations that are robust to measurements and/or calibration errors. Building upon this observation, we aim to leverage cross-modal consistency to guide the fusion of variant features that are specific to each input modality.\nTo achieve our objective, we first begin by explicitly decoupling the modality-shared and modality-specific features, treating them as separate entities during our modeling process. Drawing on the sensor denoising approaches suggested in previous works [35,45], we adopt averaging as a popular method for mitigating the effects of shot, speckle, and ambient noise, which are known to impact accuracy. As such, we decompose each feature into two distinct components -mean and variance. The mean component of the feature map captures the shared consistency in a broader context, making it more robust to noise. Meanwhile, the variance component represents the relative modality-specific variation that may vary across modalities and are more susceptible to noise. Next, we employ an all-round attentive fusion strategy to process these two components. On one hand, based on the shared representation, we analyze the inner correlation between multimodal inputs and generate a learnable weight that balances the contributions of each modality to form the fused output. We expect less accurate depth or infrared features to exhibit lower similarity compared to the RGB input, and consequently, contribute less to the fused output, and vice versa. On the other hand, considering the modality-specific features, which may also be noisy, we aim to determine which regions and patterns should be taken into account. By mining the cross-modal semantics, our fusion block enables a more effective feature modeling approach, retaining only the most informative modality-specific clues to maximize the joint entropy, while being robust to sensor noise.\nSecond, we also address the architectural aspect of our model. The U-shape skip connection has demonstrated outstanding results in object segmentation [8,9,44,61,78]. However, in the realm of multimodal fusion, existing approaches with conventional one-toone correspondence may not fully exploit the potential of sensor fusion. To overcome this limitation, we propose a novel two-stage coarse-to-fine decoder. Initially, the feature map is decoded based on the shared semantics to estimate a rough object mask. Subsequently, the mask is further refined through cross-modal mining, resulting in a more discriminative output retaining the most informative clues from each input source.\nThird, to improve the network learning process, we introduce constraints on the semantic consistency across the decoding layers. We postulate that despite the spatial variations depending upon the network depth, neighboring layers should still carry semantically related attributes. To achieve this goal, we gradually group the decoder outputs in pairs, forming low-, middle-, and high-level outputs, and then minimize the Kullback-Leibler divergence between them. This allows us to improve the stability and interpretability of feature decoding with minimal additional learning costs.\nTo conclude, our contributions can be summarized as follow:\n• We propose a novel all-round attentive fusion by mining the cross-modal semantics in an explicit manner, which improves the accuracy and robustness of object segmentation. • We introduce a two-stage decoder that combines convolutionbased operations with cross-modal querying. The coarse-tofine detection pipeline leads to a more discriminative output with a clearer contour. • We further employ constraints on the semantic consistency across the decoding layers by minimizing the cross-level divergence, leading to improved learning stability and interpretability with minimal costs. • Our network sets new SOTA records on multimodal object segmentation with both depth and infrared inputs. We also validate our robustness in challenging scenes with suboptimal and/or misaligned inputs." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b6", "b12", "b59", "b84", "b12", "b27", "b84", "b7", "b24", "b28", "b83", "b81", "b82", "b20", "b61", "b63", "b67", "b18", "b46", "b62", "b73", "b16", "b45", "b53", "b78", "b85", "b86", "b50", "b87", "b51", "b8", "b20", "b61", "b63", "b67", "b9", "b63", "b51" ], "table_ref": [], "text": "Object Segmentation with Depth Clues: Deep learning-based RGB-D networks [7,13,60,85] have shown promising performance in object segmentation tasks by leveraging depth clues for improved scene understanding. Most existing works assume spatial alignment between RGB and depth images, based on which various fusion blocks have been proposed such as early fusion [13], middle fusion [28,85], late fusion [8,25], and output fusion [29,84]. Nevertheless, such an assumption is not always the case in reality due to the sensor calibration error. To avoid the alignment issue, other works introduce depth-free modeling during testing by leveraging depth supervision during training [82,83]. Sharing the same motivation, recent research proposes to generate pseudo-depth from RGB inputs [21,62,64,68]. Nevertheless, in such a case, the quality of input data has been overlooked, and pseudo-depth may also suffer from domain gap issues. Several alternatives have proposed robust fusion strategies that consider input data quality, such as using attention mechanisms for feature calibration [19,47,63,74]. However, these approaches do not explicitly differentiate modality-specific and shared clues during fusion, which can lead to inefficient RGB-D integration. In contrast, our work fully leverages consistency across multimodal inputs to merge modality-specific clues, resulting in a more robust and effective fusion design for object segmentation.\nObject Segmentation with Thermal Clues: Recently, infrared images have gained research attention for object segmentation [17,46,54,79,86,87], as they capture the thermal radiation emitted by objects, providing temperature information that can make objects distinguishable. Similar to RGB-D methods, learning-based RGB-T methods have achieved dominant performance in object segmentation. [51] suggests mining and modeling cross-modal interactions through channel attention, [88] proposes a lightweight model for real-time applications. [52] analyzes the RGB-T performance under the unaligned settings. [9] further studies the necessity of thermal clues based on the illuminance score. In contrast, our work computes the similarity between modality-shared features among RGB-T features and employs all-round fusion to fully benefit from multimodal inputs. By mining the cross-modal semantics, our network enables more robust and accurate object segmentation.\nChallenging Scenes: In this paper, we additionally conduct experiments on challenging scenes with inaccurate depth and/or unaligned thermal inputs. To mimic inaccurate depth, we leverage off-the-shelf depth estimation methods following previous works [21,62,64,68], which generate more realistic but noisy depth due to domain gap. We explicitly validate our method's effectiveness on camouflage datasets [10,64], which contain concealed scenes that are challenging for object segmentation. As for RGB-T inputs, since there is no existing thermal estimation method, we conduct experiments with unaligned inputs following [52]. The quantitative results validate the effectiveness and robustness of our method." }, { "figure_ref": [ "fig_0" ], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "For ease and brevity of reading, in this section, we take Depth as the auxiliary modality as an example, since the RGB-T model follows the same pipeline. Given an input image 𝐼 with size 𝐼 ∈ R 3×𝐻 ×𝑊 , our objective is to segment the target object with the help of the depth clues 𝐷, which is resized to be the same resolution as 𝐼 from the input side. As shown in Figure 2, 𝐼, 𝐷 are fed into parallel encoders and output multi-scale encoded features 𝐹 𝐼 𝑖 , 𝐹 𝐷 𝑖 , where 𝑖 stands for the number of encoder layers. At each scale, the encoded features are fed together into the all-round attentive fusion block (AF) to generate the shared output. After that, these features are later processed and refined by the coarse-to-fine decoder (CFD) to estimate the object's location. To supervise the learning pipeline end-to-end, we leverage multi-scale supervision with the help of the ground truth mask 𝐺. Moreover, we explore the semantic consistency across different levels with respect to the network depth, which improves the network stability and interpretability." }, { "figure_ref": [], "heading": "All-Round Attentive Fusion", "publication_ref": [ "b62", "b83", "b18", "b81", "b75", "b77", "b71" ], "table_ref": [], "text": "We observe that modality-shared features exhibit a strong correlation with scene semantics, suggesting a natural way to analyze cross-modal consistency. On the other hand, the residual part of the features may contain both discriminative modality-specific variations that contribute to segmentation and noise that hinders accurate predictions. Building upon this observation, we propose an all-round attentive fusion approach that mines the cross-modal semantics while respecting the inner consistency to maximize joint entropy and attenuate the impact of noise. Adjusting Modal Proportion: Taking the encoded feature (𝑓 𝐼 𝑖 , 𝑓 𝐷 𝑖 ) of the 𝑖 𝑡ℎ layer as an example, we first decompose it into two complementary components, i.e., the mean encodings (𝑚 𝐼 𝑖 , 𝑚 𝐷 𝑖 ) and the residual variance encoding (𝑣 𝐼 𝑖 , 𝑣 𝐷 𝑖 ). The mean encodings are with shape R 𝑐 ×1×1 and are obtained by performing global average pooling (GAP) on the input features, making the representation more robust to noise. Then, the mean encodings are fused together to generate the shared underlying features 𝑚 𝑠 ∈ R 𝑐 ×1×1 of the scene. Mathematically, we have: [63,84], channel attention [19,82], and both [76,78]. However, existing works often directly compute the attention from the input features, without explicitly modeling the cross-modal semantics. Differently, we decompose each modality into shared and specific components and perform all-round fusion by adjusting the proportion, region, and pattern, depending upon the quality.\n𝑚 𝑆 𝑖 = 𝑀𝐿𝑃 (𝑚 𝐼 𝑖 ⊗ 𝑚 𝐷 𝑖 ), 𝑚 𝐼 𝑖 = 𝐺𝐴𝑃 (𝑓 𝐼 𝑖 ), 𝑚 𝐷 𝑖 = 𝐺𝐴𝑃 (𝑓 𝐷 𝑖 ),(1)\nwhere MLP denotes multi-layer perceptron and ⊗ is the matrix multiplication. Therefore, we can obtain the confidence score (𝛼 𝑖 , 1-𝛼 𝑖 ) referring to the proportion of each modal contribution. Formally, we obtain 𝛼 𝑖 by computing the cosine similarity:\n𝛼 𝑖 = 𝑎 𝑖 𝑎 𝑖 + 𝑏 𝑖 , 𝑎 𝑖 = 𝑐𝑜𝑠𝑖𝑛𝑒 (𝑚 𝑆 𝑖 , 𝑚 𝐼 𝑖 ), 𝑏 𝑖 = 𝑐𝑜𝑠𝑖𝑛𝑒 (𝑚 𝑆 𝑖 , 𝑚 𝐷 𝑖 ).(2)\nRegion-wise Modeling: The variance encodings are with shape R 𝑐 ×ℎ×𝑤 and are obtained by the residual subtraction. They are supposed to contain valuable modality-specific clues but are also sensitive to noise. Therefore, we aim to preserve and enhance the most informative clues, while minimizing the inherent noisy response. To achieve this goal, we propose trio spatial attention (TSA) and trio channel attention (TCA) to improve the feature modeling pattern-wise and region-wise, respectively. Our TSA follows a hybrid design of \"max-pooling + average-pooling + convolution\". The maxaverage branches contribute to preserving the effective and global clues, respectively, while the convolutional branch constrains network attention to local regions to alleviate ambiguity. After learning the spatial maps from each input, we enable cross-modal interaction by concatenation, and generate the final spatial-wise calibration maps (𝑠 𝐼 𝑖 , 𝑠 𝐷 𝑖 ) by convolution. Formally, we have:\n𝑠 ′𝐼 𝑖 , 𝑠 ′𝐷 𝑖 = 𝑐ℎ𝑢𝑛𝑘 (𝐶𝐶 (𝑇 𝑆𝐴(𝑣 𝐼 𝑖 ),𝑇 𝑆𝐴(𝑣 𝐷 𝑖 ))), 𝑠 𝐼 𝑖 = 𝜎 (𝑠 ′𝐼 𝑖 ), 𝑠 𝐷 𝑖 = 𝜎 (𝑠 ′𝐷 𝑖 ),(3)\nwhere 𝐶𝐶 stands for concat-conv and 𝜎 is the sigmoid function.\nPattern-wise Modeling: As for channel dimension, our TCA follows the same philosophy by replacing the convolution with the gating function adopted from [72]. We obtain the channel-wise calibration maps (𝑐 𝐼 𝑖 , 𝑐 𝐷 𝑖 ) by:\n𝑐 ′𝐼 𝑖 , 𝑐 ′𝐷 𝑖 = 𝑐ℎ𝑢𝑛𝑘 (𝐶𝑠𝐶 (𝑇𝐶𝐴(𝑣 𝐼 𝑖 ),𝑇𝐶𝐴(𝑣 𝐷 𝑖 ))), 𝑐 𝐼 𝑖 = 𝜎 (𝑐 ′𝐼 𝑖 ), 𝑐 𝐷 𝑖 = 𝜎 (𝑐 ′𝐷 𝑖 ),(4)\nwhere 𝐶𝑠𝐶 stands for concat-shuffle-conv. Finally, we can obtain the shared output 𝑓 𝑆 𝑖 by:\n𝑓 ′𝐼 𝑖 = 𝛼 𝑖 • 𝑠 𝐼 𝑖 ⊗ 𝑐 𝐼 𝑖 ⊗ 𝑓 𝐼 𝑖 , 𝑓 ′𝐷 𝑖 = (1 -𝛼 𝑖 ) • 𝑠 𝐷 𝑖 ⊗ 𝑐 𝐷 𝑖 ⊗ 𝑓 𝐷 𝑖 , 𝑓 𝑆 𝑖 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑀𝐿𝑃 (𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑐𝑎𝑡 (𝑓 ′𝐼 𝑖 , 𝑓 ′𝐷 𝑖 ))))(5)\nwhere 𝑀𝐿𝑃 stands for the multi-layer perceptron with the required size arrangement. Starting from the second layer (𝑖 > 1), we merge the output 𝑓 𝑆 𝑖 with the previous level output 𝑓 𝑆 𝑖 -1 with concat-conv." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2" ], "heading": "Coarse-to-Fine Decoder", "publication_ref": [ "b18", "b62", "b83", "b84", "b41", "b80", "b70", "b3", "b31", "b31", "b58" ], "table_ref": [], "text": "As a multimodal pipeline, the decoder aims to leverage both modalityspecific and shared-learning representations to accurately generate the output. Many existing works [19,63] are only based on the shared learning network, neglecting the rich modality-specific features for the decoder. Several recent models [84,85] introduce triple decoders with both specific and shared networks, at the cost of increased learning complexity. In contrast, we propose a novel decoder that initially estimates the object's location based on shared features and then refines it using our modality-aware querying to further mine the cross-modal semantics, yielding a lightweight yet efficient manner for object segmentation. Initial Prediction: Our initial prediction block consists of a Local-Global Modeling (LGM) block, a Feature Merging (FM) block, and a prediction Head. The LGM block aims to improve the feature modeling with global and local awareness while being lightweight.\nInspired by the success of inverted residual block from MobileNets [42], we first project the input feature map into a higher-dimension latent space by 1 × 1 convolution and then divide the projected feature into several subparts along the channel dimension. Each subpart is processed separately with maxpooling operation of different receptive fields. Our design shares the same motivation as PSPNet [81] and ASPP [71] that we all aim to leverage multi-scale global awareness. Differently, instead of using convolution, we only employ pooling operations which do not add extra learning parameters. The maxpooling operation can also contribute to preserving the most informative clues. We pad the feature map with respect to the pooling window so that the output resolution remains unchanged. Further, we employ a combination of depthwise separate convolution (DSConv) to add locality into the network since fine-grained details are vital for accurate segmentation. Finally, the obtained feature is fed into a 1×1 convolution to enable information exchanges across channels and to reduce the dimension. Detailed feature flow can be found in Figure 4. Formally, let 𝑓 𝑆 𝑖 be the input of LGM block of 𝑖 𝑡ℎ decoding layer, the output 𝐿𝐺 𝑖 is obtained by:\n𝑀𝑃 1 , 𝑀𝑃 2 , ..., 𝑀𝑃 𝑁 = 𝑀𝑎𝑥𝑃𝑜𝑜𝑙𝑖𝑛𝑔(𝑐ℎ𝑢𝑛𝑘 (𝐶𝑜𝑛𝑣 1×1 (𝑓 𝑆 𝑖 ))). 𝐿𝐺 𝑖 = 𝐶𝑜𝑛𝑣 1×1 (𝐷𝑆𝐶𝑜𝑛𝑣 3×3 (𝐶𝑜𝑛𝑐𝑎𝑡 (𝑀𝑃 1 , 𝑀𝑃 2 , ..., 𝑀𝑃 𝑁 ))).(6)\nAside from the LGM block, we propose an FM block that enables feature interaction between different decoding layers to benefit from the multi-granularity properties. As shown in Figure 4, the lower-resolution feature 𝑓 𝑖+1 is upsampled, multiplied with the higher-resolution feature 𝐿𝐺 𝑖 , and fed into the MLP block to generate the initial feature 𝑓 𝑖 . When 𝑖 = 4, i.e., the deepest layer, we replace the lower-resolution input of the FM block with the averaged feature. Finally, we employ the detection 𝐻𝑒𝑎𝑑 composed of one 𝐶𝑜𝑛𝑣 3×3 and one 𝐶𝑜𝑛𝑣 1×1 , as shown in Figure 4, for the initial prediction 𝑝 𝑖 . Formally, we have:\n𝑓 𝑖 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑀𝐿𝑃 (𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑓 𝑖+1 ⊗ 𝐿𝐺 𝑖 ))), 𝑝 𝑖 = 𝐻𝑒𝑎𝑑 (𝑓 𝑖 ), (7\n)\nwhere 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 is the manipulation of the feature shape. Modality-Aware Querying: Based on the shared features, it is possible to obtain a rough but unprecise segmentation mask. Therefore, we propose a novel masked semantic mining attention, termed MS-A, with modality-aware querying. Our query is generated by mining the semantics across RGB and depth feature (𝑓 𝐼 𝑖 , 𝑓 𝐷 𝑖 ), while the key and value are computed and masked from the shared feature by 𝑝 𝑖 ⊗ 𝑓 𝑖 . Formally, our MS-A can be formulated as:\n𝑄 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝐶 (𝐶𝑜𝑛𝑣 1×1 (𝑓 𝐼 𝑖 ), 𝐶𝑜𝑛𝑣 1×1 (𝑓 𝐷 𝑖 ))), 𝐾 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑣 1×1 (𝑝 𝑖 ⊗ 𝑓 𝑖 )), 𝑉 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑣 1×1 (𝑝 𝑖 ⊗ 𝑓 𝑖 )), 𝑀𝑆-𝐴(𝑄, 𝐾, 𝑉 ) = 𝑠𝑜 𝑓 𝑡𝑚𝑎𝑥 ( 𝑄𝐾 𝑇 √︁ 𝑑 𝑘 )𝑉 ,(8)\nwhere 𝑄, 𝐾, 𝑉 stands for query, key, and value matrices. Our masked attention specifically focuses on the channel dimension for three reasons: (1) it requires very few learning parameters compared to other spatial counterparts [4,32,32,59]; (2) we leverage modalityaware querying to mine the cross-modal semantics, making the attention map less sensitive to inherent sensor noise; (3) this design aligns with our goal that the prediction mask can only be refined but not enlarged, once again improving the robustness. With the help of our MS-A, we can output the refined feature map 𝑓 𝑜 𝑖 by:\n𝑓 𝑜 𝑖 = 𝑀𝑆-𝐴(𝑄, 𝐾, 𝑉 ) + 𝑝 𝑖 ⊗ 𝑓 𝑖 .(9)" }, { "figure_ref": [], "heading": "Objective Function", "publication_ref": [ "b62", "b84" ], "table_ref": [], "text": "Our XMSNet is end-to-end trainable and is only supervised by the GT mask 𝐺. Following previous works [63,85], we adopt mixed objectives function L consists of weighted binary cross-entropy loss L 𝑤𝑏𝑐𝑒 and the IoU loss L 𝐼𝑜𝑈 as follow:\nL (.) = L 𝑤𝑏𝑐𝑒 (.) + L 𝐼𝑜𝑈 (.).(10)\nWe first employ multi-scale loss L 𝑚𝑠 to supervise the initial predictions 𝑝 𝑖 from different decoding layers to fully benefit from the hierarchical information. We have:\nL 𝑚𝑠 = 4 ∑︁ 𝑖=1 𝜆 𝑖 • L (𝑝 𝑖 , 𝐺),(11)\nwhere 𝜆 𝑖 are the weighting hyperparameters.\nAside from this, we also leverage a multi-level loss L 𝑚𝑙 . Our assumption is that neighboring layers should carry closely-related attributes, making it possible to group decoded outputs in pairs, forming low-, middle-, and high-level outputs. Technically, taking the low-level output 𝑝 𝑙𝑜𝑤 as an we can obtain it by:\n𝑝 𝑙𝑜𝑤 = 𝐻𝑒𝑎𝑑 (𝑓 𝑙𝑜𝑤 ), 𝑓 𝑙𝑜𝑤 = 𝐶𝐶 (𝑓 𝑜 4 , 𝑓 𝑜 3 ). (12\n)\nThen, the 𝑓 𝑙𝑜𝑤 is fused with the next layer output, gradually forming the middle-level output 𝑓 𝑚𝑖𝑑 and high-level output 𝑓 ℎ𝑖𝑔ℎ , from which we compute the refined masks 𝑝 𝑚𝑖𝑑 and 𝑝 ℎ𝑖𝑔ℎ , respectively. Mathematically, our multi-level loss L 𝑚𝑙 is formulated as:\nL 𝑚𝑙 = L (𝑝 𝑙𝑜𝑤 , 𝐺) + 𝛽 1 • L (𝑝 𝑚𝑖𝑑 , 𝐺) + 𝛽 2 • L (𝑝 ℎ𝑖𝑔ℎ , 𝐺), (13\n)\nwhere 𝛽 1 and 𝛽 2 are the weighting parameters. Moreover, from a macroscopic view, there should exist a semantic consistency across different levels despite the resolution differences. Hence, we employ Kullback-Leibler divergence (𝐾𝐿) to force semantic consistency. Formally, our divergence loss L 𝑑𝑖𝑣 becomes:\nL 𝑑𝑖𝑣 = L 𝐾𝐿 (𝑝 𝑙𝑜𝑤 , 𝑝 𝑚𝑖𝑑 ) + L 𝐾𝐿 (𝑝 𝑚𝑖𝑑 , 𝑝 ℎ𝑖𝑔ℎ ), L 𝐾𝐿 (𝐴, 𝐵) = 𝐾𝐿(𝐴||𝐵) + 𝐾𝐿(𝐵||𝐴).(14)\nHence, our overall losses function L 𝑎𝑙𝑙 becomes:\nL 𝑎𝑙𝑙 = L 𝑚𝑠 + 𝛾 1 • L 𝑚𝑙 + 𝛾 2 • L 𝑑𝑖𝑣 .(15)\nwhere 𝛾 1 and 𝛾 2 are the weighting parameters. The ablation study on the hyperparameters can be found in the supplementary material." }, { "figure_ref": [], "heading": "EXPERIMENTS 4.1 Experimental Settings", "publication_ref": [ "b11", "b18", "b62", "b84", "b21", "b38", "b38", "b21", "b36", "b10", "b35", "b40", "b8", "b17", "b29", "b53", "b52", "b53", "b56", "b51", "b35", "b40", "b63", "b9", "b22", "b22", "b42", "b9", "b33", "b63", "b84", "b58", "b84" ], "table_ref": [ "tab_3" ], "text": "Benchmark Datasets: We present experimental results on both RGB-D and RGB-T salient object detection (SOD) datasets to validate our effectiveness. For the RGB-D SOD task, we follow the standard protocol of previous works [12,19,63,85] and select 1,485 samples from NJU2K [22] and 700 samples from NLPR [39] for training. We evaluate our model on four widely used RGB-D SOD datasets, including NLPR-Test [39], NJUK-Test [22], STERE [37], and SIP [11]. To analyze the robustness against noisy measurements, we also evaluate our model with pseudo-depth inputs [36,41].\nFor the RGB-T SOD task, we follow the conventional train/val split as used in previous works [9,18,30,54] and evaluate our method on three widely used RGB-T datasets: VT5000 [53], VT1000 [54], and VT821 [57]. To analyze the robustness against sensor misalignment, we also test on the biased datasets from [52].\nIn addition, we evaluate our method on the challenging task of camouflage object detection (COD), using pseudo-depth inputs, following previous works [36,41,64]. We use 3,040 images from COD10K [10] and 1,000 images from CAMO [23] for training, and test on four COD benchmark datasets: CAMO-Test [23], CHAM. [43], COD10K-Test [10], and NC4K [34]. Evaluation Metrics: We use four widely used evaluation metrics, namely Mean absolute error (𝑀), max F-measure (𝐹 𝑚 ), S-measure (𝑆 𝑚 ), and E-measure (𝐸 𝑚 ), as commonly used in previous works [64,85]. More details can be found in the supplementary material. Implementation Details: Our model is implemented based on Pytorch with a V100 GPU. We chose pretrained PVT [59] as our backbone. Detailed comparisons with different backbones can be found in Table 4. The input dimension is set to 384 × 384. The Adam algorithm is adopted as an optimizer. The initial learning rate is set to 1e-4 which is divided by 10 every 60 epochs. Follow [85], we use common data augmentation operations. The network is trained for 200 epochs with the same hyperparameters settings for each task." }, { "figure_ref": [ "fig_3" ], "heading": "Comparison with RGB-D/T Inputs", "publication_ref": [ "b63", "b67", "b39", "b40", "b11", "b79", "b55", "b19", "b37", "b51", "b51", "b31" ], "table_ref": [ "tab_0", "tab_1", "tab_1" ], "text": "Quantitative Comparison w/ Ground Truth Depth: We present our performance on RGB-D SOD benchmarks in Table 1. When using ground truth depth, our method achieves significantly superior performance over all the counterparts by a large margin. The superior performance validates the effectiveness of our XMSNet.\nQuantitative Comparison w/o Ground Truth Depth: We also evaluate our method by replacing GT depth with source-free depth. Following previous works [64,68], we generate pseudo-depth from the input RGB image [40,41]. We retrain three RGB-D methods, i.e., BBSNet [12], DFMNet [80], and DCMF [56], as well as two state-of-the-art segmentation methods with RGB-only inputs, i.e., SegMAR [20] and ZoomNet [38]. It can be seen that most existing RGB-D methods fail to perform accurately when dealing with pseudo-depth, and the RGB-only methods provide reasonable but far from satisfactory performance. In contrast, our XMSNet still leverages pseudo-depth clues and provides promising results. This is attributed to our design of mining cross-modal semantics, which eliminates misleading noise in the depth map, yielding superior robustness and efficiency compared to all other counterparts. Quantitative Comparison w/ Aligned Thermal Image: Table 2 displays the performance of our method on RGB-T SOD benchmarks, where our approach significantly outperforms all counterparts on every dataset and metric.\nQuantitative Comparison w/o Aligned Thermal Image: We conducted experiments on the unaligned datasets [52], where misalignment was generated through random spatial affine transformation. The results are presented in Table 2. Our proposed method achieved outstanding performance, outperforming DCNet [52], which introduced a specific modality alignment module. Our approach highlights the significance of mining cross-modal semantics, which leads to stable performance. While SwinNet [32], the secondbest approach, leverages transformer attention for feature fusion, it only aims to maximize joint entropy and may classify unaligned features as useful modality-specific clues. In contrast, our approach explicitly models cross-modal consistency to guide multimodal fusion, resulting in better robustness against misalignment.\nQualitative Comparison: We provide qualitative results on RGB-D/T benchmark datasets in Figure 5. Our method efficiently distinguishes misleading and useful clues during multimodal fusion, either for RGB-D or RGB-T inputs." }, { "figure_ref": [], "heading": "Performance on the Camouflaged Scenes", "publication_ref": [ "b9", "b76", "b63", "b66" ], "table_ref": [ "tab_2" ], "text": "Camouflaged object detection (COD) is a challenging task that has recently drawn great research attention [10,77]. As the object is concealed from the background, the COD task is inherently difficult.\nWhile multimodal clues such as depth have been shown to be useful for object segmentation [64,67], the lack of ground truth inputs, such as the depth map, makes it necessary to use off-theshelf depth estimation methods, which introduces noise due to the domain gap. As shown in Table 3, existing RGB-D methods with pseudo-depth perform worse than RGB-only methods. However, our method outperforms both RGB-D and RGB-only methods in all COD benchmarks, demonstrating our ability to efficiently leverage pseudo-depth despite several noisy representations. Qualitative comparisons can be found in the supplementary material." }, { "figure_ref": [ "fig_4", "fig_5", "fig_6" ], "heading": "Ablation Study", "publication_ref": [ "b14", "b30", "b58", "b62", "b72", "b84" ], "table_ref": [ "tab_3", "tab_4", "tab_5" ], "text": "Different Backbones: We present in Table 4 our performance with different backbones such as (R) ResNet [15], (C) ConvNext [31], and (P) PVT [59]. Our model outperforms all the counterparts with the same backbone, validating our effectiveness. The comparison of the model sizes can be found in Figure 6, where our (R), (C), and (P) variants require 233MB, 727MB, and 670MB, respectively. It 5. Gradually adding our proposed block leads to improved performance. To enhance our analysis, we replaced fusion and masked attention components with the SOTA counterparts [63,73,85]. The results demonstrate that such replacements resulted in a deterioration of performance, further confirming the effectiveness of our proposed methods. For a more comprehensive understanding of each proposed block's contribution, we provide visualizations of our encoded, fused, refined, and consistency-constrained features in Figure 7.\nAblation Study on the Fusion: We present a detailed study on the fusion block, comprising trio spatial attention (TSA), trio channel attention (TCA), and the proportion 𝛼, in Table 6. By gradually removing other components, we validate the effectiveness of each element, with our full fusion design achieving the best performance. Additionally, we conducted a comprehensive analysis by replacing the trio attention with each basic attention branch. The observed deteriorated performance highlights the significance of our complete trio branches attention design. For a better understanding of our fusion block's functionality, we provide attention visualizations in Figure 8. When dealing with unaligned input, our TSA leverages global clues, such as shape and contour, while the TCA focuses on feature alignment across modalities.\nOverall Modal Contribution: During testing on the RGB-X SOD dataset, consisting of 2729 RGB-D samples and 4321 RGB-T samples, we observed that RGB features were assigned higher weights in 1858 images (68.1% of the cases) for RGB-D inputs, and 3407 images (78.8% of the cases) for RGB-T inputs. This result confirms our initial hypothesis that depth or thermal clues may contain noise or misalignment compared to RGB inputs, and hence should generally contribute less to the shared output." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "We demonstrate a successful case of mining cross-modal semantics for object segmentation. In this paper, we leverage the modalityshared consistency to guide the fusion of modality-specific variation, making the fusion design more robust to sensor noise and misalignment. Further, we design a coarse-to-fine decoder that fully benefits from the multimodal clues to strengthen the feature discriminability. Finally, we add restrictions on the decoded outputs to ensure semantic consistency across different layers, yielding a simple yet efficient manner to employ the network hierarchies.\nExhaustive experiments on RGB-D and RGB-T SOD and COD benchmarks, with both GT and inferior inputs, validate the effectiveness, generalization, and robustness of our XMSNet." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This research is financed in part by the Alexander von Humboldt Foundation and the Conseil Régional de Bourgogne-Franche-Comté. The authors thank the anonymous reviewers and ACs for their tremendous efforts and helpful comments." } ]
Multi-sensor clues have shown promise for object segmentation, but inherent noise in each sensor, as well as the calibration error in practice, may bias the segmentation accuracy. In this paper, we propose a novel approach by mining the Cross-Modal Semantics to guide the fusion and decoding of multimodal features, with the aim of controlling the modal contribution based on relative entropy. We explore semantics among the multimodal inputs in two aspects: the modality-shared consistency and the modality-specific variation. Specifically, we propose a novel network, termed XMSNet, consisting of (1) all-round attentive fusion (AF), (2) coarse-to-fine decoder (CFD), and (3) cross-layer self-supervision. On the one hand, the AF block explicitly dissociates the shared and specific representation and learns to weight the modal contribution by adjusting the proportion, region, and pattern, depending upon the quality. On the other hand, our CFD initially decodes the shared feature and then refines the output through specificity-aware querying. Further, we enforce semantic consistency across the decoding layers to enable interaction across network hierarchies, improving feature discriminability. Exhaustive comparison on eleven datasets with depth or thermal clues, and on two challenging tasks, namely salient and
Object Segmentation by Mining Cross-Modal Semantics
[ { "figure_caption": "Figure 2 :2Figure 2: (a) Overall architecture of XMSNet. Our network relies on standard backbones to extract RGB and depth features in a parallel manner. Then, the multimodal clues are attentively fused and decoded to segment the target object. (b) Details on the fusion design. We propose to leverage the semantics within the multimodal features before outputting the fused representation, by explicitly modeling the shared and specific components. (c) Decoder pipeline. We introduce a coarse-to-fine decoding strategy by first predicting the mask based on the shared representation, then refining it with modality-aware querying. By mining the cross-modal semantics, our network enables a more robust and efficient fusion architecture for object segmentation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Fusion comparison. Attention mechanisms have been proven for multimodal fusion, such as spatial attention[63,84], channel attention[19,82], and both[76,78]. However, existing works often directly compute the attention from the input features, without explicitly modeling the cross-modal semantics. Differently, we decompose each modality into shared and specific components and perform all-round fusion by adjusting the proportion, region, and pattern, depending upon the quality.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Our proposed coarse-to-fine decoder. Based on the shared representation, we first estimate the rough segmentation mask by employing wider receptive fields (LGB) and network hierarchies (FM). Then, the rough segmentation is refined with masked attention which aims to leverage the cross-modal semantics for improved discriminability.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative comparison on RGB-D/T SOD benchmarks. Our network can efficiently leverage both visual and additional clues from supporting modality to accurately segment the target object which is closer to the ground truth.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Model size comparison. Our networks lead to the best trade-off between efficiency and performance.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Feature visualization. Red value denotes the modal contribution. Please, zoom in for more details.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Attention Visualization under Unaligned Setting.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison on RGB-D SOD datasets with both GT depth and pseudo-depth. ↑ (↓) denotes that the higher (lower) is better. Bold denotes the best performance.𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑", "figure_data": "Dataset Metric 𝑀 ↓ Oracle setting -Performance of Models Trained w/ GT Depth NLPR [39] Public.NJUK [22]STERE [37]SIP [11]𝐶𝑉 𝑃𝑅 21 [19] DCF.022 .918.924.958.036 .922.912.946.039 .911.902.940.052 .899.876.916𝐼𝐶𝐶𝑉 21 [85] SPNet.021 .925.927.959.028 .935.925.954.037 .915.907.944.043 .916.894.930𝑀𝑀 21 [33]TriTrans.020 .923.928.960.030 .926.920.925.033 .911.908.927.043 .898.886.924𝐸𝐶𝐶𝑉 22 [84] MVSalNet .022 .931.930.960.036 .923.912.944.036 .921.913.944----𝐸𝐶𝐶𝑉 22 [25] SPSN.023 .917.923.956.032 .927.918.949.035 .909.906.941.043 .910.891.932𝑇 𝐼 𝑃 23 [61]HiDAnet.021 .929.930.961.029 .939.926.954.035 .921.911.946.043 .919.892.927OursXMSNet.018 .938 .936 .967.025 .942 .931 .960.026 .935 .927 .958.032 .939 .913 .952Practical setting -Performance of Models Trained w/o GT Depth𝐸𝐶𝐶𝑉 20 [12] BBSNet.023 .922.923.952.037 .925.915.939.037 .919.914.937.053 .892.875.912𝑀𝑀 21 [80]DFMNet.027 .909.914.944.046 .903.895.927.042 .906.903.934.067 .873.850.891𝑇 𝐼 𝑃 22 [56]DCMF.027 .915.921.943.044 .908.903.929.041 .909.907.931.067 .873.853.893𝐶𝑉 𝑃𝑅 22 [20] SegMAR.024 .923.920.952.036 .921.909.941.037 .916.907.936.052 .893.872.914𝐶𝑉 𝑃𝑅 22 [38] ZoomNet.023 .916.919.944.037 .926.914.940.037 .918.909.938.054 .891.868.909OursXMSNet.018 .929 .933 .964.026 .941 .929 .959.027 .934 .926 .955.039 .926 .899 .936", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative comparison on RGB-T SOD datasets with both aligned and unaligned inputs.", "figure_data": "Public.Dataset Metric𝑀 ↓VT5000 [53] 𝐹 𝑚 ↑ 𝑆 𝑚 ↑𝐸 𝑚 ↑𝑀 ↓VT1000 [54] 𝐹 𝑚 ↑ 𝑆 𝑚 ↑𝐸 𝑚 ↑𝑀 ↓VT821 [57] 𝐹 𝑚 ↑ 𝑆 𝑚 ↑𝐸 𝑚 ↑Oracle setting -Performance of Models Trained w/ Aligned Thermal Inputs𝑇 𝑀𝑀 22 [9]TNet.032.895.895.932.021.937.928.957.030.903.898.928𝑇𝐶𝑆𝑉𝑇 22 [58]CGFnet.035.886.882.923.023.933.921.955.036.881.879.916𝑇 𝐼 𝑃 22 [52]DCNet.040.848.853.906.023.918.915.953.036.848.859.911𝑇 𝐼 𝑃 23 [88]LSNet.037.871.877.917.022.930.925.954.033.870.878.915𝐼𝐶𝑀𝐸 23 [30]SSNet.042.845.843.894.026.918.905.945.035.867.856.896OursXMSNet.025.909.908.949.016.942.936.968.023.913.909.944Practical setting -Performance of Models Trained w/o Aligned Thermal Inputs𝑇 𝐼 𝑃 21 [51]MIDD.052.841.843.889.034.906.895.934.058.835.840.882𝑇𝐶𝑆𝑉𝑇 21 [32]SwinNet.034.876.878.932.026.919.913.954.040.860.868.912𝑇 𝐼 𝑃 22 [52]DCNet.045.833.844.906.027.899.901.949.052.824.839.897𝑇 𝑀𝑀 22 [9]TNet.043.846.856.912.031.896.894.933.044.839.855.904𝑇 𝐼 𝑃 23 [88]LSNet.049.810.828.898.035.876.883.933.047.799.829.894OursXMSNet.028.895.897.943.018.935.928.962.029.889.892.933", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Quantitative comparison with pseudo-depth on challenging COD datasets 𝐹𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑ 𝑀 ↓ 𝐹 𝑚 ↑ 𝑆 𝑚 ↑ 𝐸 𝑚 ↑", "figure_data": "Public.Dataset Metric𝑀 ↓CAMO [23]CHAM. [43]COD10K [10]NC4K [34]Performance of RGB COD Models𝐶𝑉 𝑃𝑅 21 [26]UJSC.072.812.800.861.030.874.891.948.035.761.808.886.047.838.841.900𝐶𝑉 𝑃𝑅 22 [20]SegMAR.080.799.794.857.032.871.887.935.039.750.799.876.050.828.836.893𝐶𝑉 𝑃𝑅 22 [38]ZoomNet.074.818.801.858.033.829.859.915.034.771.808.872.045.841.843.893𝐶𝑉 𝑃𝑅 23 [14]FEDER.071.823.802.868.029.874.886.948----.044.852.847.909Performance of RGB-D COD Models𝑀𝑀 21 [75]CDINet.100.638.732.766.036.787.879.903.044.610.778.821.067.697.793.830𝐶𝑉 𝑃𝑅 21 [19]DCF.089.724.749.834.037.821.850.923.040.685.766.864.061.765.791.878𝑇 𝐼 𝑃 21 [27]HAINet.084.782.760.829.028.876.876.942.049.735.781.865.057.809.804.872𝐼𝐶𝐶𝑉 21 [76]CMINet.087.798.782.827.032.881.891.930.039.768.811.868.053.832.839.888𝐼𝐶𝐶𝑉 21 [85]SPNet.083.807.783.831.033.872.888.930.037.776.808.869.054.828.825.874𝑇 𝐼 𝑃 22 [56]DCMF.115.737.728.757.059.807.830.853.063.679.748.776.077.782.794.820𝐸𝐶𝐶𝑉 22 [25]SPSN.084.782.773.829.032.866.887.932.042.727.789.854.059.803.813.867OursXMSNet.048.871.864.923.025.895.904.950.024.828.861.927.034.877.879.933", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative results based on different backbones.", "figure_data": "BackboneDatasetModelResNet [15]SIP [11]C2DFNet [78] Ours.053 .036 .920 .902 .942 .894 .782 .911ConvNext [31] CHAM. [43]CamoFormer [73] .024 Ours .021 .899 .913 .970 .886 .901 .954PVT [59]VT821 [57]MTFNet[4] Ours.026 .023 .913 .909 .944 .906 .905 .938", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Key component analysis.", "figure_data": "Component Ablation Study---.042.903.894.933.048.901.882.922✓--.036.919.910.943.042.911.893.930✓✓-.035.920.910.942.041.911.895.932✓✓✓.032 .928 .915 .948 .036 .920 .902 .942Replacing Our Component with SOTA CounterpartsAF → RFNet [63].035.919.911.942.045.907.887.923AF → SPNet [85].035.918.910.943.047.904.882.918RGB & FeatureDepth & FeatureFusedRefined Consist..58.42.59.41.31.69", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ablation study on the proposed fusion block.", "figure_data": "Component Ablation Study---.037.918.909.939.049.901.880.916✓--.033.923.914.947.045.908.887.923-✓-.033.922.914.948 .039.914.898.934--✓.033.921.913.947.040.912.895.932✓-✓.033.924.915 .947.040.912.897.934✓✓-.034.919.912.945.038.917.900.938✓✓✓.032 .928 .915 .948 .036 .920 .902 .942Replacing TSA/TCA with Basic ComponentTSA → Max.034.920.911.945.047.906.886.920TSA → Mean.034.921.910.945.046.906.885.921TSA → Conv.033.924.914.946.043.907.891.928TCA → GCT.034.920.911.944.042.910.890.929TCA → Mean.034.919.909.944.042.913.894.929TCA → Max.034.921.914.945.043.908.891.929Image ModalityGTTCATSAOurs", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Zongwei Wu; Jingjing Wang; Zhuyun Zhou; Zhaochong An; Qiuping Jiang; Cédric Demonceaux; Guolei Sun; Radu Timofte
[ { "authors": "Haley Adams; Jeanine Stefanucci; Sarah Creem-Regehr; Bobby Bodenheimer", "journal": "IEEE VR. IEEE", "ref_id": "b0", "title": "Depth perception in augmented reality: The effects of display, shadow, and position", "year": "2022" }, { "authors": "Xuyang Bai; Zeyu Hu; Xinge Zhu; Qingqiu Huang; Yilun Chen; Hongbo Fu; Chiew-Lan Tai", "journal": "", "ref_id": "b1", "title": "Transfusion: Robust lidar-camera fusion for 3d object detection with transformers", "year": "2022" }, { "authors": "Helmut Budzier; Gerald Gerlach", "journal": "John Wiley & Sons", "ref_id": "b2", "title": "Thermal infrared sensors: theory, optimisation and practice", "year": "2011" }, { "authors": "Gang Chen; Feng Shao; Xiongli Chai; Hangwei Chen; Qiuping Jiang; Xiangchao Meng; Yo-Sung Ho", "journal": "IEEE TCSVT", "ref_id": "b3", "title": "Modality-Induced Transfer-Fusion Network for RGB-D and RGB-T Salient Object Detection", "year": "2022" }, { "authors": "Qian Chen; Zhenxi Zhang; Yanye Lu; Keren Fu; Qijun Zhao", "journal": "IEEE TNNLS", "ref_id": "b4", "title": "3-D Convolutional Neural Networks for RGB-D Salient Object Detection and Beyond", "year": "2022" }, { "authors": "Kelvin Cheng; Kensuke Koda; Soh Masuko", "journal": "IEEE", "ref_id": "b5", "title": "Reimagining the Stadium Spectator Experience using Augmented Reality and Visual Positioning System", "year": "2022" }, { "authors": "Xiaolong Cheng; Xuan Zheng; Jialun Pei; He Tang; Zehua Lyu; Chuanbo Chen", "journal": "IEEE TMM", "ref_id": "b6", "title": "Depth-induced Gap-reducing Network for RGB-D Salient Object Detection: An Interaction, Guidance and Refinement Approach", "year": "2022" }, { "authors": "Runmin Cong; Qinwei Lin; Chen Zhang; Chongyi Li; Xiaochun Cao; Qingming Huang; Yao Zhao", "journal": "IEEE TIP", "ref_id": "b7", "title": "CIR-Net: Cross-modality Interaction and Refinement for RGB-D Salient Object Detection", "year": "2022" }, { "authors": "Runmin Cong; Kepu Zhang; Chen Zhang; Feng Zheng; Yao Zhao; Qingming Huang; Sam Kwong", "journal": "IEEE TMM", "ref_id": "b8", "title": "Does thermal really always matter for RGB-T salient object detection", "year": "2022" }, { "authors": "Deng-Ping Fan; Ge-Peng Ji; Guolei Sun; Ming-Ming Cheng; Jianbing Shen; Ling Shao", "journal": "", "ref_id": "b9", "title": "Camouflaged object detection", "year": "2020" }, { "authors": "Deng-Ping Fan; Zheng Lin; Zhao Zhang; Menglong Zhu; Ming-Ming Cheng", "journal": "IEEE TNNLS", "ref_id": "b10", "title": "Rethinking RGB-D salient object detection: Models, datasets, and large-scale benchmarks", "year": "2021" }, { "authors": "Deng-Ping Fan; Yingjie Zhai; Ali Borji; Jufeng Yang; Ling Shao", "journal": "", "ref_id": "b11", "title": "BBS-Net: RGB-D salient object detection with a bifurcated backbone strategy network", "year": "2020" }, { "authors": "Keren Fu; Deng-Ping Fan; Ge-Peng Ji; Qijun Zhao", "journal": "", "ref_id": "b12", "title": "JL-DCF: Joint learning and densely-cooperative fusion framework for RGB-D salient object detection", "year": "2020" }, { "authors": "Chunming He; Kai Li; Yachao Zhang; Longxiang Tang; Yulun Zhang; Zhenhua Guo; Xiu Li", "journal": "", "ref_id": "b13", "title": "Camouflaged Object Detection with Feature Decomposition and Edge Reconstruction", "year": "2023" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b14", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Yisheng He; Haibin Huang; Haoqiang Fan; Qifeng Chen; Jian Sun", "journal": "", "ref_id": "b15", "title": "Ffb6d: A full flow bidirectional fusion network for 6d pose estimation", "year": "2021" }, { "authors": "Dong Huo; Jian Wang; Yiming Qian; Yee-Hong Yang", "journal": "IEEE TTP", "ref_id": "b16", "title": "Glass segmentation with RGB-thermal image pairs", "year": "2023" }, { "authors": "Fushuo Huo; Xuegui Zhu; Qian Zhang; Ziming Liu; Wenchao Yu", "journal": "IEEE TIM", "ref_id": "b17", "title": "Realtime one-stream semantic-guided refinement network for RGB-Thermal salient object detection", "year": "2022" }, { "authors": "Wei Ji; Jingjing Li; Shuang Yu; Miao Zhang; Yongri Piao; Shunyu Yao; Qi Bi; Kai Ma; Yefeng Zheng; Huchuan Lu", "journal": "", "ref_id": "b18", "title": "Calibrated RGB-D Salient Object Detection", "year": "2021" }, { "authors": "Qi Jia; Shuilian Yao; Yu Liu; Xin Fan; Risheng Liu; Zhongxuan Luo", "journal": "", "ref_id": "b19", "title": "Segment, Magnify and Reiterate: Detecting Camouflaged Objects the Hard Way", "year": "2022" }, { "authors": "Wen-Da Jin; Jun Xu; Qi Han; Yi Zhang; Ming-Ming Cheng", "journal": "IEEE TIP", "ref_id": "b20", "title": "CDNet: Complementary depth network for RGB-D salient object detection", "year": "2021" }, { "authors": "Ran Ju; Ling Ge; Wenjing Geng; Tongwei Ren; Gangshan Wu", "journal": "", "ref_id": "b21", "title": "Depth saliency based on anisotropic center-surround difference", "year": "2014" }, { "authors": "Trung-Nghia Le; Tam V Nguyen; Zhongliang Nie; Minh-Triet Tran; Akihiro Sugimoto", "journal": "CVIU", "ref_id": "b22", "title": "Anabranch network for camouflaged object segmentation", "year": "2019" }, { "authors": "Hyemin Lee; Daijin Kim", "journal": "", "ref_id": "b23", "title": "Salient region-based online object tracking", "year": "2018" }, { "authors": "Minhyeok Lee; Chaewon Park; Suhwan Cho; Sangyoun Lee", "journal": "", "ref_id": "b24", "title": "SPSN: Superpixel Prototype Sampling Network for RGB-D Salient Object Detection", "year": "2022" }, { "authors": "Aixuan Li; Jing Zhang; Yunqiu Lv; Bowen Liu; Tong Zhang; Yuchao Dai", "journal": "", "ref_id": "b25", "title": "Uncertainty-aware joint salient object and camouflaged object detection", "year": "2021" }, { "authors": "Gongyang Li; Zhi Liu; Minyu Chen; Zhen Bai; Weisi Lin; Haibin Ling", "journal": "IEEE TIP", "ref_id": "b26", "title": "Hierarchical Alternate Interaction Network for RGB-D Salient Object Detection", "year": "2021" }, { "authors": "Gongyang Li; Zhi Liu; Linwei Ye; Yang Wang; Haibin Ling", "journal": "", "ref_id": "b27", "title": "Cross-modal weighting network for RGB-D salient object detection", "year": "2020" }, { "authors": "Jia Li; Shengye Qiao; Zhirui Zhao; Chenxi Xie; Xiaowu Chen; Changqun Xia", "journal": "", "ref_id": "b28", "title": "Rethinking Lightweight Salient Object Detection via Network Depth-Width Tradeoff", "year": "2023" }, { "authors": "Zhengyi Liu; Xiaoshen Huang; Guanghui Zhang; Xianyong Fang; Linbo Wang; Bin Tang", "journal": "", "ref_id": "b29", "title": "Scribble-Supervised RGB-T Salient Object Detection", "year": "2023" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b30", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Zhengyi Liu; Yacheng Tan; Qian He; Yun Xiao", "journal": "IEEE TCSVT", "ref_id": "b31", "title": "SwinNet: Swin transformer drives edge-aware RGB-D and RGB-T salient object detection", "year": "2021" }, { "authors": "Zhengyi Liu; Wang Yuan; Zhengzheng Tu; Yun Xiao; Bin Tang", "journal": "ACM MM", "ref_id": "b32", "title": "Tri-TransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network", "year": "2021" }, { "authors": "Yunqiu Lv; Jing Zhang; Yuchao Dai; Aixuan Li; Bowen Liu; Nick Barnes; Deng-Ping Fan", "journal": "", "ref_id": "b33", "title": "Simultaneously localize, segment and rank the camouflaged objects", "year": "2021" }, { "authors": "Arian Christopher A Metzler; Richard G Maleki; Baraniuk", "journal": "IEEE TIT", "ref_id": "b34", "title": "From denoising to compressed sensing", "year": "2016" }, { "authors": "H Mahdi; Sebastian Miangoleh; Long Dille; Sylvain Mai; Yagiz Paris; Aksoy", "journal": "", "ref_id": "b35", "title": "Boosting monocular depth estimation models to high-resolution via contentadaptive multi-resolution merging", "year": "2021" }, { "authors": "Yuzhen Niu; Yujie Geng; Xueqing Li; Feng Liu", "journal": "", "ref_id": "b36", "title": "Leveraging stereopsis for saliency analysis", "year": "2012" }, { "authors": "Youwei Pang; Xiaoqi Zhao; Tian-Zhu Xiang; Lihe Zhang; Huchuan Lu", "journal": "", "ref_id": "b37", "title": "Zoom in and Out: A Mixed-Scale Triplet Network for Camouflaged Object Detection", "year": "2022" }, { "authors": "Houwen Peng; Bing Li; Weihua Xiong; Weiming Hu; Rongrong Ji", "journal": "", "ref_id": "b38", "title": "RGBD salient object detection: a benchmark and algorithms", "year": "2014" }, { "authors": "René Ranftl; Alexey Bochkovskiy; Vladlen Koltun", "journal": "", "ref_id": "b39", "title": "Vision transformers for dense prediction", "year": "2021" }, { "authors": "René Ranftl; Katrin Lasinger; David Hafner; Konrad Schindler; Vladlen Koltun", "journal": "IEEE TPAMI", "ref_id": "b40", "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "year": "2022" }, { "authors": "Mark Sandler; Andrew Howard; Menglong Zhu; Andrey Zhmoginov; Liang-Chieh Chen", "journal": "", "ref_id": "b41", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "2018" }, { "authors": "Przemysław Skurowski; Hassan Abdulameer; J Błaszczyk; Tomasz Depta; Adam Kornacki; Kozieł", "journal": "Unpublished manuscript", "ref_id": "b42", "title": "Animal camouflage analysis: Chameleon database", "year": "2018" }, { "authors": "Mengke Song; Wenfeng Song; Guowei Yang; Chenglizhao Chen", "journal": "IEEE TIP", "ref_id": "b43", "title": "Improving RGB-D Salient Object Detection via Modality-Aware Decoder", "year": "2022" }, { "authors": "Qingkun Song; Li Ma; Jiankun Cao; Xiao Han", "journal": "", "ref_id": "b44", "title": "Image denoising based on mean filter and wavelet transform", "year": "2015" }, { "authors": "Fan Sun; Wujie Zhou; Lv Ye; Lu Yu", "journal": "IEEE SPL", "ref_id": "b45", "title": "Hierarchical decoding network based on swin transformer for detecting salient objects in RGB-T images", "year": "2022" }, { "authors": "Peng Sun; Wenhu Zhang; Huanyu Wang; Songyuan Li; Xi Li", "journal": "", "ref_id": "b46", "title": "Deep RGB-D Saliency Detection with Depth-Sensitive Attention and Automatic Multi-Modal Fusion", "year": "2021" }, { "authors": "Chris Sweeney; Greg Izatt; Russ Tedrake", "journal": "", "ref_id": "b47", "title": "A supervised approach to predicting noise in depth images", "year": "2019" }, { "authors": "Bin Tang; Zhengyi Liu; Yacheng Tan; Qian He", "journal": "IEEE TCSVT", "ref_id": "b48", "title": "HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection", "year": "2023" }, { "authors": "Lv Tang; Bo Li; Yijie Zhong; Shouhong Ding; Mofei Song", "journal": "", "ref_id": "b49", "title": "Disentangled high quality salient object detection", "year": "2021" }, { "authors": "Zhengzheng Tu; Zhun Li; Chenglong Li; Yang Lang; Jin Tang", "journal": "IEEE TIP", "ref_id": "b50", "title": "Multiinteractive dual-decoder for RGB-thermal salient object detection", "year": "2021" }, { "authors": "Zhengzheng Tu; Zhun Li; Chenglong Li; Jin Tang", "journal": "IEEE TIP", "ref_id": "b51", "title": "Weakly alignmentfree RGBT salient object detection with deep correlation network", "year": "2022" }, { "authors": "Zhengzheng Tu; Yan Ma; Zhun Li; Chenglong Li; Jieming Xu; Yongtao Liu", "journal": "IEEE TMM", "ref_id": "b52", "title": "RGBT salient object detection: A large-scale dataset and benchmark", "year": "2022" }, { "authors": "Zhengzheng Tu; Tian Xia; Chenglong Li; Xiaoxiao Wang; Yan Ma; Jin Tang", "journal": "IEEE TMM", "ref_id": "b53", "title": "RGB-T image saliency detection via collaborative graph learning", "year": "2019" }, { "authors": "Chen Wang; Danfei Xu; Yuke Zhu; Roberto Martín-Martín; Cewu Lu; Li Fei-Fei; Silvio Savarese", "journal": "", "ref_id": "b54", "title": "Densefusion: 6d object pose estimation by iterative dense fusion", "year": "2019" }, { "authors": "Fengyun Wang; Jinshan Pan; Shoukun Xu; Jinhui Tang", "journal": "IEEE TIP", "ref_id": "b55", "title": "Learning Discriminative Cross-Modality Features for RGB-D Saliency Detection", "year": "2022" }, { "authors": "Guizhao Wang; Chenglong Li; Yunpeng Ma; Aihua Zheng; Jin Tang; Bin Luo", "journal": "Springer", "ref_id": "b56", "title": "RGB-T saliency detection benchmark: Dataset, baselines, analysis and a novel approach", "year": "2018" }, { "authors": "Jie Wang; Kechen Song; Yanqi Bao; Liming Huang; Yunhui Yan", "journal": "IEEE TCSVT", "ref_id": "b57", "title": "CGFNet: Cross-Guided Fusion Network for RGB-T Salient Object Detection", "year": "2022" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Deng-Ping Fan; Kaitao Song; Ding Liang; Tong Lu; Ping Luo; Ling Shao", "journal": "CVMJ", "ref_id": "b58", "title": "Pvt v2: Improved baselines with pyramid vision transformer", "year": "2022" }, { "authors": "Hongfa Wen; Chenggang Yan; Xiaofei Zhou; Runmin Cong; Yaoqi Sun; Bolun Zheng; Jiyong Zhang; Yongjun Bao; Guiguang Ding", "journal": "TIP", "ref_id": "b59", "title": "Dynamic selective network for RGB-D salient object detection", "year": "2021" }, { "authors": "Zongwei Wu; Guillaume Allibert; Fabrice Meriaudeau; Chao Ma; Cédric Demonceaux", "journal": "IEEE TIP", "ref_id": "b60", "title": "HiDAnet: RGB-D Salient Object Detection via Hierarchical Depth Awareness", "year": "2023" }, { "authors": "Zongwei Wu; Guillaume Allibert; Christophe Stolz; Chao Ma; Cédric Demonceaux", "journal": "", "ref_id": "b61", "title": "Modality-Guided Subnetwork for Salient Object Detection", "year": "2021" }, { "authors": "Zongwei Wu; Shriarulmozhivarman Gobichettipalayam; Brahim Tamadazte; Guillaume Allibert; Danda Pani Paudel; Cédric Demonceaux", "journal": "", "ref_id": "b62", "title": "Robust RGB-D Fusion for Saliency Detection", "year": "2022" }, { "authors": "Zongwei Wu; Danda Pani Paudel; Deng-Ping Fan; Jingjing Wang; Shuo Wang; Cédric Demonceaux; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b63", "title": "Source-free Depth for Object Pop-out", "year": "2022" }, { "authors": "Zhenyu Wu; Lin Wang; Wei Wang; Tengfei Shi; Chenglizhao Chen; Aimin Hao; Shuo Li", "journal": "ACM MM", "ref_id": "b64", "title": "Synthetic data supervised salient object detection", "year": "2022" }, { "authors": "W U Zongwei; Zhou Zhuyun; Guillaume Allibert; Christophe Stolz; Cédric Demonceaux; Chao Ma", "journal": "", "ref_id": "b65", "title": "Transformer Fusion for Indoor Rgb-D Semantic Segmentation", "year": "2022" }, { "authors": "Mochu Xiang; Jing Zhang; Yunqiu Lv; Aixuan Li; Yiran Zhong; Yuchao Dai", "journal": "", "ref_id": "b66", "title": "Exploring Depth Contribution for Camouflaged Object Detection", "year": "2021" }, { "authors": "Xiaolin Xiao; Yicong Zhou; Yue-Jiao Gong", "journal": "IEEE TIP", "ref_id": "b67", "title": "RGB-'D' Saliency Detection With Pseudo Depth", "year": "2019" }, { "authors": "Zhengxuan Xie; Feng Shao; Gang Chen; Hangwei Chen; Qiuping Jiang; Xiangchao Meng; Yo-Sung Ho", "journal": "IEEE TCSVT", "ref_id": "b68", "title": "Cross-Modality Double Bidirectional Interaction and Fusion Network for RGB-T Salient Object Detection", "year": "2023" }, { "authors": "Senbo Yan; Liang Peng; Chuer Yu; Zheng Yang; Haifeng Liu; Deng Cai", "journal": "ACM MM", "ref_id": "b69", "title": "Domain Reconstruction and Resampling for Robust Salient Object Detection", "year": "2022" }, { "authors": "Maoke Yang; Kun Yu; Chi Zhang; Zhiwei Li; Kuiyuan Yang", "journal": "", "ref_id": "b70", "title": "Denseaspp for semantic segmentation in street scenes", "year": "2018" }, { "authors": "Zongxin Yang; Linchao Zhu; Yu Wu; Yi Yang", "journal": "", "ref_id": "b71", "title": "Gated channel transformation for visual recognition", "year": "2020" }, { "authors": "Xuying Bowen Yin; Qibin Zhang; Bo-Yuan Hou; Deng-Ping Sun; Luc Fan; Van Gool", "journal": "", "ref_id": "b72", "title": "CamoFormer: Masked Separable Attention for Camouflaged Object Detection", "year": "2022" }, { "authors": "Chen Zhang; Runmin Cong; Qinwei Lin; Lin Ma; Feng Li; Yao Zhao; Sam Kwong", "journal": "ACM MM", "ref_id": "b73", "title": "Cross-modality discrepant interaction network for RGB-D salient object detection", "year": "2021" }, { "authors": "Chen Zhang; Runmin Cong; Qinwei Lin; Lin Ma; Feng Li; Yao Zhao; Sam Kwong", "journal": "ACM MM", "ref_id": "b74", "title": "Cross-modality Discrepant Interaction Network for RGB-D Salient Object Detection", "year": "2021" }, { "authors": "Jing Zhang; Deng-Ping Fan; Yuchao Dai; Xin Yu; Yiran Zhong; Nick Barnes; Ling Shao", "journal": "", "ref_id": "b75", "title": "RGB-D Saliency Detection via Cascaded Mutual Information Minimization", "year": "2021" }, { "authors": "Miao Zhang; Shuang Xu; Yongri Piao; Dongxiang Shi; Shusen Lin; Huchuan Lu", "journal": "ACM MM", "ref_id": "b76", "title": "PreyNet: Preying on Camouflaged Objects", "year": "2022" }, { "authors": "Miao Zhang; Shunyu Yao; Beiqi Hu; Yongri Piao; Wei Ji", "journal": "IEEE TMM", "ref_id": "b77", "title": "C2DFNet: Criss-Cross Dynamic Filter Network for RGB-D Salient Object Detection", "year": "2022" }, { "authors": "Qiang Zhang; Tonglin Xiao; Nianchang Huang; Dingwen Zhang; Jungong Han", "journal": "IEEE TCSVT", "ref_id": "b78", "title": "Revisiting feature fusion for RGB-T salient object detection", "year": "2020" }, { "authors": "Wenbo Zhang; Ge-Peng Ji; Zhuo Wang; Keren Fu; Qijun Zhao", "journal": "ACM MM", "ref_id": "b79", "title": "Depth Quality-Inspired Feature Manipulation for Efficient RGB-D Salient Object Detection", "year": "2021" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b80", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Jiawei Zhao; Yifan Zhao; Jia Li; Xiaowu Chen", "journal": "", "ref_id": "b81", "title": "Is depth really necessary for salient object detection?", "year": "2020" }, { "authors": "Yifan Zhao; Jiawei Zhao; Jia Li; Xiaowu Chen", "journal": "IEEE TIP", "ref_id": "b82", "title": "RGB-D salient object detection with ubiquitous target awareness", "year": "2021" }, { "authors": "Jiayuan Zhou; Lijun Wang; Huchuan Lu; Kaining Huang; Xinchu Shi; Bocong Liu", "journal": "", "ref_id": "b83", "title": "MVSalNet: Multi-view Augmentation for RGB-D Salient Object Detection", "year": "2022" }, { "authors": "Tao Zhou; Huazhu Fu; Geng Chen; Yi Zhou; Deng-Ping Fan; Ling Shao", "journal": "", "ref_id": "b84", "title": "Specificity-preserving RGB-D Saliency Detection", "year": "2021" }, { "authors": "Wujie Zhou; Qinling Guo; Jingsheng Lei; Lu Yu; Jenq-Neng Hwang", "journal": "IEEE TCSVT", "ref_id": "b85", "title": "ECFFNet: Effective and consistent feature fusion network for RGB-T salient object detection", "year": "2021" }, { "authors": "Wujie Zhou; Yun Zhu; Jingsheng Lei; Jian Wan; Lu Yu", "journal": "IEEE TETCI", "ref_id": "b86", "title": "APNet: Adversarial learning assistance and perceived importance fusion network for all-day RGB-T salient object detection", "year": "2021" }, { "authors": "Wujie Zhou; Yun Zhu; Jingsheng Lei; Rongwang Yang; Lu Yu", "journal": "IEEE TIP", "ref_id": "b87", "title": "LSNet: Lightweight spatial boosting network for detecting salient objects in RGB-thermal images", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 323.52, 700.48, 235.22, 9.75 ], "formula_id": "formula_0", "formula_text": "𝑚 𝑆 𝑖 = 𝑀𝐿𝑃 (𝑚 𝐼 𝑖 ⊗ 𝑚 𝐷 𝑖 ), 𝑚 𝐼 𝑖 = 𝐺𝐴𝑃 (𝑓 𝐼 𝑖 ), 𝑚 𝐷 𝑖 = 𝐺𝐴𝑃 (𝑓 𝐷 𝑖 ),(1)" }, { "formula_coordinates": [ 4, 60.26, 335.18, 234.32, 17.56 ], "formula_id": "formula_1", "formula_text": "𝛼 𝑖 = 𝑎 𝑖 𝑎 𝑖 + 𝑏 𝑖 , 𝑎 𝑖 = 𝑐𝑜𝑠𝑖𝑛𝑒 (𝑚 𝑆 𝑖 , 𝑚 𝐼 𝑖 ), 𝑏 𝑖 = 𝑐𝑜𝑠𝑖𝑛𝑒 (𝑚 𝑆 𝑖 , 𝑚 𝐷 𝑖 ).(2)" }, { "formula_coordinates": [ 4, 97.52, 528.18, 197.07, 27.73 ], "formula_id": "formula_2", "formula_text": "𝑠 ′𝐼 𝑖 , 𝑠 ′𝐷 𝑖 = 𝑐ℎ𝑢𝑛𝑘 (𝐶𝐶 (𝑇 𝑆𝐴(𝑣 𝐼 𝑖 ),𝑇 𝑆𝐴(𝑣 𝐷 𝑖 ))), 𝑠 𝐼 𝑖 = 𝜎 (𝑠 ′𝐼 𝑖 ), 𝑠 𝐷 𝑖 = 𝜎 (𝑠 ′𝐷 𝑖 ),(3)" }, { "formula_coordinates": [ 4, 94.38, 621.91, 200.2, 27.73 ], "formula_id": "formula_3", "formula_text": "𝑐 ′𝐼 𝑖 , 𝑐 ′𝐷 𝑖 = 𝑐ℎ𝑢𝑛𝑘 (𝐶𝑠𝐶 (𝑇𝐶𝐴(𝑣 𝐼 𝑖 ),𝑇𝐶𝐴(𝑣 𝐷 𝑖 ))), 𝑐 𝐼 𝑖 = 𝜎 (𝑐 ′𝐼 𝑖 ), 𝑐 𝐷 𝑖 = 𝜎 (𝑐 ′𝐷 𝑖 ),(4)" }, { "formula_coordinates": [ 4, 72.46, 682.7, 222.12, 27.73 ], "formula_id": "formula_4", "formula_text": "𝑓 ′𝐼 𝑖 = 𝛼 𝑖 • 𝑠 𝐼 𝑖 ⊗ 𝑐 𝐼 𝑖 ⊗ 𝑓 𝐼 𝑖 , 𝑓 ′𝐷 𝑖 = (1 -𝛼 𝑖 ) • 𝑠 𝐷 𝑖 ⊗ 𝑐 𝐷 𝑖 ⊗ 𝑓 𝐷 𝑖 , 𝑓 𝑆 𝑖 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑀𝐿𝑃 (𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑐𝑎𝑡 (𝑓 ′𝐼 𝑖 , 𝑓 ′𝐷 𝑖 ))))(5)" }, { "formula_coordinates": [ 4, 326.11, 525.59, 232.63, 23.77 ], "formula_id": "formula_5", "formula_text": "𝑀𝑃 1 , 𝑀𝑃 2 , ..., 𝑀𝑃 𝑁 = 𝑀𝑎𝑥𝑃𝑜𝑜𝑙𝑖𝑛𝑔(𝑐ℎ𝑢𝑛𝑘 (𝐶𝑜𝑛𝑣 1×1 (𝑓 𝑆 𝑖 ))). 𝐿𝐺 𝑖 = 𝐶𝑜𝑛𝑣 1×1 (𝐷𝑆𝐶𝑜𝑛𝑣 3×3 (𝐶𝑜𝑛𝑐𝑎𝑡 (𝑀𝑃 1 , 𝑀𝑃 2 , ..., 𝑀𝑃 𝑁 ))).(6)" }, { "formula_coordinates": [ 4, 351.93, 670.47, 203.64, 21.79 ], "formula_id": "formula_6", "formula_text": "𝑓 𝑖 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑀𝐿𝑃 (𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝑓 𝑖+1 ⊗ 𝐿𝐺 𝑖 ))), 𝑝 𝑖 = 𝐻𝑒𝑎𝑑 (𝑓 𝑖 ), (7" }, { "formula_coordinates": [ 4, 555.57, 678.91, 3.17, 7.94 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 82.57, 372.98, 212.01, 66.36 ], "formula_id": "formula_8", "formula_text": "𝑄 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝐶 (𝐶𝑜𝑛𝑣 1×1 (𝑓 𝐼 𝑖 ), 𝐶𝑜𝑛𝑣 1×1 (𝑓 𝐷 𝑖 ))), 𝐾 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑣 1×1 (𝑝 𝑖 ⊗ 𝑓 𝑖 )), 𝑉 = 𝑅𝑒𝑎𝑟𝑟𝑎𝑛𝑔𝑒 (𝐶𝑜𝑛𝑣 1×1 (𝑝 𝑖 ⊗ 𝑓 𝑖 )), 𝑀𝑆-𝐴(𝑄, 𝐾, 𝑉 ) = 𝑠𝑜 𝑓 𝑡𝑚𝑎𝑥 ( 𝑄𝐾 𝑇 √︁ 𝑑 𝑘 )𝑉 ,(8)" }, { "formula_coordinates": [ 5, 119.9, 548.64, 174.68, 9.75 ], "formula_id": "formula_9", "formula_text": "𝑓 𝑜 𝑖 = 𝑀𝑆-𝐴(𝑄, 𝐾, 𝑉 ) + 𝑝 𝑖 ⊗ 𝑓 𝑖 .(9)" }, { "formula_coordinates": [ 5, 122.98, 633.13, 171.6, 8.96 ], "formula_id": "formula_10", "formula_text": "L (.) = L 𝑤𝑏𝑐𝑒 (.) + L 𝐼𝑜𝑈 (.).(10)" }, { "formula_coordinates": [ 5, 130.09, 684.14, 164.49, 26.34 ], "formula_id": "formula_11", "formula_text": "L 𝑚𝑠 = 4 ∑︁ 𝑖=1 𝜆 𝑖 • L (𝑝 𝑖 , 𝐺),(11)" }, { "formula_coordinates": [ 5, 362.01, 157.66, 193.31, 11.38 ], "formula_id": "formula_12", "formula_text": "𝑝 𝑙𝑜𝑤 = 𝐻𝑒𝑎𝑑 (𝑓 𝑙𝑜𝑤 ), 𝑓 𝑙𝑜𝑤 = 𝐶𝐶 (𝑓 𝑜 4 , 𝑓 𝑜 3 ). (12" }, { "formula_coordinates": [ 5, 555.32, 158.67, 3.42, 7.94 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 5, 330.34, 224.64, 224.98, 9.91 ], "formula_id": "formula_14", "formula_text": "L 𝑚𝑙 = L (𝑝 𝑙𝑜𝑤 , 𝐺) + 𝛽 1 • L (𝑝 𝑚𝑖𝑑 , 𝐺) + 𝛽 2 • L (𝑝 ℎ𝑖𝑔ℎ , 𝐺), (13" }, { "formula_coordinates": [ 5, 555.32, 225.65, 3.42, 7.94 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 354.7, 301.11, 204.04, 24.45 ], "formula_id": "formula_16", "formula_text": "L 𝑑𝑖𝑣 = L 𝐾𝐿 (𝑝 𝑙𝑜𝑤 , 𝑝 𝑚𝑖𝑑 ) + L 𝐾𝐿 (𝑝 𝑚𝑖𝑑 , 𝑝 ℎ𝑖𝑔ℎ ), L 𝐾𝐿 (𝐴, 𝐵) = 𝐾𝐿(𝐴||𝐵) + 𝐾𝐿(𝐵||𝐴).(14)" }, { "formula_coordinates": [ 5, 374.69, 349.05, 184.05, 9.91 ], "formula_id": "formula_17", "formula_text": "L 𝑎𝑙𝑙 = L 𝑚𝑠 + 𝛾 1 • L 𝑚𝑙 + 𝛾 2 • L 𝑑𝑖𝑣 .(15)" } ]
10.1109/cvpr.2014.81
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "Professional road cycling offers several interesting challenges in an analytics setting due to its unique properties. Notably, races have a variety of different formats (e.g. one-day races, stage races) and profiles (e.g. flat, hilly, or mountainous), each suiting riders of different characteristics. Although several past works have demonstrated the potential of using machine learning models to predict road cycling race results, these models rely on significant feature engineering efforts and are tailored to predicting specific outcomes, such as rider performance in a specific race.\nWe present a framework forming the foundation for a generalized prediction algorithm that does not depend on labour-intensive feature engineering efforts. Specifically, we introduce a method to train vector embeddings for riders and races based on historical race results.\nIn representation learning, vector embeddings are used to capture the key qualities of entities such as words, images, or songs. If trained effectively, these vector embeddings can then be used for a variety of downstream tasks. For example, word embeddings trained using a large corpus of text can be used for emotion recognition or sentence completion.\nLikewise, we show that our cycling embeddings capture the key characteristics of riders and races. The embeddings can be used in downstream prediction tasks and eliminate the need for a manual feature engineering process.\n2 Literature Review" }, { "figure_ref": [], "heading": "Machine Learning in Road Cycling", "publication_ref": [ "b9", "b21", "b17", "b10", "b11" ], "table_ref": [], "text": "There are several prior works which apply machine learning to road cycling.\nMultiple works focus on predicting the ProCyclingStats (\"PCS\") points, a system developed by the website procyclingstats.com to assign scores to riders based on the results achieved in certain races. For example, Janssens and Bogaert (2021) construct a random forest regression to predict the total PCS points scored by under-23 prospects in their first two years as professional athletes. They engineer a large set of features, including the riders' performances in particular under-23 races, and compare various methods to impute non-participated race results. This imputation method is used to detect the most promising young athletes (Janssens, Bogaert, and Maton 2022). Similarly, Van Bulck, Vande Weghe, and Goossens (2021) compare linear regression and random forest regression to predict the points scored by under-23 riders in their first three years as professionals. They also hand-craft a number of features summarizing the riders' performance at the youth level.\nOther works focus on predicting the outcomes of particular races. De Spiegeleer (2019) develops machine learning models to predict various outcomes of stages from the Tour de France, Giro d'Italia, and Vuelta a Espana, including their average speed, the difference between the average speed of the winner and that of a particular rider, and the head-to-head performance between two riders. The predictions are based on an extensive set of engineered features related to the terrain, weather, rider characteristics, and historical results. Mortirolo (2019) uses Bayesian Additive Regression Trees to simulate races and obtain predictions for the probabilities of specific race outcomes. The simulation uses over one hundred features, including ratings for riders' various abilities, indicators of riders' recent form, historical results from the past three years, and team-level indicators. Kholkine, De Schepper, et al. (2020) apply an XGBoost model to predict the outcomes of the Tour of Flanders using riders' performances in relevant build-up races. They also engineer several advanced features related to past results in similar races, historical weather data and overall team performance. Kholkine, Servotte, et al. (2021) also employ an XGBoost model within a learn-to-rank framework to predict the top ten riders in several one-day races using a suite of engineered features based on historical results and the riders' ages. Finally Demunter (2021) compares linear regression, random forest, XGBoost, and neural networks to predict the rankings of riders in a given race. Again, various features related to the rider's recent and historical results are developed and used as inputs for these models.\nTo the best of our knowledge, we present the first framework for a generalized prediction algorithm for road cycling which does not rely on a hand-crafted set of features for the particular outcome of interest." }, { "figure_ref": [], "heading": "Representation Learning", "publication_ref": [ "b16", "b19", "b5", "b2", "b13", "b20", "b7", "b6", "b12", "b1", "b15", "b22", "b14", "b18", "b0" ], "table_ref": [], "text": "Representation learning is the field of machine learning concerned with automatically learning meaningful and compact representations of data without requiring features for the data. These representations aim to capture the underlying structure and patterns in the data, enabling more effective performance on a variety of downstream applications. Some primary applications of representation learning include natural language processing, computer vision, and recommendation systems.\nIn natural language processing, representation learning has been utilized to learn word embeddings that capture semantic relationships between words. For example, the word2vec algorithm uses a skip-gram ap-proach to fit vector embeddings for words. These embeddings yield successful results on downstream tasks such as semantic and syntactic word relationship testing and sentence completion (Mikolov et al. 2013). Another common approach known as GloVe uses co-occurence statistics of words to obtain the vector embeddings. The resulting representations performed strongly on a variety of tasks including word analogies, word similarities, and named entity recognition (Pennington, Socher, and Manning 2014). Finally, pretrained vector embeddings using bidirectional encoder representations from transformers (BERT) can then be used as inputs into various downstream tasks and have yielded state-of-the-art performance on question answering and language inference (Devlin et al. 2019).\nSimilarly, representation learning has been successfully applied in the field of computer vision. One notable technique is the use of convolutional neural networks. These convolutional neural networks achieved pioneering performance on image classification tasks, such as handwritten digit recognition (Cireşan et al. 2011) and high-resolution image classification (Krizhevsky, Sutskever, and Hinton 2012). By pre-training large convolutional neural networks on large amounts of data, researchers have then achieved state-of-the-art performance on novel computer vision tasks, including image recognition (Simonyan and Zisserman 2015), object detection (Girshick et al. 2014), scene recognition and domain adaptation (Donahue et al. 2014).\nRepresentation learning advancements in natural language processing and computer vision have exploited observed relationships between words and local patterns in images. In the case of road cycling, we seek to extract representations for both races and riders and exploit historical interactions between these two types of entities. Most relevant to this context are past works on recommender systems surrounding collaborative filtering, which use historical interactions between users and items to recommend new items to a user. One common approach for such problems is to transform both items and users to the same vector space by assigning vector embeddings of the same dimension to both categories. Then, dot products between the vector embeddings of users and items can be used to infer their interaction (Koren 2008).\nAlthough we are not aware of such a representation learning approach being applied in road cycling, similar approaches have recently been tested in other sports, including soccer (Cintia and Pappalardo 2021;Magdaci 2021;Yılmaz and Ögüdücü 2022;Li et al. 2022), basketball (Papalexakis and Pelechrinis 2018), and cricket (Alaka, Sreekumar, and Shalu 2021)." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "We collect historical race results from the 2016-2022 UCI World Tour seasons from procyclingstats.com. Specifically, we consider the results of one-day races, and individual stages of stage races (i.e. multi-day races), except for team time trials. We do not consider overall classifications of stage races. Overall, our dataset includes results from 1069 race editions, 118 of which are one-day races.\nFor each result in our data, we define the normalized PCS score as the number of PCS points scored by the rider in the race, divided by the points earned by the winner of that race. For example, if the winner and runner-up of a race earn 500 and 300 PCS points respectively, they are assigned a normalized PCS score of 1 and 0.6 respectively.\nWe learn vector embeddings of dimension D for individual riders and races by directly optimizing these embeddings' ability to predict historical results. Specifically, we represent a rider's aptitude at a given race by the dot-product between that rider's embedding and that race's embedding. We then pass this dot-product through a sigmoid activation function to predict the normalized PCS score for that rider at that race. The vector embeddings are trained by minimizing the binary cross-entropy loss between these predictions and the actual normalized PCS scores, according to equation 1.\nL(R, S, y) = 1 N N ∑ i=1 y i log(σ (R r(i) • S s(i) )) + (1 -y i ) log(1 -σ (R r(i) • S s(i) )) (1)\nHere, y i records the normalized PCS points scored by rider r (i) in race s (i) , R is the matrix of rider vector embeddings, S is the matrix of race vector embeddings, and N is the number of results in our data. σ refers to the sigmoid function such that σ (x) = (1 + e -x ) -1 .\nWe train a vector embedding for each rider who has scored at least 25 (unnormalized) PCS points in our dataset. We also train a vector embedding for each race edition, except that for one-day races, we use the same embedding across all seasons. We do this since one-day races tend to suit similar riders across years, whereas stages can have very different characteristics across seasons. In total, we train unique embeddings for 973 races and 958 riders.\nThe results shown below are based on embeddings of dimension D = 5 trained using an Adam optimizer with a learning rate of 0.001 for 100 epochs. Reproducible code to implement our methods is available at https://github.com/baronet2/Bike2Vec." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "In this section, we analyze our learned embeddings to demonstrate that they capture valuable information about the characteristics of riders and races. In Figure 1, we plot the embeddings for each race in our dataset, coloured according to the race profile score, a measurement of the amount of climbing in the race developed by PCS. We performed a principal component analysis to reduce the dimensionality of the embeddings to two dimensions for visualization purposes. Clearly, the primary principal component is capturing significant information about the terrain of a race, with more mountainous races appearing on the right and flat races appearing on the left. Similarly, in Figure 2, we plot the principal components of the rider embeddings. Unlike races, riders are not labelled by PCS as belonging to a certain category. Therefore, to add interpretability, we perform k-means clustering on the rider embeddings and colour the riders by their assigned cluster.\nWe show a few examples of riders from each cluster in Table 1. There are clear similarities among the riders in each cluster, indicating that our embeddings are capturing the unique characteristics of each rider. For example, cycling fans would identify that cluster 1 is composed of sprinters and cluster 3 of climbers. Overall, the vector embeddings seem to accurately capture the distinguishing characteristics of both riders and races." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We present a novel vector embedding approach to represent road cycling riders and races, and implement the approach on seven seasons of data from professional men's road cycling races. We validate the resulting embeddings by showing that they contain useful information about the characteristics of races and riders. These embeddings can form the basis for a variety of downstream prediction tasks, removing the need for extensive manual feature engineering.\nAlthough we have demonstrated that our proposed vector embeddings contain valid information about the riders and races, we have yet to test the inclusion of these embeddings within a downstream prediction task. We leave this as a promising area for future work. Further, augmenting our race embeddings using features about the route, such as the elevation profile, could offer improved race embeddings and enable predictions on new races. Additionally, our current framework assigns a constant embeddings over the span of riders' careers. Future research could seek to incorporate a time-varying element to capture changes in rider skills due to age, physiology, injury, or other effects. Lastly, an interesting avenue for further exploration is extending our framework to women's cycling and comparing the results." } ]
Vector embeddings have been successfully applied in several domains to obtain effective representations of non-numeric data which can then be used in various downstream tasks. We present a novel application of vector embeddings in professional road cycling by demonstrating a method to learn representations for riders and races based on historical results. We use unsupervised learning techniques to validate that the resultant embeddings capture interesting features of riders and races. These embeddings could be used for downstream prediction tasks such as early talent identification and race outcome prediction.
Bike2Vec: Vector Embedding Representations of Road Cycling Riders and Races
[ { "figure_caption": "Figure 1 :1Figure 1: Visualization of race embeddings coloured by race profile score.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Visualization of rider embeddings coloured by cluster.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Examples of riders from each cluster. Furthermore, in Table2, we show some examples of rider similarities. That is, for each rider on the lefthand-side, we show the name of the other rider with the most similar embedding, according to Euclidean distance. Cycling fans would confirm that these rider pairings strongly reflect these riders' characteristics. For example, Tadej Pogacar and Primoz Roglic are both world-class climbers and time-trialists, Peter Sagan and Sonny Colbrelli are versatile sprinters who also perform well in cobbled or hilly classics, and Julian Alaphilippe and Marc Hirschi are both specialists at climbing short but steep hills.", "figure_data": "Cluster Examples of Riders1SAGAN Peter, KRISTOFF Alexander, VIVIANI Elia, EWAN Caleb, BENNETT Sam2VAN AVERMAET Greg, COLBRELLI Sonny, NAESEN Oliver, MOHORI Č Matej3ALAPHILIPPE Julian, VALVERDE Alejandro, ROGLI Č Primož, POGA ČAR Tadej4VAN AERT Wout, MATTHEWS Michael, STUYVEN Jasper, KWIATKOWSKI Michał5VAN DER POEL Mathieu, GILBERT Philippe, LAMPAERT Yves, ŠTYBAR Zdeněk", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Examples of most similar rider embeddings.", "figure_data": "", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Ethan Baron; Bram Janssens; Matthias Bogaert
[ { "authors": "Souridas Alaka; Rishikesh Sreekumar; Hrithwik Shalu", "journal": "", "ref_id": "b0", "title": "Efficient Feature Representations for Cricket Data Analysis using Deep Learning based Multi-Modal Fusion Model", "year": "2021-08" }, { "authors": "Paolo Cintia; Luca Pappalardo", "journal": "", "ref_id": "b1", "title": "Coach2vec: Autoencoding the Playing Style of Soccer Coaches", "year": "2021-06" }, { "authors": "Dan Cireşan; Claudiu", "journal": "AAAI Press", "ref_id": "b2", "title": "Flexible, High Performance Convolutional Neural Networks for Image Classification", "year": "2011-07" }, { "authors": "De Spiegeleer; Emiel", "journal": "", "ref_id": "b3", "title": "Predicting Cycling Results using Machine Learning", "year": "2019-06" }, { "authors": "Jarne Demunter", "journal": "", "ref_id": "b4", "title": "Predicting Ranking Multientrant Races: Road Cycling", "year": "2021-06" }, { "authors": "Jacob Devlin", "journal": "", "ref_id": "b5", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding", "year": "2019-10" }, { "authors": "Jeff Donahue", "journal": "Pmlr", "ref_id": "b6", "title": "DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition", "year": "2014-06" }, { "authors": "Ross Girshick", "journal": "IEEE Computer Society", "ref_id": "b7", "title": "Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation", "year": "2014-06" }, { "authors": "Bram Janssens; Matthias Bogaert", "journal": "", "ref_id": "b8", "title": "Imputation of Non-participated Race Results", "year": "2021-09" }, { "authors": "Bram Janssens; Matthias Bogaert; Mathijs Maton", "journal": "Annals of Operations Research", "ref_id": "b9", "title": "Predicting the next Pogačar: a data analytical approach to detect young professional cycling talents", "year": "2022-01" }, { "authors": "Leonid Kholkine; Tom De Schepper", "journal": "Springer International Publishing", "ref_id": "b10", "title": "A Machine Learning Approach for Road Cycling Race Performance Prediction", "year": "2020-09" }, { "authors": "Leonid Kholkine; Thomas Servotte", "journal": "Frontiers in Sports and Active Living", "ref_id": "b11", "title": "A Learn-to-Rank Approach for Predicting Road Cycling Race Outcomes", "year": "2021-10" }, { "authors": "Yehuda Koren", "journal": "Association for Computing Machinery", "ref_id": "b12", "title": "Factorization Meets the Neighborhood: A Multifaceted Collaborative Filtering Model", "year": "2008-08" }, { "authors": "Alex Krizhevsky; Ilya Sutskever; Geoffrey E Hinton", "journal": "Curran Associates, Inc", "ref_id": "b13", "title": "ImageNet Classification with Deep Convolutional Neural Networks", "year": "2012-12" }, { "authors": "Yuesen Li", "journal": "Journal of Sports Sciences", "ref_id": "b14", "title": "Characterizing Player's Playing Styles Based on Player Vectors for Each Playing Position in the Chinese Football Super League", "year": "2022-07" }, { "authors": "Ofir Magdaci", "journal": "", "ref_id": "b15", "title": "Embedding the Language of Football Using NLP", "year": "2021-06" }, { "authors": "Tomas Mikolov", "journal": "", "ref_id": "b16", "title": "Efficient Estimation of Word Representations in Vector Space", "year": "2013-01" }, { "authors": " Mortirolo", "journal": "", "ref_id": "b17", "title": "Cycling prediction method", "year": "2019-07" }, { "authors": "Evangelos Papalexakis; Konstantinos Pelechrinis", "journal": "Association for Computing Machinery", "ref_id": "b18", "title": "THoops: A Multi-Aspect Analytical Framework for Spatio-Temporal Basketball Data", "year": "2018-10" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "GloVe: Global Vectors for Word Representation", "year": "2014-10" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "Computational and Biological Learning Society", "ref_id": "b20", "title": "Very Deep Convolutional Networks for Large-Scale Image Recognition", "year": "2015-04" }, { "authors": " Van Bulck; Arthur David; Dries Vande Weghe; Goossens", "journal": "Annals of Operations Research", "ref_id": "b21", "title": "Result-based talent identification in road cycling: Discovering the next Eddy Merckx", "year": "2021-10" }, { "authors": "Öznur Yılmaz; Şule Ilayda; Ögüdücü Gündüz", "journal": "Association for Computing Machinery", "ref_id": "b22", "title": "Learning Football Player Features Using Graph Embeddings for Player Recommendation System", "year": "2022-04" } ]
[ { "formula_coordinates": [ 4, 150.03, 169.15, 391.1, 29.69 ], "formula_id": "formula_0", "formula_text": "L(R, S, y) = 1 N N ∑ i=1 y i log(σ (R r(i) • S s(i) )) + (1 -y i ) log(1 -σ (R r(i) • S s(i) )) (1)" } ]
10.1007/978-3-030-58452-8_24
2023-09-29
[ { "figure_ref": [ "fig_6" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b6", "b7", "b8", "b9", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b13", "b18", "b19", "b20" ], "table_ref": [], "text": "N EURAL Radiance Fields (NeRF) [1] has demonstrated excellent results in reconstructing 3D scenes, and recent works [2], [3], [4], [5], [6] have aimed to extend its capabilities to editing 3D scenes. One of the essential editing operations is removing objects from a 3D scene, which has garnered significant interest from the research community [7], [8], [9], [10], [11]. However, the practical application of this task faces several challenges. One of the most significant obstacles is the accurate localization of unwanted objects. Although it is natural for humans to identify unwanted objects, asking users to label every view is impractical. Additionally, ensuring multiview consistency and plausible content after deletion without any ground truth is not trivial.\nSeveral works have tried to address the above problems but remain unsatisfactory. Object-NeRF [7] and ObjectSDF [8] decompose the NeRF training into background and objects branches, allowing them to render specified objects controlled by object ID. However, because of the lack of supervision for the removed part, neither of these works can guarantee a plausible completion at the removal area. NeRF-Object-Removal [9] and SPIn-NeRF [10] use the 2D inpainting method LaMa [12] to generate color and depth priors after deletion and reconstruct NeRF from these priors directly. Although editing quality has improved, NeRF-Object-Removal requires all views' masks to realize, while SPIn-NeRF uses a series of segmentation preliminaries [13], [14], [15], [16] which even involves network training to generate masks for each scene with intensive time. DFFs [17] applies pre-trained language models [18], [14] to enable text-prompt editing by training NeRF to align feature vectors extracted from language models, eliminating the need for masks. However, it has difficulty locating regions to remove if the pre-trained object detector does not work appropriately.\nIn this paper, we propose a novel pipeline called OR-NeRF that enables free object removal from 3D scenes using either points or text prompts on a single image, requiring less time for multiview segmentation and achieving better performance than previous methods. To spread the points prompt on a single view to other views, we introduce a point projection strategy that utilizes the COLMAP [19] sparse reconstruction to find correspondences from 2D points to 3D sparse point cloud and further projects 3D points to all 2D images with camera parameters. This results in precise sparse points annotations for all scene views, which can be directly input to a recent 2D segmentation model Segment-Anything (SAM) [20] to predict masks. Generated at approximately two frames per second on an RTX 3090 GPU, our algorithm outperforms previous works like SPIn-NeRF, requiring minutes. Following the approach of NeRF-Object-Removal and SPIn-NeRF, we use the 2D inpainting model LaMa to get color priors for the removal area. We develop our scene object removal algorithm using TensoRF [21] as the backbone with depth supervision and perceptual loss. TensoRF is a SOTA model for improving rendering quality considering time and performance tradeoffs. This approach enables us to reconstruct the 3D scene after object removal with superior editing quality compared to existing methods. We evaluate our method on various datasets and analyze its performance in multiview segmentation and scene object removal through quality and quantity analyses. In summary, our contributions are (1) A novel pipeline for efficient object removal from 3D scenes, allowing for both points and text prompts on a single image, and (2) Experimental results Or input text prompt: Statue Fig. 1. An overview of our OR-NeRF's framework. We start with sparse images and either points or text prompts. If a text prompt is used, we convert it into a points prompt by sampling points from the initial mask estimated using Grounded-SAM (Sec IV-A2). Next, we propagate the points annotations to all views by projecting them from 2D to 3D point cloud and back to 2D (Sec IV-A1). We utilize SAM to predict masks using these point annotations. LaMa is used to obtain color and depth priors. Finally, the scene after removal is reconstructed using Neural Radiance Fields supervised by color (Eq (3)), depth (Eq (5)), and perceptual (Eq (6)) cues simultaneously (Sec IV-B).\ndemonstrate that our method achieves better editing quality and requires less time for multiview segmentation than previous methods, as evidenced by both quality and quantity analyses." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Multiview Segmentation", "publication_ref": [ "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b15", "b9", "b15", "b0", "b9", "b12", "b13", "b11" ], "table_ref": [], "text": "Though segmentation in 2D [22], [23] is well studied, multiview segmentation for 3D scenes [24], [25] has received less attention despite its non-negligible importance for downstream applications like 3D editing. Several self-supervised methods [26], [27] have been proposed, but they often produce inaccurate masks and have difficulty handling complex scenes. To mitigate these challenges, semi-supervised strategies [28], [29], [16], [10] have emerged that require partial annotations, or reasonable prompts from users. Semantic NeRF [16] propagates partial labels to dense semantic segmentation by leveraging a few in-place annotations via predicting semantic labels with volume rendering. Like NeRF [1], SPIn-NeRF [10] further constructs a thorough pipeline to generate masks for all views with points prompt on a single image. They use oneshot segmentation [13] to estimate an initial mask, followed by a video segmentation [14], [12] to generate masks for all views by treating the image sequence as a video. Finally, they refine the masks using Semantic NeRF. However, the above approaches require network training, which consumes considerable resources and does not guarantee an accurate mask, as errors can accumulate with complicated frameworks." }, { "figure_ref": [], "heading": "B. Scene Object Removal", "publication_ref": [ "b29", "b30", "b31", "b32", "b33", "b4", "b1", "b2", "b3", "b34", "b35", "b10", "b7", "b6", "b36", "b8", "b9", "b37", "b16", "b38", "b39", "b40", "b41", "b42", "b43", "b7", "b6", "b36", "b8", "b11", "b16", "b42", "b10", "b13", "b17", "b44", "b45" ], "table_ref": [], "text": "NeRF has greatly facilitated the area of 3D scene editing and research [30], [31], [32], [33] focuses on various editing types emerging in large numbers. Works exist for texture editing [34], [5], geometry editing [2], [3], [4], and object-centred editing [35], [36], [11], [8], [7], such as removal [37], [9], [10], and even enabling multiple manipulations [38], [17], [39], [40], [41], [42], [43], [44]. Object-NeRF [8] and ObjSDF [7] decompose NeRF training into background and object branches, allowing for rendering specified objects controlled by assigned object IDs. However, they generate 'black holes' at the removal region as there is no supervision or priors for the deletion part during training. NeRF-In [37], NeRF-Object-Removal [9], and SPIn-NeRF utilize the 2D inpainting method LaMa [12] to obtain priors for the removal part and directly reconstruct the scene after deletion from these priors. Though achieving better rendering quality, these methods demand high preconditions, such as annotating or generating masks for all views, which rely on expensive time costs and hardware resources. Additionally, [17], [43], [11] combine pre-trained language models [14], [18], [45], [46] to enable text editing, thus bypassing the requirement for masks. Still, the rendering quality in the removal region is poor, as no algorithms are designed for learning pixel values after deletion." }, { "figure_ref": [], "heading": "III. BACKGROUND", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Neural Radiance Fields", "publication_ref": [], "table_ref": [], "text": "Given 3D location x = (x, y, z) and 2D viewing direction d = (θ, ϕ), NeRF models the 3D scene implicitly with an MLP network which gives a mapping function F Θ : (x, d) → (c, σ). The output c stands for the radiance and σ for volume density, respectively. To optimize weights Θ, the volume rendering approach is introduced as:\nC(r) = t f tn T (t)σ(r(t))c(r(t), d)dt,\nwhere\nT (t) = exp - t tn σ(r(s))ds .(1)\nIn Eq (1), C(r) represents the pixel value and is calculated by integrating the radiance value c along the ray r(t) = o + td starting from the camera position o with direction d pointing to the pixel, within near and far bounds t n and t f . The function T (t) denotes the accumulated transmittance along the ray from t n to t. NeRF trains the network with the above definitions by minimizing the total squared error between rendered pixels and ground truth." }, { "figure_ref": [ "fig_6" ], "heading": "B. SPIn-NeRF", "publication_ref": [ "b12", "b13", "b14", "b15", "b11", "b46", "b47", "b12", "b14", "b13", "b15", "b14", "b13", "b48", "b19", "b11", "b18", "b49" ], "table_ref": [], "text": "SPIn-NeRF proposes a comprehensive pipeline for removing objects from 3D scenes. In addition to a set of sparse view images with their corresponding camera parameters, SPIn-NeRF takes a few points on one view annotated by users, indicating the unwanted objects as a prompt. With these inputs, SPIn-NeRF first combines a series of segmentation methods [13], [14], [15], [16] to obtain masks for all views. Then, a 2D image inpainting model LaMa [12] is used to generate color and depth priors in the mask area. The scene after deletion can be reconstructed with a modified version of vanilla NeRF from these priors directly, which adds depth supervision [47] and perceptual loss [48] to constrain the geometry and appearance consistency across different views.\nIn the mask generation stage, an initial mask from the single-view annotation is obtained using the one-shot segmentation [13] method. The video segmentation approach [15], [14] that follows provides coarse masks for all views by wrapping images into a video sequence. Finally, the coarse masks are fine-tuned to generate proper masks for all views by fitting the Semantic NeRF [16]. This procedure even requires training the Semantic NeRF from scratch to refine coarse masks obtained from [15], [14], resulting in significant costs in terms of time and hardware. IV. METHOD Considering a set of sparse-view images with their corresponding camera poses waiting to be edited, our method requires users to provide either points or text prompts indicating the unwanted objects for only one image. Possible prompts can be a few points marked on the object or words describing the target. To begin with, we find the masks of unwanted objects in all images. We spread the initial points prompts to all images for points input according to 3D geometry match relationships (Sec IV-A1). While for text input, we first acquire Grounded-SAM [49] to make an initial mask for the annotated single view followed by sampling points in this initial mask to switch text prompts to the points-prompt pattern (Sec IV-A2).\nTo continue, we utilize the SAM model [20] to predict masks with points prompt and use masks to guide a 2D inpainting model LaMa [12] to generate color and depth priors. Finally, we describe our object-removing strategy, guaranteeing geometry and appearance consistency across all the views (Sec IV-B). Fig 1 shows an overview of our framework for removing objects from 3D scenes with points or text prompts.\nA. Multiview Segmentation 1) Points Prompt: Suppose we have a group of n images I = {I i } n i=1 and their corresponding camera parameters C = {C i } n i=1 gathered from a 3D scene. We aim to predict masks for all views I from only one-shot annotation. An intuitive approach to this question is to generate annotations for other images. We carefully investigate the 3D geometry matching relation in 3D scenes and find that a 2D point on a certain perspective can be spread to other views by projecting it back to 3D space and then to 2D planes under a certain camera pose. For 2D to 3D pass, we can refer to the sparse point cloud reconstructed by COLMAP [19] and its projected discrete points group D = {D i } n i=1 on all 2D images. This information is represented by a certain data structure in COLMAP's sparse reconstruction as a unique one-to-one mapping, which allows us to locate points in 3D space by simply querying with 2D coordinates. However, this introduces a new problem: finding a mapping for the user's arbitrary input is not guaranteed as the reconstruction is sparse. We can solve this question by making a query with the existing nearest points in the discrete points set D. Finally, for the 3D to 2D reverse pass, we reproject 3D points back to the 2D plane under a certain view through its corresponding camera matrices. Now, we can spread the initial annotation provided by users to all other views safely and quickly as this algorithm utilizes 3D information, which is self-consistent and does not involve any neural network training. Only matrices computation is needed, and the algorithm can achieve a speed of about two frames per second for generating masks.\nSpecifically, we leverage the 3D geometry correspondence to calculate all views' annotation P 2d = {P ij } n m i=1 j=1 from the only prompt P 1 provided by users and here P ij = (x ij , y ij ), while m stands for the number of points marked in an image. With P 2d , we can obtain masks M = {M i } n i=1 for all views easily from SAM model F S by making inferences as M = F S (I, P 2d ). To realise this, we first initialise M 1 with F S (I 1 , P 1 ). Then we acquire points\nP 3d = {(x k , y k , z k )} l k=1 in 3D space by querying 2D coordinates D * 1 = (M 1 ∩ D 1\n). Note l equals the number of points in D * 1 and we only refer to the points belonging to the mask M 1 as we need to constrain the points annotation for all views precisely match the unwanted objects. In practice, the nearest points are calculated after 3D points have been projected to 2D planes to ensure the amount and quality of prompts.\nConsidering the 3D to 2D situation, we begin with camera parameters. For each view I i , the associated camera parameters C i = {K i , P i } is composed of the intrinsics K and extrinsics P = [R|t]. Here, the extrinsic matrix P is represented by a 3 × 3 rotation matrix R (camera orientation) and a 3 × 1 translation vector t (camera position) that together transform the 3D point from the world coordinate system P w = [X w , Y w , Z w ] T to the camera coordinate system\nP c = [X c , Y c , Z c ] T = RP w + t.\nBy substituting P 3d to P w , we can switch 3D points P 3d from the world coordinate system to camera coordinate system for all views simply as P * 3d = {(x ik , y ik , z ik )} n l i=1 k=1 = R i P 3d + t i . Here P * 3d denotes the camera coordinate system form. And with one little step forward:\nP * 2d = {P ik } n l i=1 k=1 = { x ik z ik , y ik z ik } n l i=1 k=1 ,\nwhere (x, y, z)\n∈ P * 3d ,(2)\nwe project 3D points P * 3d back to all 2D views to get corresponding pixel coordinates P * 2d in images. Now, we need to filter the number of points in each image from k to m. To handle this issue, we spread the initial annotation P 1 to all views by performing the above 2D-3D-2D projection to D * 1 similarly to get a P ′ 2d = {P ij } n m i=1 j=1 and find the m nearest points to P ′ 2d in P * 2d to construct P 2d . We keep the number of points the same as the user input in each view to ensure mask quality. By far, we get all the annotations required for the prediction of SAM, and we can gain masks for all views by calling M = F S (I, P 2d ).\n2) Text Prompt: We leverage SAM's variety Grounded-SAM for the text prompt, which combines an object detector Grounding DINO [50] who can handle text input. A natural way to deal with text is by asking Grounded-SAM to predict all the views' masks with the same text input. However, we observe a considerable speed drop in inference when comparing Grounding DINO to SAM. Meanwhile, Grounded-SAM can fail to handle some 'difficult' views due to Grounding DINO's limited detection ability. Therefore, we consider a two-stage strategy where we first use Grounded-SAM to obtain an initial mask for the single view and then sample points from this mask. Finally, we use the points prompt method in Sec IV-A1 to generate masks for the remaining views. This design ensures high-quality masks while minimizing computational costs.\nRegarding m words T = {T j } m j=1 input from the user that describe the unwanted objects. For input words sequence T and images I pairs, Grounding DINO model F G takes the prompt T as labels and tries to find these labels' corresponding bounding boxes B = {B ij } n m i=1 j=1 in images I as B = F G (I, T ). As SAM is capable of two kinds of inputs, points or boxes, we can obtain the mask M 1 of unwanted objects in the user's annotated image I 1 simply by forwarding SAM with M 1 = F S (I 1 , B 1 ). With the one-shot mask M 1 , we sample a set of k points P = {P k = (x k , y k )} q k=1 from this mask to make the problem solvable by the points prompt method (Sec IV-A1). To implement this, we traverse the points in the mask from left to right and up to down and choose the top left, bottom right point, and center point of the mask to construct the points prompt P. Then, text prompt input has been converted into points prompt, and we let the algorithm used for points prompt in Sec IV-A1 generate masks for all views." }, { "figure_ref": [], "heading": "B. Scene Object Removal", "publication_ref": [ "b46", "b47" ], "table_ref": [], "text": "Once we get object masks for all views, we can reconstruct a 3D scene without unwanted objects through Neural Radiance Fields by treating 2D inpainting priors as ground truth. Recall Sec III-A, the network can be optimized by minimizing the color loss:\nL c = Σ r∈R || Ĉ(r) -C(r)|| 2 2 , (3\n)\nwhere R is the set of rays in each training batch, Ĉ(r) are the ground truth and C(r) are the rendered pixels by network outputs calculated through Eq (1), respectively. However, relying solely on color loss is inadequate, as LaMa does not consider the 3D context, leading to inconsistency across different views. To address this issue, we introduce depth constraints [47] into the training of Neural Radiance Fields. Depth values D(r) can be obtained through volume rendering easily as:\nD(r) = t f tn T (t)σ(r(t))zdt,\nwhere\nT (t) = exp - t tn σ(r(s))ds . (4\n)\nwhere z is the distance from the current 3D location to the camera position. Like RGB images, we render depth images for the original scene without deletion and use LaMa to get depth priors. Then, we add depth supervision to training as:\nL d = Σ r∈R || D(r) -D(r)|| 2 2 ,(5)\nwhere D(r) are the depth ground truth. We further discuss the difference between using the whole-depth image as supervision and only querying the depth in the mask area in Sec V-E.\nIn addition, we recognize that depth supervision alone only enforces geometric consistency across views, while the appearance may still exhibit inconsistency. To address this, we incorporate perceptual loss [48] to guide the network in learning a plausible color distribution within the masked region, matching the style of the inpainted color priors. We focus the perceptual loss specifically on the masked area. This is because color loss alone is sufficient for the non-masked area, as pixel values do not change after the deletion in this area. It is important to note that the perceptual loss is designed at the image level. In our implementation, we refer to the patch-level implementation from SPIn-NeRF, represented by the following equation:\nL p = 1 B Σ i∈B LPIPS( Î(r), I(r)) ,\nwhere\nI(r) = Σ r∈P C(r) ,(6)\nand adjust the patch sampling strategy to fit a variety of data used in our Experiments (Sec V-A). In Equation ( 6), we first sample a patch P from the mask and calculate the mean square error between the rendered pixels I(r) and the ground truth Î(r) for the pixels within the patch P. Batch training with a size of B can be employed. Finally, the training objective is to minimize the total loss L defined as:\nL = a * L c + b * L d + c * L p ,(7)\nwhere a, b, and c are tunable loss weights for the color, depth, and perceptual loss, respectively." }, { "figure_ref": [], "heading": "V. EXPERIMENTS A. Datasets", "publication_ref": [ "b50", "b51" ], "table_ref": [], "text": "We select 12 scenes from various commonly used 3D reconstruction datasets, including NeRF LLFF data, IBRNet data [51], and LLFF real-world data [52]. Our scene selection aims to cover a wide range of scene variations and different types of removal operations, such as slogans, providing a high degree of flexibility. Since the reconstruction datasets do not provide ground truth for evaluation, we incorporate the SPIn-NeRF dataset, which includes human-annotated object masks and scene capture after object removal. We use all 10 scenes from the SPIn-NeRF dataset to evaluate the quality of multiview segmentation. To evaluate scene object removal's performance, we select 8 scenes, excluding two duplicate scenes, to ensure a diverse layout of the objects. To conclude, we conducted experiments on 20 scenes, comprehensively evaluating our OR-NeRF pipeline." }, { "figure_ref": [], "heading": "B. Metrics", "publication_ref": [ "b9", "b52", "b53" ], "table_ref": [], "text": "We adopt the evaluation metrics commonly used in segmentation tasks, including pixel-wise accuracy (Acc) and intersection over union (IoU), to assess the performance of our multiview segmentation algorithm. We report peak signalto-noise ratio (PSNR), a widely used 3D reconstruction metric for the scene object removal component. Additionally, we include two metrics used by SPIn-NeRF [10]: the learned perceptual image patch similarity (LPIPS) [53] and the Fréchet inception distance (FID) [54]. These metrics compare the similarity between the ground-truth data and the rendering outputs produced by our method." }, { "figure_ref": [], "heading": "C. Experiments Settings 1) Multiview Segmentation:", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We conduct experiments using points and text prompts on our selected scenes and evaluate the results using metrics stated in Sec V-B. Since the implementation details of multiview segmentation were not explicitly provided in the SPIn-NeRF paper, we directly utilize the metrics mentioned in their paper. However, it should be noted that the paper does not specify which scenes were used for calculating these metrics. Therefore, we compare the performance of SPIn-NeRF with our scene-average results. Subsequently, we utilize the masks generated from the points prompt for all subsequent experiments.\n2) Scene Object Removal: We conducted experiments on all 20 scenes with ours and the SPIn-NeRF methods. Both vanilla NeRF and TensoRF architectures are tested with our method's implementation. We follow the implementation of the SPIn-NeRF to reproduce their results. For NeRF and TensoRF, we train the original scenes to render depth maps instead of disparity maps used in SPIn-NeRF. This decision is made to avoid errors from dividing by zero when calculating disparities. Table I compares mask generation between our method and SPIn-NeRF. Our approach outperforms SPIn-NeRF regarding accuracy and IoU. SPIn-NeRF's mask generation process involves a complex pipeline that introduces errors at each step and requires significant time and hardware resources. In contrast, our method leverages the simplicity of SAM and involves minimal matrix calculations. Consequently, our multiview segmentation algorithm delivers superior-quality results in less time. Table II shows our estimated time for mask generation compared to SPIn-NeRF." }, { "figure_ref": [ "fig_5", "fig_9", "fig_4" ], "heading": "D. Multiview Segmentation", "publication_ref": [], "table_ref": [ "tab_0", "tab_2", "tab_2", "tab_2" ], "text": "Note that we have excluded the 'book' scene from the average calculation. This decision was made because we have identified inaccuracies in the ground truth labels for this particular scene, as evident from Fig 4 . Furthermore, as depicted in Fig 4, our segmentation results exhibit precise coverage of the target objects with intricate details, such as the crossing chair legs in the '12' scene. However, it should be noted that there is a minor flaw in the 'trash' scene where our masks fail to cover all areas of the trash cans, explaining the low metrics in Table I. This does not significantly affect the subsequent experiments if refined with our strategy. demonstrate that the add-ons improve SPIn-NeRF performance. While the results involve complex numbers, we adopt Ours-TensoRF with perceptual loss as it performs best overall. Although Table III does not provide strong evidence for the efficacy of depth supervision and perceptual loss, we will discuss the real significance of these add-ons in the following section.\n2) Quality: We first compare the three methods' overall rendering quality in this part. Ours-NeRF and Ours-TensoRF produce clear outputs, while SPIn-NeRF suffers from blurry due to the noisy disparity maps, which provide inaccurate geometry supervision. This can be observed by Fig 6.\nNext, we discuss the impact of depth supervision. Although widely used in training, there is a lack of exploration of the difference between using the entire depth image as supervision and only applying depth loss in the masked area. Fig 5 indicates that full-depth supervision is necessary and irreplaceable, as both partial depth and direct training settings in all three architectures produce inconsistent depth results, resulting in different extents of restoring removed objects. However, it is worth noting that the depth loss does not show a visible difference in the rendered views, which aligns with the metrics presented in Table III.\nMoving on to the perceptual loss aspect, we conclude from Fig 7 that this loss has a positive effect but falls short of guaranteeing a plausible completion for the masked area. This also explains the relatively ineffective metrics in Table III, as our results exhibit a significant gap with the ground truth. Finally, part of our editing results are displayed in Fig 3." }, { "figure_ref": [], "heading": "VI. CONCLUSIONS AND DISCUSSIONS", "publication_ref": [ "b54", "b55", "b56", "b57", "b58", "b59", "b60" ], "table_ref": [], "text": "This paper presents a novel pipeline OR-NeRF for object removal from 3D scenes, requiring only points or text prompts on a single view. We emphasize the advantages of our method in terms of rendering quality and time efficiency. Potential limitations exist due to the inpainting model's capability and more robust 2D image inpainting techniques, such as diffusion [55], [56], [57], [58], [59], [60], [61] based methods can be applied to achieve more plausible completions after object removal." } ]
The emergence of Neural Radiance Fields (NeRF) for novel view synthesis has increased interest in 3D scene editing. An essential task in editing is removing objects from a scene while ensuring visual reasonability and multiview consistency. However, current methods face challenges such as timeconsuming object labeling, limited capability to remove specific targets, and compromised rendering quality after removal. This paper proposes a novel object-removing pipeline, named OR-NeRF, that can remove objects from 3D scenes with user-given points or text prompts on a single view, achieving better performance in less time than previous works. Our method spreads user annotations to all views through 3D geometry and sparse correspondence, ensuring 3D consistency with less processing burden. Then recent 2D segmentation model Segment-Anything (SAM) is applied to predict masks, and a 2D inpainting model is used to generate color supervision. Finally, our algorithm applies depth supervision and perceptual loss to maintain consistency in geometry and appearance after object removal. Experimental results demonstrate that our method achieves better editing quality with less time than previous works, considering both quality and quantity.
OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields
[ { "figure_caption": "Fig 1 shows an overview of our OR-NeRF framework.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "views and points prompt", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig.2. Comparison of mask generation between SPIn-NeRF[10] (first row) and ours (second row). Our method generates masks rapidly and precisely for all views in a single step, supporting points, and text input. In contrast, SPIn-NeRF exhibits slower speed, lower accuracy, and limited support for points prompt only, requiring three steps, including network training.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig 2 shows the difference in mask generation between our pipeline and SPIn-NeRF.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Editing results of our OR-NeRF method demonstrating various examples. Please zoom in to observe better.Points PromptText Prompt Ground Truth Imaegs", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Mask generation results of our OR-NeRF. The figure shows the masks generated for two scenes: 'book' (up) and '12' (below) from the SPIn-NeRF dataset. From left to right are the original image, ground truth mask, masks generated with points, and text prompt.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "E. Scene Object Removal 1 )1Quantity: TableIIIpresents our results for scene object removal. Regarding overall rendering quality, Ours-NeRF hibits a superior FID compared to SPIn-NeRF but performs worse regarding PSNR and LPIPS. On the other hand, Ours-TensoRF outperforms SPIn-NeRF in terms of FID and LPIPS scores but has a weakness in PSNR. Analyzing the impact of the loss models, it appears that the additional components for training Neural Radiance Fields do not have a significantly positive effect. Ours-NeRF and ours-TensoRF exhibit a similar pattern where depth supervision and perceptual loss increase PSNR but show no positive influence on FID and LPIPS.Interestingly, SPIn-NeRF behaves somewhat differently: removing perceptual loss and depth supervision from the SPIn-NeRF pipeline results in a subtle increase in PSNR compared to the original version. However, the FID and LPIPS scores", "figure_data": "", "figure_id": "fig_6", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .Fig. 6 .56Fig. 5. The effect of depth supervision. We can see from the figure that either without depth supervision (left) or training with partial depth (middle) leads to geometry inconsistency. While supervised by all-depth images (right) convergent to a consistent result.", "figure_data": "", "figure_id": "fig_7", "figure_label": "56", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The effect of perceptual loss. The top is Ours-TensoRF trained directly, and the bottom is Ours-TensoRF with perceptual loss. We can see from the figure that this loss has some influence but is still unsatisfactory.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "OF MASK GENERATION BETWEEN OUR METHOD AND SPIN-NERF. THE FIRST ROW INDICATES THE SCENE NAME IN THE SPIN-NERF DATASET, WHILE 'POINTS' AND 'TEXT' DENOTE THE PROMPTS MODE USED, RESPECTIVELY.", "figure_data": "1234791012trashMeanSPIn-NeRFpointsacc↑ IoU↑99.80 96.7799.82 99.73 99.79 99.81 99.78 99.87 99.30 99.51 96.47 97.48 98.50 97.43 96.29 95.47 91.73 88.6899.71↑ 95.42↑98.91 91.66textacc↑ IoU↑99.81 96.8199.82 99.73 99.80 99.81 99.78 99.86 99.25 99.51 96.51 97.47 98.51 97.43 96.41 95.41 91.19 88.6499.71↑ 95.38↑98.91 91.66", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "RESULTS ON SCENE OBJECT REMOVAL. THE FIRST ROW INDICATES THE METHOD NAME, WHILE ABBREVIATIONS IN THE SECOND ROW INDICATE LOSS MODULES. 'DIR' DENOTES TRAINING NEURAL RADIANCE FIELDS WITH LAMA PRIORS DIRECTLY, 'DP' DENOTES PARTIAL DEPTH, 'DA' DENOTES ALL DEPTH, AND 'LPIPS' DENOTES THE USE OF PERCEPTUAL LOSS. NOTABLY, PERCEPTUAL LOSS IS ALWAYS APPLIED WITH ALL-DEPTH SUPERVISION ENABLED.", "figure_data": "Ours-NeRFOurs-TensoRFSPIn-NeRFdirdpdalpipsdirdalpipsdirdalpipsPSNR↑14.0414.0414.1614.1613.9314.0414.0314.8514.8214.83FID↓61.1165.2164.7158.1553.2864.2959.7470.0270.0767.26LPIPS↓0.6834 0.6893 0.70220.67630.6370 0.64940.62730.6810 0.6752 0.6506Ours-NeRF+dirOurs-NeRF+dpOurs-NeRF+da", "figure_id": "tab_2", "figure_label": "III", "figure_type": "table" } ]
Youtan Yin; Zhoujie Fu; Fan Yang; Guosheng Lin; Zhoujie Youtan Yin; Fan Fu; Guosheng Yang; Lin
[ { "authors": "B Mildenhall; P P Srinivasan; M Tancik; J T Barron; R Ramamoorthi; R Ng", "journal": "", "ref_id": "b0", "title": "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", "year": "2020" }, { "authors": "Y Peng; Y Yan; S Liu; Y Cheng; S Guan; B Pan; G Zhai; X Yang", "journal": "", "ref_id": "b1", "title": "CageNeRF: Cage-based Neural Radiance Field for Generalized 3D Deformation and Animation", "year": "2022" }, { "authors": "T Xu; T Harada", "journal": "", "ref_id": "b2", "title": "Deforming Radiance Fields with Cages", "year": "2022" }, { "authors": "Y.-J Yuan; Y.-T Sun; Y.-K Lai; Y Ma; R Jia; L Gao", "journal": "", "ref_id": "b3", "title": "NeRF-Editing: Geometry Editing of Neural Radiance Fields", "year": "2022" }, { "authors": "F Xiang; Z Xu; M Hašan; Y Hold-Geoffroy; K Sunkavalli; H Su", "journal": "", "ref_id": "b4", "title": "NeuTex: Neural Texture Mapping for Volumetric Neural Rendering", "year": "2021" }, { "authors": "B Yang; C Bao; J Zeng; H Bao; Y Zhang; Z Cui; G Zhang", "journal": "", "ref_id": "b5", "title": "NeuMesh: Learning Disentangled Neural Mesh-Based Implicit Field for Geometry and Texture Editing", "year": "2022" }, { "authors": "B Yang; Y Zhang; Y Xu; Y Li; H Zhou; H Bao; G Zhang; Z Cui", "journal": "", "ref_id": "b6", "title": "Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering", "year": "2021" }, { "authors": "Q Wu; X Liu; Y Chen; K Li; C Zheng; J Cai; J Zheng", "journal": "", "ref_id": "b7", "title": "Object-Compositional Neural Implicit Surfaces", "year": "2022" }, { "authors": "S Weder; G Garcia-Hernando; A Monszpart; M Pollefeys; G Brostow; M Firman; S Vicente", "journal": "", "ref_id": "b8", "title": "Removing Objects From Neural Radiance Fields", "year": "2022" }, { "authors": "A Mirzaei; T Aumentado-Armstrong; K G Derpanis; J Kelly; M A Brubaker; I Gilitschenski; A Levinshtein", "journal": "", "ref_id": "b9", "title": "SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields", "year": "2023" }, { "authors": "R Goel; D Sirikonda; S Saini; P J Narayanan", "journal": "", "ref_id": "b10", "title": "Interactive Segmentation of Radiance Fields", "year": "2023" }, { "authors": "R Suvorov; E Logacheva; A Mashikhin; A Remizova; A Ashukha; A Silvestrov; N Kong; H Goka; K Park; V Lempitsky", "journal": "", "ref_id": "b11", "title": "Resolution-robust Large Mask Inpainting with Fourier Convolutions", "year": "2022" }, { "authors": "Y Hao; Y Liu; Z Wu; L Han; Y Chen; G Chen; L Chu; S Tang; Z Yu; Z Chen; B Lai", "journal": "", "ref_id": "b12", "title": "EdgeFlow: Achieving Practical Interactive Segmentation with Edge-Guided Flow", "year": "2021" }, { "authors": "M Caron; H Touvron; I Misra; H Jegou; J Mairal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b13", "title": "Emerging Properties in Self-Supervised Vision Transformers", "year": "2021" }, { "authors": "T Zhou; F Porikli; D J Crandall; L Van Gool; W Wang", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b14", "title": "A survey on deep learning technique for video segmentation", "year": "2023" }, { "authors": "S Zhi; T Laidlow; S Leutenegger; A J Davison", "journal": "", "ref_id": "b15", "title": "In-Place Scene Labelling and Understanding with Implicit Scene Representation", "year": "2021" }, { "authors": "S Kobayashi; E Matsumoto; V Sitzmann", "journal": "", "ref_id": "b16", "title": "Decomposing NeRF for Editing via Feature Field Distillation", "year": "2022" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark; G Krueger; I Sutskever", "journal": "", "ref_id": "b17", "title": "Learning Transferable Visual Models From Natural Language Supervision", "year": "2021" }, { "authors": "J L Schönberger; T Price; T Sattler; J.-M Frahm; M Pollefeys", "journal": "", "ref_id": "b18", "title": "A vote-and-verify strategy for fast spatial verification in image retrieval", "year": "2016" }, { "authors": "A Kirillov; E Mintun; N Ravi; H Mao; C Rolland; L Gustafson; T Xiao; S Whitehead; A C Berg; W.-Y Lo; P Dollár; R Girshick", "journal": "", "ref_id": "b19", "title": "Segment Anything", "year": "2023" }, { "authors": "A Chen; Z Xu; A Geiger; J Yu; H Su", "journal": "", "ref_id": "b20", "title": "TensoRF: Tensorial Radiance Fields", "year": "2022" }, { "authors": "Z Qiu; T Yao; T Mei", "journal": "IEEE Transactions on Multimedia", "ref_id": "b21", "title": "Learning deep spatio-temporal dependence for semantic video segmentation", "year": "2018" }, { "authors": "L Wang; C Jung", "journal": "IEEE Transactions on Multimedia", "ref_id": "b22", "title": "Example-based video stereolization with foreground segmentation and depth propagation", "year": "2014" }, { "authors": "L Zhao; H Zhou; X Zhu; X Song; H Li; W Tao", "journal": "IEEE Transactions on Multimedia", "ref_id": "b23", "title": "Lif-seg: Lidar and camera image fusion for 3d lidar semantic segmentation", "year": "2023" }, { "authors": "A H Abdulnabi; B Shuai; Z Zuo; L.-P Chau; G Wang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b24", "title": "Multimodal recurrent neural networks with information transfer layers for indoor scene labeling", "year": "2018" }, { "authors": "Z Fan; P Wang; Y Jiang; X Gong; D Xu; Z Wang", "journal": "", "ref_id": "b25", "title": "NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scenes", "year": "2022" }, { "authors": "X Liu; J Chen; H Yu; Y.-W Tai; C.-K Tang", "journal": "", "ref_id": "b26", "title": "Unsupervised Multi-View Object Segmentation Using Radiance Field Propagation", "year": "2022" }, { "authors": "M Wallingford; A Kusupati; A Fang; V Ramanujan; A Kembhavi; R Mottaghi; A Farhadi", "journal": "", "ref_id": "b27", "title": "Neural Radiance Field Codebooks", "year": "2023" }, { "authors": "X Fu; S Zhang; T Chen; Y Lu; L Zhu; X Zhou; A Geiger; Y Liao", "journal": "", "ref_id": "b28", "title": "Panoptic NeRF: 3D-to-2D Label Transfer for Panoptic Urban Scene Segmentation", "year": "2022" }, { "authors": "C Bao; Y Zhang; B Yang; T Fan; Z Yang; H Bao; G Zhang; Z Cui", "journal": "", "ref_id": "b29", "title": "SINE: Semantic-driven Image-based NeRF Editing with Prior-guided Editing Field", "year": "2023" }, { "authors": "S Benaim; F Warburg; P E Christensen; S Belongie", "journal": "", "ref_id": "b30", "title": "Volumetric Disentanglement for 3D Scene Manipulation", "year": "2022" }, { "authors": "A Mikaeili; O Perel; D Cohen-Or; A Mahdavi-Amiri", "journal": "", "ref_id": "b31", "title": "SKED: Sketch-guided Text-based 3D Editing", "year": "2023" }, { "authors": "E Sella; G Fiebelman; P Hedman; H Averbuch-Elor", "journal": "", "ref_id": "b32", "title": "Vox-E: Text-guided Voxel Editing of 3D Objects", "year": "2023" }, { "authors": "Z Chen; K Yin; S Fidler", "journal": "", "ref_id": "b33", "title": "AUV-Net: Learning Aligned UV Maps for Texture Transfer and Synthesis", "year": "2022" }, { "authors": "K Rematas; R Martin-Brualla; V Ferrari", "journal": "", "ref_id": "b34", "title": "Sharf: Shapeconditioned Radiance Fields from a Single View", "year": "2021" }, { "authors": "H.-X Yu; L Guibas; J Wu", "journal": "", "ref_id": "b35", "title": "Unsupervised Discovery of Object Radiance Fields", "year": "2022" }, { "authors": "H.-K Liu; I.-C Shen; B.-Y Chen", "journal": "", "ref_id": "b36", "title": "NeRF-In: Free-Form NeRF Inpainting with RGB-D Priors", "year": "2022" }, { "authors": "V Lazova; V Guzov; K Olszewski; S Tulyakov; G Pons-Moll", "journal": "", "ref_id": "b37", "title": "Control-NeRF: Editable Feature Volumes for Scene Rendering Manipulation", "year": "2022" }, { "authors": "B Wang; L Chen; B Yang", "journal": "", "ref_id": "b38", "title": "DM-NeRF: 3D Scene Geometry Decomposition and Manipulation from 2D Images", "year": "2023" }, { "authors": "S Liu; X Zhang; Z Zhang; R Zhang; J.-Y Zhu; B Russell", "journal": "", "ref_id": "b39", "title": "Editing Conditional Radiance Fields", "year": "2021" }, { "authors": "J Zhu; Y Huo; Q Ye; F Luan; J Li; D Xi; L Wang; R Tang; W Hua; H Bao; R Wang", "journal": "", "ref_id": "b40", "title": "I$ 2$-SDF: Intrinsic Indoor Scene Reconstruction and Editing via Raytracing in Neural SDFs", "year": "2023" }, { "authors": "W Ye; S Chen; C Bao; H Bao; M Pollefeys; Z Cui; G Zhang", "journal": "", "ref_id": "b41", "title": "IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis", "year": "2023" }, { "authors": "A Mirzaei; Y Kant; J Kelly; I Gilitschenski", "journal": "", "ref_id": "b42", "title": "LaTeRF: Label and Text Driven Object Radiance Fields", "year": "2022" }, { "authors": "Z Kuang; F Luan; S Bi; Z Shu; G Wetzstein; K Sunkavalli", "journal": "", "ref_id": "b43", "title": "PaletteNeRF: Palette-based Appearance Editing of Neural Radiance Fields", "year": "2023" }, { "authors": "B Li; K Q Weinberger; S Belongie; V Koltun; R Ranftl", "journal": "", "ref_id": "b44", "title": "Language-driven Semantic Segmentation", "year": "2022" }, { "authors": "V Tschernezki; I Laina; D Larlus; A Vedaldi", "journal": "", "ref_id": "b45", "title": "Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations", "year": "2022" }, { "authors": "K Deng; A Liu; J.-Y Zhu; D Ramanan", "journal": "", "ref_id": "b46", "title": "Depth-supervised NeRF: Fewer Views and Faster Training for Free", "year": "2022" }, { "authors": "J Johnson; A Alahi; L Fei-Fei", "journal": "", "ref_id": "b47", "title": "Perceptual losses for real-time style transfer and super-resolution", "year": "2016" }, { "authors": "", "journal": "", "ref_id": "b48", "title": "Grounded-sam", "year": "2023" }, { "authors": "S Liu; Z Zeng; T Ren; F Li; H Zhang; J Yang; C Li; J Yang; H Su; J Zhu", "journal": "", "ref_id": "b49", "title": "Grounding dino: Marrying dino with grounded pretraining for open-set object detection", "year": "2023" }, { "authors": "Q Wang; Z Wang; K Genova; P Srinivasan; H Zhou; J T Barron; R Martin-Brualla; N Snavely; T Funkhouser", "journal": "", "ref_id": "b50", "title": "Ibrnet: Learning multi-view image-based rendering", "year": "2021" }, { "authors": "B Mildenhall; P P Srinivasan; R Ortiz-Cayon; N K Kalantari; R Ramamoorthi; R Ng; A Kar", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b51", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" }, { "authors": "R Zhang; P Isola; A A Efros; E Shechtman; O Wang", "journal": "", "ref_id": "b52", "title": "The Unreasonable Effectiveness of Deep Features as a Perceptual Metric", "year": "2018" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "", "ref_id": "b53", "title": "GANs trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "L Zhang; M Agrawala", "journal": "", "ref_id": "b54", "title": "Adding Conditional Control to Text-to-Image Diffusion Models", "year": "2023" }, { "authors": "A Haque; M Tancik; A A Efros; A Holynski; A Kanazawa", "journal": "", "ref_id": "b55", "title": "Instruct-NeRF2NeRF: Editing 3D Scenes with Instructions", "year": "2023" }, { "authors": "A Raj; S Kaza; B Poole; M Niemeyer; N Ruiz; B Mildenhall; S Zada; K Aberman; M Rubinstein; J Barron; Y Li; V Jampani", "journal": "", "ref_id": "b56", "title": "DreamBooth3D: Subject-Driven Text-to-3D Generation", "year": "2023" }, { "authors": "B Poole; A Jain; J T Barron; B Mildenhall", "journal": "", "ref_id": "b57", "title": "DreamFusion: Text-to-3D using 2D Diffusion", "year": "2023" }, { "authors": "Z Zhou; S Tulsiani", "journal": "", "ref_id": "b58", "title": "SparseFusion: Distilling View-conditioned Diffusion for 3D Reconstruction", "year": "2023" }, { "authors": "U Singer; S Sheynin; A Polyak; O Ashual; I Makarov; F Kokkinos; N Goyal; A Vedaldi; D Parikh; J Johnson; Y Taigman", "journal": "", "ref_id": "b59", "title": "Text-To-4D Dynamic Scene Generation", "year": "2023" }, { "authors": "S Cao; W Chai; S Hao; Y Zhang; H Chen; G Wang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b60", "title": "Difffashion: Reference-based fashion design with structure-aware transfer by diffusion models", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 83.45, 224.52, 150.49, 26.29 ], "formula_id": "formula_0", "formula_text": "C(r) = t f tn T (t)σ(r(t))c(r(t), d)dt," }, { "formula_coordinates": [ 3, 130.57, 247.95, 169.46, 31.31 ], "formula_id": "formula_1", "formula_text": "T (t) = exp - t tn σ(r(s))ds .(1)" }, { "formula_coordinates": [ 4, 48.96, 163.5, 250.56, 24.16 ], "formula_id": "formula_2", "formula_text": "P 3d = {(x k , y k , z k )} l k=1 in 3D space by querying 2D coordinates D * 1 = (M 1 ∩ D 1" }, { "formula_coordinates": [ 4, 48.96, 354.64, 133.93, 11.22 ], "formula_id": "formula_3", "formula_text": "P c = [X c , Y c , Z c ] T = RP w + t." }, { "formula_coordinates": [ 4, 120.79, 431.08, 106.04, 36.16 ], "formula_id": "formula_4", "formula_text": "P * 2d = {P ik } n l i=1 k=1 = { x ik z ik , y ik z ik } n l i=1 k=1 ," }, { "formula_coordinates": [ 4, 198.82, 451.92, 101.21, 29.14 ], "formula_id": "formula_5", "formula_text": "∈ P * 3d ,(2)" }, { "formula_coordinates": [ 4, 377.82, 398.51, 181.34, 13.14 ], "formula_id": "formula_6", "formula_text": "L c = Σ r∈R || Ĉ(r) -C(r)|| 2 2 , (3" }, { "formula_coordinates": [ 4, 559.16, 401.35, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 4, 346.12, 531.57, 118.35, 26.29 ], "formula_id": "formula_8", "formula_text": "D(r) = t f tn T (t)σ(r(t))zdt," }, { "formula_coordinates": [ 4, 393.93, 555, 165.24, 31.31 ], "formula_id": "formula_9", "formula_text": "T (t) = exp - t tn σ(r(s))ds . (4" }, { "formula_coordinates": [ 4, 559.16, 555, 3.87, 8.64 ], "formula_id": "formula_10", "formula_text": ")" }, { "formula_coordinates": [ 4, 376.84, 645.6, 186.2, 13.14 ], "formula_id": "formula_11", "formula_text": "L d = Σ r∈R || D(r) -D(r)|| 2 2 ,(5)" }, { "formula_coordinates": [ 5, 106.18, 298.99, 133.86, 22.31 ], "formula_id": "formula_12", "formula_text": "L p = 1 B Σ i∈B LPIPS( Î(r), I(r)) ," }, { "formula_coordinates": [ 5, 143.66, 312.85, 156.37, 20.83 ], "formula_id": "formula_13", "formula_text": "I(r) = Σ r∈P C(r) ,(6)" }, { "formula_coordinates": [ 5, 112.02, 431.32, 188, 9.65 ], "formula_id": "formula_14", "formula_text": "L = a * L c + b * L d + c * L p ,(7)" } ]
10.1145/3583780.3614938
2023-09-04
[ { "figure_ref": [ "fig_0", "fig_0", "fig_4", "fig_4" ], "heading": "INTRODUCTION", "publication_ref": [ "b18", "b48", "b15", "b23", "b36", "b41", "b35", "b46", "b22", "b2", "b3", "b7", "b11", "b26", "b29", "b49", "b10", "b31", "b5", "b13", "b38", "b16", "b50", "b1", "b50", "b16", "b16", "b38", "b16", "b8", "b11", "b49" ], "table_ref": [], "text": "Knowledge graphs (KGs) are widely used to store structured information and facilitate a broad range of downstream applications, such as question answering [18,48], dialogue systems [15], recommender systems [23,36,41], and information extraction [35,46]. A typical KG represents facts as triples in the form of (head entity, relation, tail entity), e.g., (Alice, IsBornIn, France). Despite their size, KGs suffer from incompleteness [22]. Therefore, knowledge graph completion (KGC), which is aimed at automatically predicting missing information, is a fundamental task for KGs. To address the KGC task, knowledge graph embedding (KGE) methods have been proposed and attracted increasing attention [2,3,7,11,26,29,49].\nPrevious KGE methods focus on transductive settings, requiring all entities to be observed during training. In real-world scenarios, however, KGs evolve dynamically since out-of-knowledgegraph (OOKG) entities emerge frequently [10]. For example, about 200 new entities are added to DBPedia on a daily basis [31]. Fig. 1 shows an example of the OOKG entity problem. Given the observed KG, \"sun\" is the newly added entity and there exists the auxiliary connection between \"sun\" and the known entity (i.e., (sun, sur-roundedBy, planets)). Based on observed and auxiliary facts, our goal is to embed OOKG entities and predict missing facts (e.g., (sun, attract, mass)). So far, to represent newly emerging entities, a timeconsuming retraining process over the whole KG is unavoidable for most conventional embedding methods. To address this issue, an inductive KGE framework is needed. Some previous work [5,13,38] represents OOKG entities using their observed neighborhood structures. These frameworks suffer from a data sparsity problem [16,50]. To address this sparsity issue, GEN [1] and HRFN [50] combine meta-learning frameworks with graph neural networks (GNNs) to simulate unseen entities during meta-training. But they utilize triples between unseen entities, which may be missing or extremely sparse in real-world scenarios. The VN network [16] alleviates the sparsity problem by inferring additional virtual neighbors (VNs) of the OOKG entities with logic rules and symmetric path rules.\nDespite these advances, current inductive knowledge embedding methods face the following two challenges:\nChallenge 1: Identifying inter-rule correlations. Previous methods for inductive knowledge embedding mainly focus on modeling one or two hop local neighborhood structures, or mining rules for the OOKG entities. Other complex patterns helpful for the predictions of missing facts, such as inter-rule correlations, are ignored. As shown in Fig. 1, the extracted logic rule (sun, surroundedBy, planets) ∧ (planets, composedOf , mass) → (sun, attract, mass) describes the principle of the solar system. Given the fact that the solar system, and atom system are correlated (since the \"nucleus\" is the scale-down \"sun\" in the atom), the missing fact (nucleus, Surround-edBy, electrons) and new rule (nucleus, surroundedBy, electrons) ∧ (electrons, composedOf , charges) → (nucleus, attract, charges) are obtained easily through the analogy between the solar and atom system. In this work, such correlations are extracted and modeled to facilitate inductive KGE methods. By identifying inter-rule correlations, our proposed method is able to discover most (more than 80%) of symmetric path (SP) rules used by VN network [16] and other useful patterns in knowledge graphs (KGs) to further improve embedding learning (see §4. 1.3).\nChallenge 2: Capturing the interactions among rule mining, rule inference, and embedding. LAN [38] utilizes constant logic rule confidences to measure neighboring relations' usefulness, while VN network [16] employs the heuristic rule mining method (i.e., AMIE+ [8]). In that case, prior work fails to capture interactions among rule mining, rule inference, and embedding. In fact, these three processes (i.e., rule mining, rule inference, and embedding) benefit and complement each other. Specifically, rules can infer missing facts more accurately with refined embeddings, while predicted facts help to learn the embeddings of higher quality [11]. Besides, rule learning using KG embeddings can transform the mining process from discrete graph search into calculations in continuous spaces, reducing the search space remarkably [49]. In this work, we design an iterative framework for rule mining, rule inference, and embedding to incorporate the relations among the above three stages, as Fig. 2 illustrates.\nTo address the two challenges listed above, we propose an inductive knowledge embedding framework, named virtual neighbor network with inter-rule correlations (VNC), to iteratively infer virtual neighbors for the OOKG entities with logic rules and inter-rule correlations. As Fig. 2 illustrates, VNC is composed of three main stages: (i) rule mining, (ii) rule inference, and (iii) embedding. In the rule mining process, to capture useful complex patterns in KG, both logic rules and inter-rule correlations are extracted from KGs, and assigned confidence scores via calculations over relation embeddings. To alleviate the data sparsity problem, virtual neighbors (VNs) of entities are inferred utilizing the deductive capability of rules. By solving a convex rule-constrained problem, soft labels of VNs are optimized. Next, the KG with softly predicted VNs is input to the GNN-based encoder, which consists of structure-aware and query-aware layers. Moreover, entity embeddings obtained by aggregating neighbors in the encoder are taken as the initialization for the embedding-based decoder. Finally, optimal entity and relation embeddings are derived by minimizing the global loss over observed and softly labeled fact triples. The above three processes are conducted iteratively during training.\nOur contributions can be summarized as follows: (i) We propose an inductive knowledge embedding paradigm, named VNC, to address the OOKG entity problem. (ii) We develop an embedding-enhanced rule mining scheme to identify logic rules and inter-rule correlations simultaneously. (iii) We design an iterative framework to explore the interactions among rule mining, rule inference, and embedding. (iv) Experimental results show that the proposed VNC achieves state-of-the-art performance in both link prediction and triple classification tasks." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b34", "b40", "b2", "b3", "b25", "b43", "b20", "b33", "b45", "b21", "b24", "b47", "b7", "b11", "b26", "b49", "b6", "b39", "b42", "b37", "b40", "b1", "b5", "b13", "b38", "b13", "b38", "b1", "b50", "b16", "b5", "b28", "b4", "b34", "b13", "b38", "b16", "b5", "b5", "b13", "b7", "b11", "b1", "b9", "b11" ], "table_ref": [], "text": "Knowledge graph completion. Knowledge graph completion (KGC) methods have been extensively studied and mainly fall under the embedding-based paradigm [34,40]. The aim of knowledge graph embedding (KGE) methods is to map the entities and relations into continuous vector spaces and then measure the plausibility of fact triples using score functions. Early work designs shallow models solely relying on triples in the observed KGs [2,3,25]. One line of recent works focus on devising more sophisticated triple scoring functions, including TransH [43], TransR [20], RotatE [33], DistMult [45], and Analogy [21]. Another line of recent methods is to incorporate useful information beyond triples, including relation paths [24,47] and logic rules [7,11,26,49]. Besides, deep neural network based methods [6,39,42] and language model based methods [37,40] also show promising performance.\nInductive knowledge embedding. Despite the success in KGC problem, the above KGE methods still focus on the transductive settings, requiring all the test entities to be seen during training. Motivated by the limitations of traditional KGE methods, recent works [1,5,13,38] take the known neighbors of the emerging entities as the inputs of inductive models. Hamaguchi et al. [13] employ the graph neural network (GNN) and aggregate the pretrained representations of the existing neighbors for unseen entities. To exploit information of redundancy and query relations in the neighborhood, LAN [38] utilizes a logic attention network as the aggregator. GEN [1] and HRFN [50] design meta-learning frameworks for GNNs to simulate the unseen entities during meta-training. However, they utilize unseen-to-unseen triples, which are unavailable in the OOKG entity problem. VN network [16] alleviates the data sparsity problem by inferring virtual neighbors for the OOKG entities. In addition, InvTransE and InvRotatE [5] represent OOKG entities with the optimal estimations of translational assumptions. Another type of inductive methods represent unseen entities via learning entity-independent semantics, including rule based [28] and GNN based [4,34] methods. However, the above methods focus on a different inductive KGC task (i.e., completing an entirely new KG during testing), and are not able to take advantage of embeddings of known entities or inter-rule correlations. In our experiments, we also conduct a comprehensive comparison between our proposed model and entity-independent methods (see §6.3). The most closely related work is MEAN [13], LAN [38], VN network [16], InvTransE and InvRotatE [5]. These previous inductive embedding methods ignore inter-rule correlations, and do not capture interactions among rule mining, rule inference, and embedding. In our proposed model VNC, to model useful complex patterns in graphs, logic rules and inter-rule correlations are identified simultaneously. We design an iterative framework to incorporate interactions among rule mining, rule inference, and embedding. 𝑘 and add (𝑒 𝑗 , 𝑟 -1 𝑘 , 𝑒 𝑖 ) to the original KG K. Definition 3.2 (Out-of-knowledge-graph entity problem). Following [5,13], we formulate the out-of-knowledge-graph (OOKG) entity problem as follows. The auxiliary triple set AUX contains the unseen entities E 𝑢 = E 𝑎𝑢𝑥 /E 𝑜 , and each triple in AUX contains exactly one OOKG entity and one observed entity. And O is observed during training, while the auxiliary triple set AUX connecting OOKG and observed entities is only accessible at test time. Note that, no additional relations are involved in AUX. Given AUX and O, the goal is to correctly identify missing fact triples that involve the OOKG entities. Definition 3.3 (Logic rules). For logic rules, following [7,11], we consider a set of first-order logic rules with different confidence values for a give KG, represented as\nF logic = {(𝑓 logic 𝑚 , 𝜆 logic 𝑚 )} 𝑀 𝑚=1 , where 𝑓 logic 𝑚\nis the 𝑚-th logic rule. 𝜆 logic 𝑚 ∈ [0, 1] denotes its confidence value, and rules with higher confidence values are more likely to hold. Here, 𝑓 logic 𝑚 is in the form of body → head. In this paper, we restrict rules to be Horn clause rules, where the rule head is a single atom, and the rule body is a conjunction of one or more atoms. For example, such kind of logic rule can be:\n(𝑥, surroundedBy, 𝑦) ∧ (𝑦, composedOf , 𝑧) → (𝑥, attract, 𝑧), (1) where 𝑥, 𝑦, 𝑧 are entity variables. Similar to previous rule learning work [9,11], we focus on closed-path (CP) rules to balance the expressive power of mined rules and the efficiency of rule mining.\nIn a CP rule, the sequence of triples in the rule body forms a path from the head entity variable to the tail entity variable of the rule head. By replacing all variables with concrete entities in the given KG, we obtain a grounding of the rule. For logic rule 𝑓 logic 𝑚 , we denote the set of its groundings as\nG logic 𝑚 = {𝑔 logic 𝑚𝑛 } 𝑁 𝑚\n𝑛=1 . Definition 3.4 (Inter-rule correlations). In addition to logic rules, we also consider a set of inter-rule correlations with different confidence levels, denoted as\nF corr = {(𝑓 corr 𝑣 , 𝜆 corr 𝑣 )} 𝑉 𝑣=1\n, where 𝑓 corr 𝑣 is the 𝑣-th inter-rule correlation and 𝜆 corr 𝑣 is the corresponding confidence value. Based on the logic rule 𝑓 logic 𝑚 , we define the corresponding inter-rule correlations as:\n𝑓 corr 𝑣 mpq : 𝑓 logic 𝑚 path 𝑞 𝑓 logic 𝑚 ,𝑓 ′logic mp ---------------→ 𝑓 ′logic mp ,(2)\nwhere\n𝑓 ′𝑙𝑜𝑔𝑖𝑐 𝑚𝑝\nis the 𝑝-th \"incomplete\" logic rule in the same form as 𝑓" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "𝑙𝑜𝑔𝑖𝑐 𝑚", "publication_ref": [], "table_ref": [], "text": "but with one missing triple in the rule body. For example, as Fig. 1 shows, the rule for the atom system are incomplete since (𝑥, surroundedBy, 𝑦) in the logic rule (𝑥, surroundedBy, 𝑦) ∧ (𝑦, composedOf , 𝑧) → (𝑥, attract, 𝑧) is missing. Note that the rules with only rule head missing are not regarded as the \"incomplete\" rules, because the missing rule head can be directly inferred by extracted logic rules. The 𝑞-th inter-rule path between the logic rule 𝑓 \n(𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) ∧ • • • ∧ (𝑥 𝑘 , 𝑟 𝑘 , 𝑥 𝑘+1 )\n, where 𝑟 𝑖 ∈ R denotes a relation in KG and 𝑥 𝑖 is the entity variable. To represent the correlations between rules, we assume that the inter-rule path only exists between entities of the same position in two rules. For example, in Fig. 1, the inter-rule path is (sun, scaleDown, nucleus) indicating that the nucleus is the scaled-down sun in the atom system. Similar to the logic rules, we obtain the set of groundings G corr 𝑣 = {𝑔 corr vw } 𝑊 𝑣 𝑤=1 for 𝑓 corr 𝑣 by replacing variables with concrete entities. Definition 3.5 (Virtual neighbors). To address the data sparsity problem, we introduce virtual neighbors into the original KG. As mentioned above, virtual neighbors are inferred by the extracted rules (i.e., logic rules and inter-rule correlations). Specifically, if a triple (𝑒 ′ 𝑖 , 𝑟 ′ 𝑘 , 𝑒 ′ 𝑗 ) inferred by rules does not exist in either the observed triple set O or auxiliary triple set AUX, we suppose that 𝑒 ′ 𝑖 and 𝑒 ′ 𝑗 are the virtual neighbors to each other. In our paper, we denote the set containing such kind of triples as VN = {𝑡 𝑣𝑛 }, where 𝑡 𝑣𝑛 is a triple with the virtual neighbors." }, { "figure_ref": [ "fig_4" ], "heading": "METHOD", "publication_ref": [ "b45" ], "table_ref": [], "text": "In this section, we describe the VNC framework, our proposed method for the OOKG entity problem. As illustrated in Fig. 2, the framework has three stages: rule mining ( §4.1), rule inference ( §4.2), and embedding ( §4.3). In the rule mining stage, Given the knowledge graph, the rule pool is first generated by searching plausible paths, and confidence values are calculated using the current relation embeddings R. Then, in the rule inference stage, a new triple set with virtual neighbors VN = {𝑡 vn } is inferred from rule groundings. And each predicted triple 𝑡 vn is assgined a soft label 𝑠 (𝑡 vn ) ∈ [0, 1] by solving a rule-constrained optimization problem. The knowledge graph with virtual neighbors is inputted into GNNbased encoder consisting of both structure and query aware layers. Next, with the entity embeddings E = H 𝑂 , where H 𝑂 is the output of GNN layers, the embedding-based decoder projects relations into embeddings R and calculate the truth level 𝜙 (•) for each fact triple as follows (take DistMult [45] as an example):\n𝜙 (𝑒 𝑖 , 𝑟 𝑘 , 𝑒 𝑗 ) = e 𝑇 𝑖 R 𝑘 e 𝑗 ,(3)\nwhere e 𝑖 , e 𝑗 are the normalized entity embeddings for entity 𝑒 𝑖 and 𝑒 𝑗 respectively, and R 𝑘 is a diagonal matrix for relation 𝑟 𝑘 . These three stages are conducted iteratively during training (see §4.4)." }, { "figure_ref": [], "heading": "Rule mining", "publication_ref": [ "b9", "b49", "b19", "b27", "b49" ], "table_ref": [], "text": "Given the observed knowledge graph, rule mining stage first generates a pool of logic rules by finding possible paths. Then, based on the complete logic rules, inter-rule correlations are discovered by searching incomplete rules and inter-rule paths. Finally, the confidence values are computed using relation embeddings.\n4.1.1 Rule pool generation. Before computing confidence scores, rules should be extracted from the observed KG.\nFor logic rules, we are only interested in closed-path (CP) rules. Therefore, given the rule head, the search for candidate logic rules is reduced to finding plausible paths for rule bodies. Specifically, one of fact triples in the observed KG K (e.g., (𝑒 1 , 𝑟, 𝑒 2 ) ∈ O) is first taken as the candidate rule head, and then the possible paths between the head entity and tail entity of the rule head (e.g., (𝑒 1 , 𝑟 1 , 𝑒 3 ) ∧ (𝑒 3 , 𝑟 2 , 𝑒 2 )) is extracted. In this way, the candidate logic rule (𝑥, 𝑟 1 , 𝑧) ∧ (𝑧, 𝑟 2 , 𝑦) → (𝑥, 𝑟, 𝑦) is induced from the given KG. For computational efficiency, we restrict the length of paths in rule bodies to at most 2 (i.e., the length of rules is restricted to at most 3). Note that, there may still exist numerous redundant and low quality rules in the above extraction process. Therefore, following [9,49], further filtering is conducted, and only rules with support > 1, head coverage > 𝛼 𝐻𝐶 , and standard confidence > 𝛼 𝑆𝐶 are selected, where 𝛼 𝐻𝐶 and 𝛼 𝑆𝐶 are preset thresholds.\nBased on mined logic rules, there are two steps for generating possible inter-rule correlations: (i) Finding incomplete rules. To this end, our aim is to identify all the \"incomplete\" rules for the mined logic rules. Specifically, given the 𝑚-th logic rule 𝑓 logic 𝑚 in K, a set of \"incomplete\" logic rules {𝑓 ′logic 𝑚𝑝 } in the same form as 𝑓 logic 𝑚 but with one missing triple in the rule body is recognized in this step. For example, for the logic rule of length 2 (e.g., (𝑥, 𝑟 1 , 𝑦) → (𝑥, 𝑟, 𝑦)), there exists only one \"incomplete\" logic rule (e.g., (𝑥, 𝑟 1 , 𝑦) → (𝑥, 𝑟, 𝑦) with (𝑥, 𝑟 1 , 𝑦) missing). (ii) Searching plausible inter-rule paths. To extract inter-rule paths, we first obtain groundings of logic rules and \"incomplete\" rules by replacing variables with concrete entities. For example, a grounding of logic rule (𝑥, 𝑟 1 , 𝑧) ∧ (𝑧, 𝑟 2 , 𝑦) → (𝑥, 𝑟, 𝑦) can be (𝑒 1 , 𝑟 1 , 𝑒 2 ) ∧ (𝑒 2 , 𝑟 2 , 𝑒 3 ) → (𝑒 1 , 𝑟, 𝑒 3 ), and a grounding of the corresponding \"incomplete\" rules can be\n(𝑒 ′ 1 , 𝑟 1 , 𝑒 ′ 2 ) ∧ (𝑒 ′ 2 , 𝑟 2 , 𝑒 ′ 3 ) → (𝑒 ′ 1 , 𝑟, 𝑒 ′ 3 ), where 𝑒 𝑖 , 𝑒 ′ 𝑖 ∈ E 𝑜 and (𝑒 ′ 1 , 𝑟 1 , 𝑒 ′ 2 ) ∉ O.\nThen, the paths between entities of the same position in logic and \"incomplete\" rules (e.g., paths between 𝑒 1 and 𝑒 ′ 1 ) are searched. Here, we estimate the reliability of inter-rule paths using the path-constraint resource allocation (PCRA) algorithm [19], and keep paths with Reliability > 𝛼 𝑃𝐶𝑅𝐴 , where 𝛼 𝑃𝐶𝑅𝐴 is the threshold for the path reliability. For computational efficiency, the length of inter-rule paths is limited to at most 3. In the next step, similar to mining logic rules, we filter out inter-rule correlations of low quality with support, head coverage, and standard confidence. For each logic rule, the rule body and rule head can be considered as two associated paths. Inspiring by previous works [27,49], the confidence level of each logic rule can be measured by the similarity between paths of the rule body and rule head. To be specific, suppose the path of the rule body path body :\n(𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) ∧ • • • ∧ (𝑥 𝑘 , 𝑟 𝑘 , 𝑥 𝑘+1 )\n, and the path of the rule head path head : (𝑥 1 , 𝑟, 𝑥 𝑘+1 ), the corresponding confidence level 𝜆 𝑙𝑜𝑔𝑖𝑐 𝑚 is calculated as follows:\n𝜆 logic 𝑚 = sim(path body , path head ),(4)\nwhere path body and path head are embeddings for the paths of the rule body and rule head, respectively. And sim(•) is the similarity function. In VNC, we consider a variety of methods based on translating or bilinear operations in the embedding stage. Thus, we define two kinds of similarity functions and path representations for different embedding methods. For translational decoders (e.g., TransE), the path representations and similarity function in Eq. 4 are defined as follows:\npath body = r 1 + r 2 + • • • + r 𝑘 , path head = r, 𝜆 logic 𝑚 = ||path body -path head || 2 ,(5)\nwhere r 𝑖 and r are vector embeddings for relation 𝑟 𝑖 and 𝑟 , and similarity function is defined by the 𝐿 2 -norm. For bilinear decoders (e.g., DistMult), the path representations and similarity function in Eq. 4 are defined as follows:\npath body = M 𝑟 1 + M 𝑟 2 + • • • + M 𝑟 𝑘 , path head = M 𝑟 , 𝜆 𝑙𝑜𝑔𝑖𝑐 𝑚 = ||path body -path head || 𝐹 ,(6)\nwhere M 𝑟 𝑖 and M 𝑟 are matrix embeddings for relation 𝑟 𝑖 and 𝑟 , and similarity function is defined by the Frobenius norm.\nOn this basis, we calculate confidence scores of inter-rule correlations. Specifically, for inter-rule correlation in Eq. 2, we consider the confidences of logic rule and \"incomplete\" rule simultaneously, and define the confidence level 𝜆 corr 𝑣 mpq for inter-rule correlation 𝑓 corr 𝑣 mpq as follows:\n𝜆 corr 𝑣 mpq = 𝜆 logic 𝑚 • 𝜆 ′logic mp ,(7)\nwhere 𝜆 tail entity of the missing triple). For example, the confidence 𝜆 ′logic 𝑚𝑝\nfor \"incomplete\" rule 𝑓\n′logic 𝑚𝑝 : (𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) → (𝑥 1 , 𝑟, 𝑥 3 ) (with (𝑥 1 , 𝑟 1 , 𝑥 2 )\nmissing) is computed as (for bilinear decoders):\n𝜆 ′logic 𝑚𝑝 = ∥M 𝑟 + M 𝑟 -1 2 -M 𝑟 1 ∥ 𝐹 , where M 𝑟 -1" }, { "figure_ref": [ "fig_6", "fig_6", "fig_6" ], "heading": "𝑖", "publication_ref": [ "b16" ], "table_ref": [], "text": "is the matrix embedding for the reverse version of the relation 𝑟 𝑖 . Since unreliable paths are filtered out during rule pool generation, the reliability of inter-rule path is not considered.\n4.1.3 Discussion: relation to symmetric path rules. In VN network [16], to capture long-distance semantic similarities between entities, symmetric path (SP) rules in KGs are identified. In fact, many symmetric path rules can be transformed into inter-rule correlations. For example, the SP rule shown in Fig. 3(a) can be represented by the inter-rule correlation in Fig. 3(b), since symmetric paths in the rule body and head share several entities and relations. Motivated by this, as Fig. 3(c) and 3(d) shows, we count the number of the shared rules (blue bars) used by VN network and VNC on FB15K Subject 20 and WN18 Subject 20. The results indicate that the VNC framework is capable of extracting most of the symmetric path rules (more than 80%), and identifying abundant graph patterns to further alleviate the data sparsity problem, and improve the embedding quality." }, { "figure_ref": [], "heading": "Rule inference", "publication_ref": [], "table_ref": [], "text": "In the rule inference stage, given the extracted rules, our goal is to infer a new triple set with virtual neighbors VN and assign a soft label 𝑠 (𝑡 𝑣𝑛 ) to each predicted triple 𝑡 𝑣𝑛 ∈ VN ." }, { "figure_ref": [], "heading": "Rule modeling.", "publication_ref": [ "b12", "b11", "b12" ], "table_ref": [], "text": "To predict a new triple 𝑡 𝑣𝑛 ∈ VN , we replace variables in extracted rules with concrete entities to obtain rule groundings. To model rule groundings, we adopt t-norm based fuzzy logics [12]. The key idea here is to compute the truth level of a grounding rule using the truth levels of its constituent triples and logical connectives (e.g., ∧ and →). Following [11,12], logical connectives associated with the logical conjunction (∧), disjunction(∨), and negation (¬) are defined as follows: where 𝑎 and 𝑏 denote logical expressions, which can be the atom triple or multiple triples combined by logical connectives. 𝐼 (•) is the truth level function. If 𝑎 = (𝑒 1 , 𝑟 1 , 𝑒 2 ) is a single triple, 𝐼 (𝑎) is defined by Eq. 3, i.e., 𝐼 (𝑎) = 𝜙 (𝑒 1 , 𝑟 1 , 𝑒 2 ). For combined multiple triples, we can calculate the truth value using Eq. 8 recursively. For example, for a rule grounding 𝑎 → 𝑏, the truth value can be computed as:\n𝐼 (𝑎 ∧ 𝑏 ) = 𝐼 (𝑎) • 𝐼 (𝑏 ), 𝐼 (𝑎 ∨ 𝑏 ) = 𝐼 (𝑎) + 𝐼 (𝑏 ) -𝐼 (𝑎) • 𝐼 (𝑏 ), 𝐼 (¬𝑎) = 1 -𝐼 (𝑎),(8)\n𝐼 (𝑎 → 𝑏) = 𝐼 (¬𝑎 ∨ 𝑏) = 𝐼 (𝑎) • 𝐼 (𝑏) -𝐼 (𝑎) + 1." }, { "figure_ref": [], "heading": "Soft label prediction.", "publication_ref": [ "b8", "b11" ], "table_ref": [], "text": "In this stage, our goal is to assign a soft label 𝑠 (𝑥 𝑣𝑛 ) ∈ [0, 1] for each triple 𝑡 𝑣𝑛 ∈ VN , using the current KG embeddings (i.e., E and R) and rule groundings (i.e., G 𝑙𝑜𝑔𝑖𝑐 and G 𝑐𝑜𝑟𝑟 ). To this end, we establish and solve a rule-constrained optimization problem. Here, the optimal soft label 𝑠 (𝑡 vn ) should keep close to truth level 𝐼 (𝑡 vn ), while constrained by the rule groundings. For the first characteristic, we minimize a square loss over the soft label 𝑠 (𝑡 𝑣𝑛 ) and truth level 𝐼 (𝑡 vn ). For the second characteristic, we impose rule constraints on the predicted soft labels S = {𝑠 (𝑡 vn )}. To be specific, given a rule 𝑓 𝑚 and soft labels S, rule groundings 𝑔 mn is expected to be true, i.e., 𝐼 (𝑔 mn |S) = 1 with confidence 𝜆 𝑚 . Here, the conditional truth level 𝐼 (𝑔 𝑚𝑛 |S) can be calculated recursively using the logical connectives in Eq. 8 is a logic rule grounding and 𝑔 ′logic ℎ is a grounding for the corresponding \"incomplete\" logic rule, the conditional truth level 𝐼 (𝑔 corr 𝑣𝑤 |S) can be computed as:\n𝐼 (𝑔 corr 𝑣𝑤 |S) = 𝐼 (𝑔 logic 𝑏 ) • 𝐼 (𝑔 ′logic ℎ |S) -𝐼 (𝑔 logic 𝑏 ) + 1,(9)\nwhere 𝐼 (𝑔 \n1 2 • ∑︁ 𝑡𝑣𝑛 ∈VN (𝑠 (𝑡 𝑣𝑛 ) -𝐼 (𝑡 𝑣𝑛 ) ) 2 + 𝐶 • ∑︁ 𝑚,𝑛 𝜉 logic 𝑚𝑛 + ∑︁ 𝑣,𝑤 𝜉 corr 𝑣𝑤 such that 𝜆 logic 𝑚 (1 -𝐼 (𝑔 logic 𝑚𝑛 |𝑆 ) ) ≤ 𝜉 logic 𝑚𝑛 𝜆 corr 𝑣 (1 -𝐼 (𝑔 corr 𝑣𝑤 |𝑆 ) ) ≤ 𝜉 corr 𝑣𝑤 𝜉 logic 𝑚𝑛 ≥ 0, 𝜉 corr 𝑣𝑤 ≥ 0, 0 ≤ 𝑠 (𝑡 𝑣𝑛 ) ≤ 1, (10\n)\nwhere 𝐶 is the constant penalty parameter, and 𝜆 and inter-rule correlation 𝑓 corr 𝑣 respectively. Note that, for the optimization problem in Eq. 10, all the constraints are linear functions w.r.t 𝑠 (𝑡 vn ), and this kind of the optimization problem is convex [11]. Therefore, we can obtain the closed-form solution for this problem:\n𝑠 (𝑡 𝑣𝑛 ) = 𝐼 (𝑡 vn ) + 𝐶 • ∑︁ 𝑚,𝑛 𝜆 logic 𝑚 ∇ 𝑠 (𝑡vn ) 𝐼 (𝑔 logic 𝑚𝑛 |𝑆 ) + ∑︁ 𝑣,𝑤 𝜆 corr 𝑣 ∇ 𝑠 (𝑡vn ) 𝐼 (𝑔 corr 𝑣𝑤 |𝑆 ) 1 0 ,(11)\nwhere ∇ 𝑠 (𝑡 vn ) 𝐼 (𝑔 " }, { "figure_ref": [], "heading": "Embedding", "publication_ref": [], "table_ref": [], "text": "In the embedding stage, the knowledge graph with softly labeled virtual neighbors is inputted into the GNN-based encoder and embedding-based decoder. In this way, entities and relations in KG are projected into embeddings E and R." }, { "figure_ref": [], "heading": "GNN-based encoder.", "publication_ref": [ "b16", "b38", "b30", "b38", "b44", "b45", "b2", "b6", "b21" ], "table_ref": [], "text": "Similar to previous works [16,38], our encoder consists of several structure aware layers and one query aware layer. To model connectivity structures of the given KG, we adopt weighted graph convolutional network (WGCN) [30] as the structure aware layers. In each layer, different relation types are assigned distinct attention weights. The 𝑙-th structure aware layer can be formulated as follows:\na (𝑙 ) 𝑖 = W (𝑙 ) • ∑︁ (𝑒 𝑖 ,𝑟,𝑒 𝑗 ) ∈O∪VN 𝛼 (𝑙 ) 𝑟 h (𝑙 -1) 𝑗 , h (𝑙 ) 𝑖 = tanh a (𝑙 ) 𝑖 + h (𝑙 -1) 𝑖 W (𝑙 ) ,(12)\nwhere 𝛼 𝑟 are the attention weights for relation 𝑟 . h\n(𝑙 )\n𝑖 is the embedding of entity 𝑒 𝑖 at the 𝑙 th layer. W (𝑙 ) is the connection matrix for the 𝑙 th layer. Here, we randomly initialize the input entity embedding h (0) 𝑖 during training. Besides the structure information, given the query relation in each inputted triple, an ideal aggregator is able to focus on the relevant facts in the neighborhood. To this end, the importances of neighbors are calculated based on the neural network mechanism [38]. Specifically, given a query relation 𝑟 𝑞 ∈ R, the importance of the neighbor 𝑒 𝑗 to entity 𝑒 𝑖 is calculated as:\n𝛼 NN 𝑗 |𝑖,𝑞 = softmax(𝛽 𝑗 |𝑖,𝑞 ) = exp(𝛽 𝑗 |𝑖,𝑞 ) (𝑒 𝑖 ,𝑟𝑞 ,𝑒 𝑗 ′ ) ∈O∪VN exp(𝛽 𝑗 ′ |𝑖,𝑞 )\n, where the unnormalized attention weight 𝛽 𝑗 |𝑖,𝑞 can be computed as:\n𝛽 𝑗 |𝑖,𝑞 = LeakyReLU(u • [W 𝑒 h 𝑖 ; W 𝑞 z 𝑞 ; W 𝑒 h 𝑗 ])\n, where u, W 𝑒 , and W 𝑞 are the attention parameters, and z 𝑞 is the relation-specific parameter for query relation 𝑟 𝑞 . LeakyReLU(•) is the activation function of the leaky rectified linear unit [44]. On this basis, we can formulate the query aware layer as follows:\nh 𝑂 𝑖 = ∑︁ (𝑒 𝑖 ,𝑟,𝑒 𝑗 ) ∈ O∪V N 𝛼 NN 𝑗 |𝑖,𝑞 • h 𝐼 𝑗 ,(13)\nwhere h 𝐼 𝑗 is the input embedding for the entity 𝑒 𝑗 from the last structure aware layer. h 𝑂 𝑖 is the output embedding for the entity 𝑒 𝑖 for the decoder. Note that, in the testing process, we apply the encoder on the auxiliary triples, and initialize the input representation h (0) 𝑖 ′ for the OOKG entity 𝑒 𝑖 ′ as the zero vector. 4.3.2 Embedding-based decoder. Given entity embeddings from the GNN-based encoder (i.e., E = H 𝑂 , where H 𝑂 is the output of the encoder), the decoder aims to learn relation embeddings R, and compute the truth level 𝜙 (𝑡) for each triple 𝑡. We evaluate various embedding methods in our experiments, including DistMult [45], TransE [2], ConvE [6], and Analogy [21] (see §7.2)." }, { "figure_ref": [], "heading": "Training algorithm", "publication_ref": [ "b17" ], "table_ref": [], "text": "To refine the current KG embeddings, a global loss over facts with hard and soft labels is utilized in the VNC framework. In this stage, we randomly corrupt the head or tail entity of an observed triple to form a negative triple. In this way, in addition to triples with soft labels VN = {𝑡 𝑣𝑛 }, we collect the observed and negative fact triples with hard labels, i.e., L = {𝑥 𝑙 , 𝑦 𝑙 }, where 𝑦 𝑙 ∈ {0, 1} is the hard label of the triples. To learn the optimal KG embeddings E and R, a global loss function over L and VN is:\nmin E,R 1 | L | ∑︁ L 𝑙 (𝐼 (𝑡 𝑙 ), 𝑦 𝑙 ) + 1 | V N | ∑︁ VN 𝑙 (𝐼 (𝑡 𝑣𝑛 ), 𝑠 (𝑡 𝑣𝑛 ) ),(14)\nwhere we adopt the cross entropy 𝑙 (𝑥, 𝑦) = -𝑦 log 𝑥 -(1 -𝑦) log (1 -𝑥). 𝐼 (•) is the truth level function. We use Adam [17] to minimize the global loss function. In this case, the resultant KG embeddings fit the observed facts while constrained by rules. Algorithm 1 summarizes the training process of VNC. Before training," }, { "figure_ref": [], "heading": "Algorithm 1 Training algorithm of VNC.", "publication_ref": [], "table_ref": [], "text": "Require: Triples with hard labels L; randomly initialized entity and relation embeddings E and R; parameters Θ for encoder and decoder. Ensure: The extracted rule set F logic and F corr ; trained encoder and decoder; optimal embeddings E and R; 1: Generate rule pools, and filter out rules of low quality; 2: while Training process not terminated do 3:\nCompute rule confidences 𝜆 logic and 𝜆 corr , and form rule sets F logic and F corr (Eq. 4 and 7); Calculate the conditional truth level 𝐼 (𝑔| S) (Eq. 9); 7:\nObtain the optimal soft labels S = {𝑠 (𝑡 vn ) } (Eq. 10 and 11); 8:\nObtain embeddings E and R (Eq. 3, 12, and 13); Update E, R and Θ; 11: end while rule pools are generated by finding plausible paths, and rules of low quality are filtered out (line 1). In each training step, we compute rules 𝜆 logic and 𝜆 corr using current relation embeddings to form rule sets F logic and F corr (line 3). Then, in the rule inference stage, we infer new triples VN = {𝑡 vn } using rule groundings, and assign a soft label 𝑠𝑡 vn to each predicted fact triples by solving a rule constrained optimization problem (line 4-6). Next, the knowledge graph with virtual neighbors is inputted into the GNN-based encoder and embedding-based decoder. In this way, relations and entities are mapped into embeddings (line 7). Finally, the overall loss over fact triples with hard and soft labels is obtained (line 8-9), and embeddings as well as model parameters are updated (line 10)." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [ "b11", "b2", "b32", "b38", "b13", "b13", "b38", "b14", "b38" ], "table_ref": [], "text": "Research questions. We aim to answer the following research questions: (RQ1) Does VNC outperform state-of-the-art methods on the OOKG entity problem? (See §6.1- §6.3.) (RQ2) How do the inter-rule correlations and iterative framework contribute to the performance? (See §7.1.) (RQ3) What is the influence of the decoder, embedding dimension, and penalty parameter on the performance? (See §7.2.) (RQ4) Is VNC able to identify useful inter-rule correlations in the knowledge graph? (See §7.3.)\nDatasets. We evaluate VNC on three widely used datasets: YAGO37 [11], FB15K [2], and WN11 [32]. For link prediction, we use three benchmark datasets: YAGO37 and FB15K. We create Subject-R and Object-R from each benchmark dataset, varying OOKG entities' proportion (𝑅) as 5%, 10%, 15%, 20%, and 25% following [38]. For triple classification, we directly use the datasets released in [13] based on WN11, including Head-𝑁 , Tail-𝑁 and Both-𝑁 , where 𝑁 = {1000, 3000, 5000} testing triples are randomly sampled to construct new datasets. Tab. 1 gives detailed statistics of the datasets.\nBaselines. We compare the performance of VNC against the following baselines: (i) MEAN [13] utilizes the graph neural network (GNN) and generates embeddings of OOKG entities with simple pooling functions. (ii) LSTM [38] is a simple extension of MEAN, where the LSTM network [14] is used due to its large expressive capability. (iii) LAN [38] uses a logic attention network as " }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_8", "fig_8" ], "heading": "Link prediction (RQ1)", "publication_ref": [], "table_ref": [], "text": "Tab. 2 and Fig. 4 show the experimental outcomes for the link prediction task. Based on the experimental results, we have the following observations: (i) Link predictions for OOKG entities are challenging, and for most baselines, the Hits@n and MRR are less than 0.7. In contrast, the proposed VNC is able to effectively infer missing fact triples for unseen entities. (ii) The proposed model VNC consistently outperforms state-of-the-art baselines and VN network over all the datasets. Compared to the baseline models, VNC achieves considerable increases in all metrics, including MR, MRR, Hits@10, Hits@3, and Hits@1. That is, identifying inter-rule correlations and capturing interactions among rule mining, rule inference, and embedding substantially enhance the performance for OOKG entities. (iii) When the ratio of the unseen entities increases and observed KGs become sparser, VNC is still able to accurately predict missing triples for OOKG entities. In Fig. 4, we show the results of link prediction experiments on datasets with different sample rates 𝑅. As the number of unseen entities increases, VNC still maintains the highest Hits@10 scores, indicating its robustness on sparse KGs. In summary, recognizing inter-rule correlations in KGs and designing an iterative framework for rule and embedding learning is able to strengthen the performance." }, { "figure_ref": [], "heading": "Triple classification (RQ1)", "publication_ref": [], "table_ref": [], "text": "To further evaluate VNC, we conduct triple classification on the WN11 dataset. Based on the evaluation results in Tab. 3, we observe that VNC achieves state-of-the-art results on the triple classification task. With shallow pooling functions, MEAN and LSTM lead to the lowest accuracy. Meanwhile, other baseline models are hindered by the data sparsity problem, and ignoring complex patterns in graphs and interactions between rule and embedding learning. In contrast, VNC infers virtual neighbors for OOKG entities and mine logic rules and inter-rule correlations from KGs in an iterative manner, which results in the highest accuracies over all the datasets." }, { "figure_ref": [], "heading": "Comparisons with entity-independent methods (RQ1)", "publication_ref": [], "table_ref": [], "text": "In addition to the entity-specific baselines, we compare VNC against entity-independent methods . Tab. 4 shows the evaluation results on FB15K Subject-10. We draw the following conclusions: (i) In comparison with entity-independent methods, the state-of-the-art entity-specific frameworks perform better, demonstrating the importance of embeddings of known entities. Compared to DRUM, GraIL, and TACT, entity-specific embedding models, including GEN, InvTransE, InvRotatE, VN network, and VNC, utilize pretrained embeddings of observed entities and attain huge performance enhancements. (ii) VNC outperforms both entity-independent and entity-specific embedding methods, and achieves the best performance. This is, for OOKG entities, identifying inter-rule correlations in KGs and aggregating embeddings of neighborhood entities facilitate predictions of missing facts. In summary, extracting interrule correlations iteratively and integrating with embeddings of observed entities benefits the OOKG entity problem." }, { "figure_ref": [], "heading": "ANALYSIS 7.1 Ablation studies (RQ2)", "publication_ref": [ "b8", "b49" ], "table_ref": [], "text": "To evaluate the effectiveness of each component in the VNC framework, we conduct ablation studies on the link prediction task. The results are shown in Tab. 5. When only employing the GNN-based encoder and embedding-based decoder (\"no rules\"), all metrics suffer a severe drop. In the \"hard rules\" setting, virtual neighbors are directly inferred by logic rules instead of soft label predictions. Compared to the \"no rules\" settings, predicting virtual neighbors with hard logic rules effectively alleviates the data sparsity problem.\nTo examine the necessity of the iterative framework, we extract logic rules and learn knowledge embeddings simultaneously in the \"soft rules\" setting. The results show that the iterative framework captures interactions among rule mining, rule inference, and embedding, and gains considerable improvements over the model with hard logic rules. Moreover, compared with the \"soft rules\" setting, VNC further improves the performance by identifying interrule correlations in KG. In short, both inter-rule correlations and the iterative framework contribute to the improvements in performance. We also consider two model variants, VNC (AMIE+) and VNC (IterE), with different rule mining frameworks AMIE+ [8] and IterE [49], respectively. VNC (AMIE+) mines logic rules with AMIE+, and keeps confidence scores of logic rules unchanged during the training process. VNC (IterE) assumes the truth values of triples existing in KGs to be 1, and then calculates soft labels recursively using Eq. 8 instead of solving the optimization problem in Eq. 10. The results in Tab. 5 show that the proposed iterative framework in VNC outperforms other rule mining methods, indicating the effectiveness of VNC." }, { "figure_ref": [], "heading": "Influence of decoder (RQ3)", "publication_ref": [ "b2", "b6", "b21", "b45" ], "table_ref": [], "text": "To assess the impact of various decoders on performance, we examine four types of embedding-based decoders, including TransE [2], ConvE [6], Analogy [21], and DistMult [45], regarding their effectiveness in the link prediction task. According to the results in Tab. 6, VNC using the TransE decoder demonstrates the lowest performance, while VNC with DistMult achieves the highest performance. In comparison to translational models, the bilinear scoring function-based decoder is more compatible with our framework." }, { "figure_ref": [], "heading": "Case studies (RQ4)", "publication_ref": [ "b16" ], "table_ref": [], "text": "For RQ4, we conduct case studies on VNC, and Tab. 7 shows examples of the inter-rule correlations on YAGO37. In the first example, from the logic rule and inter-rule path, it is easy to find that \"George\" is the director and producer of \"Young Bess\" and \"Cass Timberlane\". Similarly, the second example shows that the children usually have the same citizenship as their parents. Note that, the above missing facts can not be directly inferred by either logic rules or symmetric path rules [16]. Thus, by identifying useful inter-rule correlations, VNC is able to model complex patterns in the knowledge graph and facilitate embedding learning." }, { "figure_ref": [], "heading": "CONCLUSION AND FUTURE WORK", "publication_ref": [], "table_ref": [], "text": "In this paper, we focus on predicting missing facts for out-ofknowledge-graph (OOKG) entities. Previous work for this task still suffers from two key challenges: identifying inter-rule correlations, and capturing the interactions within rule and embedding learning. To address these problems, we propose a novel framework, named VNC, that infers virtual neighbors for OOKG entities by iteratively extracting logic rules and inter-rule correlations from knowledge graphs. We conduct both link prediction and triple classification, and experimental results show that the proposed VNC achieves state-of-the-art performance on four widely-used knowledge graphs. Besides, the VNC framework effectively alleviates the data sparsity problem, and is highly robust to the proportion of the unseen entities. For future work, we plan to incorporate more kinds of complex patterns in knowledge graphs. In addition, generalizing the VNC framework to the unseen relations is also a promising direction." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key R&D Program of China (2020YFB1406704, 2022YFC3303004), the Natural Science Foundation of China (62272274, 61972234, 62072279, 62102234, 62202271), the Natural Science Foundation of Shandong Province (ZR2021QF129, ZR2022QF004), the Key Scientific and Technological Innovation Program of Shandong Province (2019JZZY010129), the Fundamental Research Funds of Shandong University, the China Scholarship Council under grant nr. 202206220085, the Hybrid Intelligence Center, a 10-year program funded by the Dutch Ministry of Education, Culture and Science through the Netherlands Organization for Scientific Research, https://hybrid-intelligence-centre.nl, and project LESSEN with project number NWA.1389.20.183 of the research program NWA ORC 2020/21, which is (partly) financed by the Dutch Research Council (NWO)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Our code and data are available at https: //github.com/" } ]
Recent work on knowledge graph completion (KGC) focuses on acquiring embeddings of entities and relations in knowledge graphs. These embedding methods necessitate that all test entities be present during the training phase, resulting in a time-consuming retraining process for out-of-knowledge-graph (OOKG) entities. To tackle this predicament, current inductive methods employ graph neural networks (GNNs) to represent unseen entities by aggregating information of the known neighbors, and enhance the performance with additional information, such as attention mechanisms or logic rules. Nonetheless, Two key challenges continue to persist: (i) identifying inter-rule correlations to further facilitate the inference process, and (ii) capturing interactions among rule mining, rule inference, and embedding to enhance both rule and embedding learning. In this paper, we propose a virtual neighbor network with interrule correlations (VNC) to address the above challenges. VNC consists of three main components: (i) rule mining, (ii) rule inference, and (iii) embedding. To identify useful complex patterns in knowledge graphs, both logic rules and inter-rule correlations are extracted from knowledge graphs based on operations over relation embeddings. To reduce data sparsity, virtual networks for OOKG entities are predicted and assigned soft labels by optimizing a rule-constrained problem. We also devise an iterative framework to capture the underlying interactions between rule and embedding learning. Experimental results on both link prediction and triple classification tasks show that the proposed VNC framework achieves state-of-the-art performance on four widelyused knowledge graphs.
Iteratively Learning Representations for Unseen Entities with Inter-Rule Correlations
[ { "figure_caption": "Figure 1 :1Figure 1: An example of the OOKG entity problem. Our aim is to predict missing facts of the OOKG entities.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 (1Knowledge graph). A knowledge graph K can be regarded as a multi-relational graph, consisting of a set of observed fact triples, i.e., O = {𝑡 𝑜 }, where 𝑡 𝑜 = (𝑒 𝑖 , 𝑟 𝑘 , 𝑒 𝑗 ). Each fact triple consists of two entities 𝑒 𝑖 , 𝑒 𝑗 ∈ E 𝑜 , and one type of relation 𝑟 𝑘 ∈ R, where E 𝑜 and R are the entity and relation sets respectively. For each triple (𝑒 𝑖 , 𝑟 𝑘 , 𝑒 𝑗 ) ∈ K, we denote the reverse version of relation 𝑟 𝑘 as 𝑟 -1", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "follows: path 𝑞 (𝑓 logic 𝑚 , 𝑓 ′logic mp ) :", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4. 1 . 212Confidence computation. Given the generated rule pool and current relation embedding R, the confidence computation assigns a score 𝜆 𝑚 for each extracted rule 𝑓 𝑚 .", "figure_data": "", "figure_id": "fig_3", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "logicFigure 2 :2Figure 2: An overview of VNC. VNC has three main stages: rule mining, rule inference, and embedding.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (a) and (b) are examples of SP rules and inter-rule correlations, respectively; (c) and (d) demonstrate the intersection between SP rules and inter-rule correlations.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "logic𝑚 and 𝜆 corr 𝑣 are the confidence values for logic rule 𝑓 logic 𝑚", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4 :4Find rule groundings G logic and G corr ; 5:Infer V N = {𝑡 vn } and compute the truth level 𝐼 (𝑡 vn ) for each triple 𝑡 vn (Eq. 3); 6:", "figure_data": "", "figure_id": "fig_8", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "9 :9Compute the global loss over L and V N (Eq. 14); 10:", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Link prediction results on YAGO37 and FB15K.", "figure_data": "", "figure_id": "fig_10", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ". Specifically, for each logic rule grounding 𝑔 logic 𝑚𝑛 : (𝑒 𝑖 , 𝑟 𝑏 , 𝑒 𝑗 ) → (𝑒 𝑖 , 𝑟 ℎ , 𝑒 𝑗 ), where (𝑒 𝑖 , 𝑟 𝑏 , 𝑒 𝑗 ) ∈ O and (𝑒 𝑖 , 𝑟 ℎ , 𝑒 𝑗 ) ∈ VN , the conditional truth level 𝐼 (𝑔 𝐼 (𝑒 𝑖 , 𝑟 𝑏 , 𝑒 𝑗 ) • 𝑠 (𝑒 𝑖 , 𝑟 ℎ , 𝑒 𝑗 ) -𝐼 (𝑒 𝑖 , 𝑟 𝑏 , 𝑒 𝑗 ) + 1, where 𝐼 (𝑒 𝑖 , 𝑟 𝑏 , 𝑒 𝑗 ) is the truth level defined in Eq. 3 computed using the current embedding, while 𝑠 (𝑒 𝑖 , 𝑟 ℎ , 𝑒 𝑗 ) is a soft label to infer. Similarly, for each grounding of inter-rule correlations", "figure_data": "logic 𝑚𝑛 |S) is cal-culated as: 𝐼 (𝑔𝑔 𝑐𝑜𝑟𝑟 𝑣𝑤 : 𝑔 𝑏 logic→ 𝑔 ′logic𝑙𝑜𝑔𝑖𝑐 ℎ , where 𝑔 𝑏", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Descriptive statistics of the datasets. DRUM[28] designs an end-to-end rule mining framework via the connection between tensor completion and the estimation of confidence scores. (ii) GraIL[34] is a GNN framework that reasons over local subgraphs and learns entity-independent relational semantics. (iii) TACT[4] incorporates seven types of semantic correlations between relations with the existing inductive methods. For entity-independent methods, the training sets are considered as the original KGs while training sets with auxiliary facts are regarded as the new KGs during testing.Evaluation metrics. For the link prediction task, we report filtered Mean Rank (MR), Mean Reciprocal Rank (MRR), and Hits at 𝑛 (Hits@𝑛), where filtered metrics are computed after removing all the other positive triples that appear in either training, validation, or test set during ranking. For the triple classification task, models are measured by classifying a fact triple as true or false, and Accuracy is applied to assess the proportion of correct triple classifications. 𝐶 is maintained at 1. The GNN-based encoder comprises two structure-aware layers and one query-aware layer, while Dist-Mult serves as the decoder. For WN18, 𝛼 𝐻𝐶 and 𝛼 𝑆𝐶 are 0.3; for FB15K, they are 0.5. Optimal values for 𝛼 𝐻𝐶 and 𝛼 𝑆𝐶 on YAGO37 and WN11 are 0.01. Across all datasets, 𝛼 𝑃𝐶𝑅𝐴 is set to 0.01.", "figure_data": "DatasetEntities Relations Training ValidationTestYAGO37 123,18937989,13250,000 50,000FB15K14,9511,345483,14250,000 59,071WN1138,69611112,5812,609 10,544the aggregator to capture information of redundancy and query re-lations in the neighborhood. (iv) GEN [1] develops a meta-learningframewrok to simulate the unseen entities during meta-training.(v) VN network [16] infers additional virtual neighbors for OOKGentities to alleviate the data sparsity problem. (vi) InvTransE andInvRotatE [5] obtain optimal estimations of OOKG entity embed-dings with translational assumptions. Besides, we also compareVNC with the following entity-independent embedding methods.(i) Implementation details. We fine-tune hyper-parameters basedon validation performance. Encoders and decoders have 200 dimen-sions. Learning rate, dropout, and regularization are set to 0.02, 0.2,and 0.01.", "figure_id": "tab_4", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Link prediction results on YAGO37 and FB15K. Significant improvements over the best baseline are marked with * (t-test, 𝑝 < 0.05).", "figure_data": "", "figure_id": "tab_5", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Triple classification results on WN11. Significant improvements over the best baseline are marked with * (ttest, 𝑝 < 0.05). .3 83.3 84.0 75.2 69.2 83.0 73.3 68.2 LSTM 87.0 83.5 81.8 82.9 71.4 63.1 78.5 71.6 65.8 LAN 88.8 85.2 84.2 84.7 78.8 74.3 83.3 76.9 70.6 GEN 88.6 85.1 84.6 84.1 77.9 74.4 85.1 76.2 73.9 InvTransE 88.2 87.8 83.2 84.4 80.1 74.4 86.3 78.4 74.6 InvRotatE 88.4 86.9 84.1 84.6 80.1 74.9 84.2 75.0 70.6 VN network 89.1 85.9 85.4 85.5 80.6 76.8 84.1 78.5 73.1 VNC 90.6 * 88.9 * 86.7 * 86.9 * 82.3 * 78.3 * 87.7 * 79.6 * 76.2 *", "figure_data": "SubjectObjectBothModel1000 3000 5000 1000 3000 5000 1000 3000 5000MEAN87.3 84", "figure_id": "tab_6", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Link prediction results on FB15K Subject-10. Significant improvements over the best baseline are marked with * (t-test, 𝑝 < 0.05).", "figure_data": "Model MR MRR Hits@10 Hits@3 Hits@1DRUM 249 41.659.446.831.7GraIL 241 41.960.147.332.1TACT 238 42.660.247.132.9VNC 151 * 54.3 *75.9 *60.8 *41.6 *", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation studies on FB15K Subject-10.", "figure_data": "ModelMR MRR Hits@10 Hits@3 Hits@1VNC151 54.375.960.841.6No rules251 40.961.947.331.5Hard rules192 45.267.652.635.4Soft rules164 53.374.258.540.1VNC (AMIE+) 191 48.971.355.637.2VNC (IterE) 172 52.573.758.839.7", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Influence of the decoders on FB15K Subject-10.", "figure_data": "DecoderMR MRR Hits@10 Hits@3 Hits@1VN network175 46.370.152.634.5VNC (TransE)204 47.471.253.634.5VNC (ConvE)171 53.174.659.841.2VNC (Analogy) 163 52.974.860.140.7VNC (DistMult) 151 54.375.960.841.6", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Examples of inter-rule correlations on YAGO37. Soft label 𝑠 (George, directed, Cass Timberlane) = 0.98 Logic rule (George, directed, Young Bess) → (George, created, Young Bess) Incomplete rule (George, directed, Cass Timberlane) → (George, created, Cass Timberlane) Inter-rule path (Young Bess, isLocatedIn, United States)∧ (United States, isLocatedIn -1 , Cass Timberlane)", "figure_data": "Soft label𝑠 (Sigmar, isCitizenOf, Germany) = 0.92Logic rule(Thorbjørn, hasChild, Kjell) ∧ (Kjell, isCitizenOf,Norway) → (Thorbjørn, isCitizenOf, Norway)Incomplete rule (Franz, hasChild, Sigmar ) ∧ (Sigmar, isCitizenOf,Germany) → (Franz, isCitizenOf, Germany)Inter-rule path(Germany, hasNeighbor, Denmark)∧ (Denmark, dealWith, Norway)", "figure_id": "tab_10", "figure_label": "7", "figure_type": "table" } ]
Zihan Wang; Yongquan He; Pengjie Ren; Maarten De Rijke
[ { "authors": "", "journal": "YAGO37 FB", "ref_id": "b0", "title": "-10 Object-10 Subject-10 Object-10 Model MR MRR Hits@10 Hits@3 Hits@1 MR MRR Hits@10 Hits@3 Hits@1 MR MRR Hits@10 Hits@3 Hits@1 MR MRR Hits@10 Hits@3 Hits@1", "year": null }, { "authors": "Jinheon Baek; Dong Bok Lee; Sung Ju Hwang", "journal": "", "ref_id": "b1", "title": "Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph Link Prediction", "year": "2020" }, { "authors": "Antoine Bordes; Nicolas Usunier; Alberto García-Durán; Jason Weston; Oksana Yakhnenko", "journal": "NeurIPS", "ref_id": "b2", "title": "Translating Embeddings for Modeling Multi-relational Data", "year": "2013" }, { "authors": "Antoine Bordes; Jason Weston; Ronan Collobert; Yoshua Bengio", "journal": "", "ref_id": "b3", "title": "Learning Structured Embeddings of Knowledge Bases", "year": "2011" }, { "authors": "Jiajun Chen; Huarui He; Feng Wu; Jie Wang", "journal": "", "ref_id": "b4", "title": "Topology-Aware Correlations Between Relations for Inductive Link Prediction in Knowledge Graphs", "year": "2021" }, { "authors": "Damai Dai; Hua Zheng; Fuli Luo; Pengcheng Yang; Tianyu Liu; Zhifang Sui; Baobao Chang", "journal": "", "ref_id": "b5", "title": "Inductively Representing Out-of-Knowledge-Graph Entities by Optimal Estimation Under Translational Assumptions", "year": "2021-08-06" }, { "authors": "Tim Dettmers; Pasquale Minervini; Pontus Stenetorp; Sebastian Riedel", "journal": "", "ref_id": "b6", "title": "Convolutional 2D Knowledge Graph Embeddings", "year": "2018" }, { "authors": "Boyang Ding; Quan Wang; Bin Wang; Li Guo", "journal": "", "ref_id": "b7", "title": "Improving Knowledge Graph Embedding Using Simple Constraints", "year": "2018" }, { "authors": "Luis Galárraga; Christina Teflioudi; Katja Hose; Fabian M Suchanek", "journal": "VLDB J", "ref_id": "b8", "title": "Fast rule mining in ontological knowledge bases with AMIE+", "year": "2015" }, { "authors": "Luis Antonio Galárraga; Christina Teflioudi; Katja Hose; Fabian M Suchanek", "journal": "WWW", "ref_id": "b9", "title": "AMIE: association rule mining under incomplete evidence in ontological knowledge bases", "year": "2013" }, { "authors": "David Graus; Daan Odijk; Maarten De Rijke", "journal": "Journal of the Association for Information Science and Technology", "ref_id": "b10", "title": "The Birth of Collective Memories: Analyzing Emerging Entities in Text Streams", "year": "2018-06" }, { "authors": "Shu Guo; Quan Wang; Lihong Wang; Bin Wang; Li Guo", "journal": "", "ref_id": "b11", "title": "Knowledge Graph Embedding With Iterative Guidance From Soft Rules", "year": "2018" }, { "authors": "Petr Hájek", "journal": "Trends in Logic", "ref_id": "b12", "title": "Metamathematics of Fuzzy Logic", "year": "1998" }, { "authors": "Takuo Hamaguchi; Hidekazu Oiwa; Masashi Shimbo; Yuji Matsumoto", "journal": "", "ref_id": "b13", "title": "Knowledge Transfer for Out-of-Knowledge-Base Entities : A Graph Neural Network Approach", "year": "2017" }, { "authors": "William L Hamilton; Zhitao Ying; Jure Leskovec", "journal": "NeurIPS", "ref_id": "b14", "title": "Inductive Representation Learning on Large Graphs", "year": "2017" }, { "authors": "He He; Anusha Balakrishnan; Mihail Eric; Percy Liang", "journal": "", "ref_id": "b15", "title": "Learning Symmetric Collaborative Dialogue Agents with Dynamic Knowledge Graph Embeddings", "year": "2017" }, { "authors": "Yongquan He; Zhihan Wang; Peng Zhang; Zhaopeng Tu; Zhaochun Ren", "journal": "", "ref_id": "b16", "title": "VN Network: Embedding Newly Emerging Entities with Virtual Neighbors", "year": "2020" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b17", "title": "Adam: A Method for Stochastic Optimization", "year": "2015" }, { "authors": "Yunshi Lan; Jing Jiang", "journal": "", "ref_id": "b18", "title": "Query Graph Generation for Answering Multihop Complex Questions from Knowledge Bases", "year": "2020" }, { "authors": "Yankai Lin; Zhiyuan Liu; Huan-Bo Luan; Maosong Sun; Siwei Rao; Song Liu", "journal": "", "ref_id": "b19", "title": "Modeling Relation Paths for Representation Learning of Knowledge Bases", "year": "2015" }, { "authors": "Yankai Lin; Zhiyuan Liu; Maosong Sun; Yang Liu; Xuan Zhu", "journal": "", "ref_id": "b20", "title": "Learning Entity and Relation Embeddings for Knowledge Graph Completion", "year": "2015" }, { "authors": "Hanxiao Liu; Yuexin Wu; Yiming Yang", "journal": "", "ref_id": "b21", "title": "Analogical Inference for Multi-relational Embeddings", "year": "2017" }, { "authors": "Bonan Min; Ralph Grishman; Li Wan; Chang Wang; David Gondek", "journal": "", "ref_id": "b22", "title": "Distant Supervision for Relation Extraction with an Incomplete Knowledge Base", "year": "2013" }, { "authors": "Shanlei Mu; Yaliang Li; Wayne Xin Zhao; Siqing Li; Ji-Rong Wen", "journal": "ACM Trans. Inf. Syst", "ref_id": "b23", "title": "Knowledge-Guided Disentangled Representation Learning for Recommender Systems", "year": "2021" }, { "authors": "Arvind Neelakantan; Benjamin Roth; Andrew Mccallum", "journal": "", "ref_id": "b24", "title": "Compositional Vector Space Models for Knowledge Base Completion", "year": "2015" }, { "authors": "Maximilian Nickel; Hans-Peter Volker Tresp; Kriegel", "journal": "", "ref_id": "b25", "title": "A Three-Way Model for Collective Learning on Multi-Relational Data", "year": "2011" }, { "authors": "Guanglin Niu; Yongfei Zhang; Bo Li; Peng Cui; Si Liu; Jingyang Li; Xiaowei Zhang", "journal": "", "ref_id": "b26", "title": "Rule-Guided Compositional Representation Learning on Knowledge Graphs", "year": "2020" }, { "authors": "Pouya Ghiasnezhad Omran; Kewen Wang; Zhe Wang", "journal": "", "ref_id": "b27", "title": "Scalable Rule Learning via Learning Representation", "year": "2018" }, { "authors": "Ali Sadeghian; Mohammadreza Armandpour; Patrick Ding; Daisy Zhe Wang", "journal": "", "ref_id": "b28", "title": "DRUM: End-To-End Differentiable Rule Mining On Knowledge Graphs", "year": "2019" }, { "authors": "Sejr Michael; Thomas N Schlichtkrull; Peter Kipf; Rianne Bloem; Van Den; Ivan Berg; Max Titov; Welling", "journal": "", "ref_id": "b29", "title": "Modeling Relational Data with Graph Convolutional Networks", "year": "2018" }, { "authors": "Chao Shang; Yun Tang; Jing Huang; Jinbo Bi; Xiaodong He; Bowen Zhou", "journal": "", "ref_id": "b30", "title": "End-to-End Structure-Aware Convolutional Networks for Knowledge Base Completion", "year": "2019" }, { "authors": "Baoxu Shi; Tim Weninger", "journal": "", "ref_id": "b31", "title": "Open-World Knowledge Graph Completion", "year": "2018" }, { "authors": "Richard Socher; Danqi Chen; Christopher D Manning; Andrew Y Ng", "journal": "NeurIPS", "ref_id": "b32", "title": "Reasoning With Neural Tensor Networks for Knowledge Base Completion", "year": "2013" }, { "authors": "Zhiqing Sun; Zhi-Hong Deng; Jian-Yun Nie; Jian Tang", "journal": "", "ref_id": "b33", "title": "RotatE: Knowledge Graph Embedding by Relational Rotation in Complex Space", "year": "2019" }, { "authors": "K Komal; Etienne Teru; Will Denis; Hamilton", "journal": "", "ref_id": "b34", "title": "Inductive Relation Prediction by Subgraph Reasoning", "year": "2020" }, { "authors": "Guanying Wang; Wen Zhang; Ruoxu Wang; Yalin Zhou; Xi Chen; Wei Zhang; Hai Zhu; Huajun Chen", "journal": "", "ref_id": "b35", "title": "Label-Free Distant Supervision for Relation Extraction via Knowledge Graph Embedding", "year": "2018" }, { "authors": "Hongwei Wang; Fuzheng Zhang; Jialin Wang; Miao Zhao; Wenjie Li; Xing Xie; Minyi Guo", "journal": "ACM Trans. Inf. Syst", "ref_id": "b36", "title": "Exploring High-Order User Preference on the Knowledge Graph for Recommender Systems", "year": "2019" }, { "authors": "Liang Wang; Wei Zhao; Zhuoyu Wei; Jingming Liu", "journal": "", "ref_id": "b37", "title": "SimKGC: Simple Contrastive Knowledge Graph Completion with Pre-trained Language Models", "year": "2022" }, { "authors": "Peifeng Wang; Jialong Han; Chenliang Li; Rong Pan", "journal": "", "ref_id": "b38", "title": "Logic Attention Based Neighborhood Aggregation for Inductive Knowledge Graph Embedding", "year": "2019" }, { "authors": "Shen Wang; Xiaokai Wei; Cícero Nogueira Dos Santos; Zhiguo Wang; Ramesh Nallapati; Andrew O Arnold; Bing Xiang; Philip S Yu; Isabel F Cruz", "journal": "", "ref_id": "b39", "title": "Mixed-Curvature Multi-Relational Graph Neural Network for Knowledge Graph Completion", "year": "" }, { "authors": "Xiaozhi Wang; Tianyu Gao; Zhaocheng Zhu; Zhengyan Zhang; Zhiyuan Liu; Juanzi Li; Jian Tang", "journal": "Trans. Assoc. Comput. Linguistics", "ref_id": "b40", "title": "KEPLER: A Unified Model for Knowledge Embedding and Pre-trained Language Representation", "year": "2021" }, { "authors": "Xiang Wang; Xiangnan He; Yixin Cao; Meng Liu; Tat-Seng Chua", "journal": "", "ref_id": "b41", "title": "KGAT: Knowledge Graph Attention Network for Recommendation", "year": "2019" }, { "authors": "Zihan Wang; Zhaochun Ren; Chunyu He; Peng Zhang; Yue Hu", "journal": "", "ref_id": "b42", "title": "Robust Embedding with Multi-Level Structures for Link Prediction", "year": "2019" }, { "authors": "Zhen Wang; Jianwen Zhang; Jianlin Feng; Zheng Chen", "journal": "", "ref_id": "b43", "title": "Knowledge Graph Embedding by Translating on Hyperplanes", "year": "2014" }, { "authors": "Bing Xu; Naiyan Wang; Tianqi Chen; Mu Li", "journal": "", "ref_id": "b44", "title": "Empirical Evaluation of Rectified Activations in Convolutional Network", "year": "2015" }, { "authors": "Bishan Yang; Wen-Tau Yih; Xiaodong He; Jianfeng Gao; Li Deng", "journal": "", "ref_id": "b45", "title": "Embedding Entities and Relations for Learning and Inference in Knowledge Bases", "year": "2015" }, { "authors": "Tianchi Yang; Linmei Hu; Chuan Shi; Houye Ji; Xiaoli Li; Liqiang Nie", "journal": "ACM Trans. Inf. Syst", "ref_id": "b46", "title": "HGAT: Heterogeneous Graph Attention Networks for Semi-supervised Short Text Classification", "year": "2021" }, { "authors": "Maoyuan Zhang; Qi Wang; Wukui Xu; Wei Li; Shuyuan Sun", "journal": "", "ref_id": "b47", "title": "Discriminative Path-Based Knowledge Graph Embedding for Precise Link Prediction", "year": "2018" }, { "authors": "Richong Zhang; Yue Wang; Yongyi Mao; Jinpeng Huai", "journal": "ACM Trans. Inf. Syst", "ref_id": "b48", "title": "Question Answering in Knowledge Bases: A Verification Assisted Model with Iterative Training", "year": "2019" }, { "authors": "Wen Zhang; Bibek Paudel; Liang Wang; Jiaoyan Chen; Hai Zhu; Wei Zhang; Abraham Bernstein; Huajun Chen", "journal": "", "ref_id": "b49", "title": "Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning", "year": "2019" }, { "authors": "Yufeng Zhang; Weiqing Wang; Wei Chen; Jiajie Xu; An Liu; Lei Zhao", "journal": "", "ref_id": "b50", "title": "Meta-Learning Based Hyper-Relation Feature Modeling for Out-of-Knowledge-Base Embedding", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 53.47, 575.02, 241.56, 26 ], "formula_id": "formula_0", "formula_text": "F logic = {(𝑓 logic 𝑚 , 𝜆 logic 𝑚 )} 𝑀 𝑚=1 , where 𝑓 logic 𝑚" }, { "formula_coordinates": [ 3, 445.99, 214.65, 68.55, 11.72 ], "formula_id": "formula_1", "formula_text": "G logic 𝑚 = {𝑔 logic 𝑚𝑛 } 𝑁 𝑚" }, { "formula_coordinates": [ 3, 415.18, 255.07, 95.06, 11.4 ], "formula_id": "formula_2", "formula_text": "F corr = {(𝑓 corr 𝑣 , 𝜆 corr 𝑣 )} 𝑉 𝑣=1" }, { "formula_coordinates": [ 3, 370.43, 307.49, 188.31, 22.1 ], "formula_id": "formula_3", "formula_text": "𝑓 corr 𝑣 mpq : 𝑓 logic 𝑚 path 𝑞 𝑓 logic 𝑚 ,𝑓 ′logic mp ---------------→ 𝑓 ′logic mp ,(2)" }, { "formula_coordinates": [ 3, 343.32, 335.19, 22.27, 12.45 ], "formula_id": "formula_4", "formula_text": "𝑓 ′𝑙𝑜𝑔𝑖𝑐 𝑚𝑝" }, { "formula_coordinates": [ 3, 318.4, 447.8, 239.8, 19.92 ], "formula_id": "formula_5", "formula_text": "(𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) ∧ • • • ∧ (𝑥 𝑘 , 𝑟 𝑘 , 𝑥 𝑘+1 )" }, { "formula_coordinates": [ 4, 133.38, 236.73, 161.2, 9.75 ], "formula_id": "formula_6", "formula_text": "𝜙 (𝑒 𝑖 , 𝑟 𝑘 , 𝑒 𝑗 ) = e 𝑇 𝑖 R 𝑘 e 𝑗 ,(3)" }, { "formula_coordinates": [ 4, 317.96, 96.21, 239.2, 24.28 ], "formula_id": "formula_7", "formula_text": "(𝑒 ′ 1 , 𝑟 1 , 𝑒 ′ 2 ) ∧ (𝑒 ′ 2 , 𝑟 2 , 𝑒 ′ 3 ) → (𝑒 ′ 1 , 𝑟, 𝑒 ′ 3 ), where 𝑒 𝑖 , 𝑒 ′ 𝑖 ∈ E 𝑜 and (𝑒 ′ 1 , 𝑟 1 , 𝑒 ′ 2 ) ∉ O." }, { "formula_coordinates": [ 4, 318.4, 290.2, 239.8, 19.91 ], "formula_id": "formula_8", "formula_text": "(𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) ∧ • • • ∧ (𝑥 𝑘 , 𝑟 𝑘 , 𝑥 𝑘+1 )" }, { "formula_coordinates": [ 4, 379.02, 329.21, 179.72, 13.8 ], "formula_id": "formula_9", "formula_text": "𝜆 logic 𝑚 = sim(path body , path head ),(4)" }, { "formula_coordinates": [ 4, 360.39, 436.91, 198.35, 28.38 ], "formula_id": "formula_10", "formula_text": "path body = r 1 + r 2 + • • • + r 𝑘 , path head = r, 𝜆 logic 𝑚 = ||path body -path head || 2 ,(5)" }, { "formula_coordinates": [ 4, 372.07, 516.67, 186.67, 42.22 ], "formula_id": "formula_11", "formula_text": "path body = M 𝑟 1 + M 𝑟 2 + • • • + M 𝑟 𝑘 , path head = M 𝑟 , 𝜆 𝑙𝑜𝑔𝑖𝑐 𝑚 = ||path body -path head || 𝐹 ,(6)" }, { "formula_coordinates": [ 4, 399.3, 638.07, 159.44, 14.61 ], "formula_id": "formula_12", "formula_text": "𝜆 corr 𝑣 mpq = 𝜆 logic 𝑚 • 𝜆 ′logic mp ,(7)" }, { "formula_coordinates": [ 5, 53.53, 274.83, 240.07, 24.48 ], "formula_id": "formula_13", "formula_text": "′logic 𝑚𝑝 : (𝑥 1 , 𝑟 1 , 𝑥 2 ) ∧ (𝑥 2 , 𝑟 2 , 𝑥 3 ) → (𝑥 1 , 𝑟, 𝑥 3 ) (with (𝑥 1 , 𝑟 1 , 𝑥 2 )" }, { "formula_coordinates": [ 5, 53.66, 299.32, 150.84, 16.54 ], "formula_id": "formula_14", "formula_text": "𝜆 ′logic 𝑚𝑝 = ∥M 𝑟 + M 𝑟 -1 2 -M 𝑟 1 ∥ 𝐹 , where M 𝑟 -1" }, { "formula_coordinates": [ 5, 112.9, 676.99, 181.62, 32.72 ], "formula_id": "formula_15", "formula_text": "𝐼 (𝑎 ∧ 𝑏 ) = 𝐼 (𝑎) • 𝐼 (𝑏 ), 𝐼 (𝑎 ∨ 𝑏 ) = 𝐼 (𝑎) + 𝐼 (𝑏 ) -𝐼 (𝑎) • 𝐼 (𝑏 ), 𝐼 (¬𝑎) = 1 -𝐼 (𝑎),(8)" }, { "formula_coordinates": [ 5, 368.31, 538.07, 165.28, 8.43 ], "formula_id": "formula_16", "formula_text": "𝐼 (𝑎 → 𝑏) = 𝐼 (¬𝑎 ∨ 𝑏) = 𝐼 (𝑎) • 𝐼 (𝑏) -𝐼 (𝑎) + 1." }, { "formula_coordinates": [ 6, 85.69, 186.44, 208.9, 14.09 ], "formula_id": "formula_17", "formula_text": "𝐼 (𝑔 corr 𝑣𝑤 |S) = 𝐼 (𝑔 logic 𝑏 ) • 𝐼 (𝑔 ′logic ℎ |S) -𝐼 (𝑔 logic 𝑏 ) + 1,(9)" }, { "formula_coordinates": [ 6, 90.57, 334.65, 200.91, 64.57 ], "formula_id": "formula_18", "formula_text": "1 2 • ∑︁ 𝑡𝑣𝑛 ∈VN (𝑠 (𝑡 𝑣𝑛 ) -𝐼 (𝑡 𝑣𝑛 ) ) 2 + 𝐶 • ∑︁ 𝑚,𝑛 𝜉 logic 𝑚𝑛 + ∑︁ 𝑣,𝑤 𝜉 corr 𝑣𝑤 such that 𝜆 logic 𝑚 (1 -𝐼 (𝑔 logic 𝑚𝑛 |𝑆 ) ) ≤ 𝜉 logic 𝑚𝑛 𝜆 corr 𝑣 (1 -𝐼 (𝑔 corr 𝑣𝑤 |𝑆 ) ) ≤ 𝜉 corr 𝑣𝑤 𝜉 logic 𝑚𝑛 ≥ 0, 𝜉 corr 𝑣𝑤 ≥ 0, 0 ≤ 𝑠 (𝑡 𝑣𝑛 ) ≤ 1, (10" }, { "formula_coordinates": [ 6, 291.48, 362.79, 3.04, 7.06 ], "formula_id": "formula_19", "formula_text": ")" }, { "formula_coordinates": [ 6, 90.67, 487.29, 203.86, 50.23 ], "formula_id": "formula_20", "formula_text": "𝑠 (𝑡 𝑣𝑛 ) = 𝐼 (𝑡 vn ) + 𝐶 • ∑︁ 𝑚,𝑛 𝜆 logic 𝑚 ∇ 𝑠 (𝑡vn ) 𝐼 (𝑔 logic 𝑚𝑛 |𝑆 ) + ∑︁ 𝑣,𝑤 𝜆 corr 𝑣 ∇ 𝑠 (𝑡vn ) 𝐼 (𝑔 corr 𝑣𝑤 |𝑆 ) 1 0 ,(11)" }, { "formula_coordinates": [ 6, 368.7, 116.99, 189.98, 37.01 ], "formula_id": "formula_21", "formula_text": "a (𝑙 ) 𝑖 = W (𝑙 ) • ∑︁ (𝑒 𝑖 ,𝑟,𝑒 𝑗 ) ∈O∪VN 𝛼 (𝑙 ) 𝑟 h (𝑙 -1) 𝑗 , h (𝑙 ) 𝑖 = tanh a (𝑙 ) 𝑖 + h (𝑙 -1) 𝑖 W (𝑙 ) ,(12)" }, { "formula_coordinates": [ 6, 510.57, 161.25, 8.05, 6.25 ], "formula_id": "formula_22", "formula_text": "(𝑙 )" }, { "formula_coordinates": [ 6, 360.49, 266.37, 194.14, 18.21 ], "formula_id": "formula_23", "formula_text": "𝛼 NN 𝑗 |𝑖,𝑞 = softmax(𝛽 𝑗 |𝑖,𝑞 ) = exp(𝛽 𝑗 |𝑖,𝑞 ) (𝑒 𝑖 ,𝑟𝑞 ,𝑒 𝑗 ′ ) ∈O∪VN exp(𝛽 𝑗 ′ |𝑖,𝑞 )" }, { "formula_coordinates": [ 6, 330.71, 297.6, 171.77, 9.43 ], "formula_id": "formula_24", "formula_text": "𝛽 𝑗 |𝑖,𝑞 = LeakyReLU(u • [W 𝑒 h 𝑖 ; W 𝑞 z 𝑞 ; W 𝑒 h 𝑗 ])" }, { "formula_coordinates": [ 6, 380.8, 354.72, 177.94, 22.04 ], "formula_id": "formula_25", "formula_text": "h 𝑂 𝑖 = ∑︁ (𝑒 𝑖 ,𝑟,𝑒 𝑗 ) ∈ O∪V N 𝛼 NN 𝑗 |𝑖,𝑞 • h 𝐼 𝑗 ,(13)" }, { "formula_coordinates": [ 6, 346.81, 630.78, 211.87, 21.1 ], "formula_id": "formula_26", "formula_text": "min E,R 1 | L | ∑︁ L 𝑙 (𝐼 (𝑡 𝑙 ), 𝑦 𝑙 ) + 1 | V N | ∑︁ VN 𝑙 (𝐼 (𝑡 𝑣𝑛 ), 𝑠 (𝑡 𝑣𝑛 ) ),(14)" } ]
2023-05-17
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b6", "b7", "b4" ], "table_ref": [], "text": "A Bayesian network (BN) is a directed acyclic graph (DAG) where nodes of the graph are random variables while the directed edges represent dependent relationships in the form of probability dependencies among the variables. The entire graph thus represents a compact representation of a joint probability distribution over the set of random variables, where each node and its parents are associated with a conditional probability distribution (CPD). Reasons for adopting Bayesian network models over other machine learning models such as random forests, gradient-boost models, or neural networks usually aim towards 1) Interpretability 2) Uncertainty Quantification 3) Small Data Requirements, and 4) efficient computational complexity. Furthermore, since the entire joint probability distribution is described completely by the network, generating new data from the network is also a common application.\nOne challenge of modeling with BNs is in constructing this DAG that accurately and robustly characterizes the underlying relationships in data Scutari [2010]. Learning a BN is typically conducted in two phases, whereby the first phase constructs the topology (network structure) of the network and the second phase estimates the parameters of the CPDs given the fixed structure. As parameter estimation is considered a well-studied problem, especially with the availability of data, learning the structure is more challenging. A long literature study of BN structure learning includes usually two different types of algorithms: 1) constraint-based which look for conditional independencies (CIs)in the data and build the DAG consistent to these CIs and 2) score-based which convert the problem to Author's status: Any statements, ideas, and/or opinions expressed in this paper are strictly those of the author and do not necessarily reflect those of PwC Switzerland an optimization problem, where a score is used to measure the fitness of a DAG to the observed data with a search strategy employed to maximize the score over the space of possible DAGs (see e.g. Scutari [2015] for more on constraint-based algorithms, and Kasza and Solomon [2011] for more on scorebased approaches). No matter the methodology or combination of methodologies employed, finding an optimal structure has been shown to be NP-hard. In light of this, we will focus on the problem of generating network structures for the data, as defining CPDs with the use of data is a much easier problem once conditional dependencies have been defined." }, { "figure_ref": [], "heading": "A. Motivation and contributions", "publication_ref": [ "b3", "b0", "b8", "b1" ], "table_ref": [], "text": "In the past few years, there has been a new genre of building supervised machine learning models called Tsetlin machines. First introduced by O.C. Granmo in Granmo [2018], Tsetlin machines have been gaining attraction in some machine learning communities for their many computational and modeling benefits. Mostly notably, TMs capture patterns in data using conjunctive clauses in propositional logic and are thus intrinsically interpretable learning systems, and they are also typically fast to train, benefiting from parallelization and many recent regularization advancements in training algorithms such as in Abeyrathna et al. [2021], Sharma et al. [2021], even without the use of GPUs.\nBut the most useful benefit of Tsetlin machines for this study presented in this paper is the ease in which global importance features can be extracted from the propositional logic of an underlying learned Tsetlin machine on data. In Blakely and Granmo [2021] a simple method for extracting global importance features was introduced and compared with popular approaches such as SHAP for gradient boost models showing comparable results without the need of a post-hoc approach. We take advantage of this powerful approach to derive a fairly straightforward method for constructing Bayesian network topologies of data.\nThe motivations of learning Bayesian Network structures from data using TMs is multifold:\n• Bayesian Networks can offer a compact visual overview of data dependencies • Through a very powerful reinforcement learning mechanism, TMs can often learn literal structure already with small data\n• Contributes to the validation and verification of TMs in building predictive models • Provides a way of building hybrid predictive models We will touch on all these points throughout this paper, with an exposition of the approach using both algorithms and examples. The main contribution of this paper is to demonstrate that TM models can provide an alternative approach to building and investigating Bayesian network structures, or at minimum provide additional computational medium for a hybrid-type approach using other classical resources." }, { "figure_ref": [], "heading": "B. Paper structure", "publication_ref": [], "table_ref": [], "text": "The paper will be organized as follows. We first review Tsetlin Machines with a focus on constructing interpretable features of the data from the resulting propositional logic expressions that are learned in training. We then provide a simple algorithm for constructing a Bayesian Network given (unlabeled) data using a modified version of the standard TM learning algorithm. To explore validation of the method, we will propose an approach where we first sample from a known Bayesian network and then apply the algorithm to the sampled data to build a network and verify how similar the networks are. We then proceed to a larger statistical approach to understand how well our approach does in generating networks that are close to the ground truth. To understand the proximity to the ground truth network, we apply a kernel graph based approach to compare a TM generated network with the proposed network from which the underlying data was simulated. We do this on varying number of node size networks, and also on varying the models used in generating the networks.\nFinally, in the last sections, we apply our approach to two real-world data sets, namely a Supply Chain data set and a Bank Customer Churn data set from an anonymous bank. The motivation here is to understand how well our method does in building partial dependencies on certain features in the data which are well known in other models for such types of industrial data sets. Furthermore, in the Supply Chain dataset, only 100 samples are available, with over 20 features, making it very challenging for traditional machine learning predictive models such as neural networks and gradient boost approach which typically require lots of data." }, { "figure_ref": [ "fig_0" ], "heading": "II. TSETLIN MACHINE ARCHITECTURE", "publication_ref": [ "b2" ], "table_ref": [], "text": "In this paper we employ a recently developed architecture for TM learning originally proposed in Glimsdal and Granmo [2021]. The architecture shares an entire pool of clauses between all output classes, while introducing a system of weights for each class type. The learning of weights is based on increasing the weight of clauses that receive a Type Ia feedback (due to true positive output) and decreasing the weight of clauses that receive a Type II feedback (due to false positive output). This architectural design allows to determine which clauses are inaccurate and thus must team up to obtain high accuracy as a team (low weight clauses), and which clauses are sufficiently accurate to operate more independently (high weight clauses). The weight updating procedure is given in more detail in Appendix A. Here we illustrate the overall scheme in Figure 1. Notice that each clause in the shared pool is related to each output by using a weight dependant on the output class and the clause. The weights that are learned during the TM learning steps and are multiplied by the output of the clause for a given input. Thus clause outputs are related to a set of weights for each output class (in this case three classes). One of the clauses also contains four literals, the pattern learned for that clause. In our methodology for building candidate networks, both the frequency of literals across the various clauses and the weights will play an important role in determining the global feature strength in learning data outputs, which we give a brief overview of next." }, { "figure_ref": [], "heading": "III. GLOBAL INTERPRETABILITY", "publication_ref": [ "b1" ], "table_ref": [], "text": "In this section we give a brief overview of the expressions for global interpretability in TMs that have been modified from Blakely and Granmo [2021]. In this paper, global strength indicators for each feature were derived based on output class prediction. This required constructing the strength based on the polarity of the clause as we wanted to understand strength of a feature dependent on an output of a class.\nWe represent a set of d f literals for the i-th feature f i as S fi = (l 1 , l 2 , . . . , l d f ). and map all the literals from all the features of the data into one macro set of literals which we will denote as F L = ∪ i S fi . Thus the set of literals I n c learned from any TM learning process for class output n and clause c is a subset of F L . To formulate our global feature importance expressions, associated with each clause c is an integer weight w n [c]. These weights will be used to express the most salient features in predicting an output for a feature f i .\ny i ∈ {y 1 , y 2 , . . . , y n }, y i = {0, 1} and the upper index i, which refers to a particular output variable. For simplicity in exposition, we assume that the corresponding literal index sets I i j , Īi j , for each output variable, i = 1, . . . , n, and clause, j = 1, . . . , m, have been found under some performance criteria of the learning procedure, described in Section A. With these sets fixed, we can now assemble the closed form expressions.\nWe denote by I n c the set of literals (combined with negated literals) learned from the TM for class output n and clause c. Associated with each clause c is an integer weight w n [c], for all clauses c = 1, . . . , C. Specifically, we compute feature strength for a given output variable and kth bit of feature f u as follows:\ng[f i ] ← n c |w n [c]| l k ∈S f i |l k ∈I n c (1)\nFor any feature f i , this expression sums up the absolute value of the weights w n [c] over all class outputs 1 ≤ n ≤ N and all clauses c ∈ C such that the literals belong to both the feature's set of literals S fi and the clause c ∈ C. Once computed for all features f i of the data, trained on an output variable y, we can assess the most important features in any prediction of a collection of input variables by ranking the values of each g[f i ]. In other words, for any given feature, the frequency of its set of literals inclusion across all clauses multiplied by the weight of that clause for the output class polarity governed by the governs its global importance score. This score thus reflects how often the feature is part of a pattern that is important to making a certain class prediction. Notice that we are interested in indices pertaining any polarity (and thus weight) of the clauses C n c (X) since these are the clauses that contain references to features that are beneficial for predicting R n .\nFigure 2 shows a graphical representation of how the feature strengths are compiled after a TM training round. We look in each clause and count how many literals from the different features exist and place them in a map, incrementing by the absolute value of the weight for that clause. This way, the highest valued strengths feature the most amount of literals spread out over as many clauses. The weights matter because they are summed across the frequencies of literals. A very small weight compared to the others will contribute very little (as that clause made little contribution to a prediction). More relevant clauses (and thus literals), will be more represented with the larger weights.\nThis gives us quick access to feature strength in absolute terms without regard to any output prediction class, which, as we'll see is relevant for when finding possible parent or child nodes in the Bayesian network candidate structures.\nWith the feature strength indicator defined, our next step is to introduce an approach to building a candidate Bayesian network structure from data." }, { "figure_ref": [], "heading": "IV. METHOD", "publication_ref": [], "table_ref": [], "text": "Formally, a Bayesian network is a pair B = (G, P ), where G is a DAG that encodes a joint probability distribution P over a vector of random variables F = (f 1 , . . . , f M ) with each node n i of the graph representing a variable in F. The DAG can be represented as a vector G = (P n1 , . . . , P n M ) where each P ni is a subset of the index set V = {1, . . . , M } and specifies the parents of node n i in the graph." }, { "figure_ref": [ "fig_1" ], "heading": "Fig. 2. Assembling global feature strengths from contributing features in clauses", "publication_ref": [], "table_ref": [], "text": "There are typically three types of nodes (features) n i in a Bayesian network which we will need to identify a priori in our network generating algorithm\n• Parameters: These are the predictors for which we'd like to build the model. They have no dependencies, thus no children, and are often parameters such as age, gender/sex, Location/Geography, time/day, etc. • Observed features: These are features which are observed and could have conditional dependencies, or be dependencies for other nodes. • Unobserved features: These are features which are not directly observed and are being inferred from evidence or predicted. They could have conditional dependencies, or be dependencies for other nodes. These are typically labels or also could be latent variables.\nWe now turn to the application of constructing Bayesian Network representations of data from the global features derived from weighted TMs. The approach is essentially a search problem that consists of two parts: scoring the top features for each targeted feature in the data using TM learning, and then a search algorithm that creates parent to child relationships based on most salient features found from the TMs.\nWe assume a data set X comprised of a set of M features F = f 1 , . . . , f M . Each feature f i we also assume can be represented by d fi literals. For example, a feature with values low, medium, high would be represented by three literals (l 1 , l 2 , l 3 ) with each l k ∈ (0, 1). Low would be (1, 0, 0), medium (1, 1, 0), and high (1, 1, 1). Starting with f 1 , the goal is to traverse all variables of the data, where each variable is independently treated as an output variable of an independent TM model. For each variable, we learn which top K features f j , f j+1 , . . . , f j+K ∈ F have the most impact in predicting the variable as an output for a model. These top K features are then proposed as candidate parent or child nodes of the targeted output variable.\nTo do this we apply R rounds of learning for each variable f i as an output target, and apply the global feature strength indicator 1 to all other features of the data set X. After each round, we aggregate all the feature strengths together and propose the top K features as parent nodes of f i .\nIt is important to note that in learning a variable f i , a balanced set of training data X I ⊂ X with regard to the output values of f i is helpful to ensure a fair assessment and reinforcement of literals for each feature.\nThe construction of the Bayesian network candidate is done in three simple steps. The first step is to compute the most relevant features for all the the parameters, observed, and unobserved features in the the data. Algorithm 1 demonstrates the first step in achieving a straightforward approach to compute a list of most relevant features for each type of variable f i . Algorithm 2 will then apply the second step of the approach to each feature and relevant feature mapping.\nAlgorithm 1 Learning the most salient global features for each variable 1: Input X, N , 2: Initialize an empty map S i :\nf → Z + for each f i ∈ F 3: for each variable f i ∈ F do 4:\nSample training set w.r.t outputs values of f i 5:\nLearn N rounds of > 0 epochs 6:\nfor each round in N do 7:\nExtract the top K features and add to the map S i 8:\nend for 9: end for 10: return S i Algorithm 1 begins by initializing a map for each variable f i that will be used to store the aggregation of the feature strengths g[f • ] when applying a learning round of a TM model. Next, we traverse all the variables of the data set X, in no particular order, whereby fixing each variable f i as an output variable when applying the TM learning rounds. Each possible value of the the variable f i is thus treated as an output class y = (y 1 , y 2 , . . . , y d f ) where d f we recall is the dimension of the literal set S fi . In learning the output class given an insample training set of X, we essentially learn which features were most salient when making a prediction for all output classes combined. The top K features strengths g[f i ] are added to the map as\nS i := S i + g[f i ].\nAfter M rounds of learning, we sort the aggregate feature strengths by rank of highest to lowest. These ranked features are candidates for parent nodes of f j . To this end, we add the top P features to a node list for feature f j . This is represented in Figure 3, where we see that for a feature node (output) A, there are three features that provide high relevance f x , f y , f z , and after three independent rounds of training, f x has the largest aggregate strength. We the propose the feature f x as a parent node for A.\nThe output of Algorithm 1 is a map from each features f i to a list of the top K features. For each feature, sorted by strength, the map will look like the table in I, with K = 3. Before we introduce Algorithm 2, we define the notion of lower strength direction. For an edge between two nodes n i and n j respective of features f i and f j , the lower strength direction is the direction from the lower total feature strength to the higher. Namely, we sum up the feature strengths computed for each feature in regards to all the features, and the one with the lowest strength pointing in direction to the highest strength is the lowest strength direction. We will utilize this to remove competing edges between two nodes.\nOnce the map table for each node is constructed, this algorithm is applied to build the first candidate network structure. It first passes over each parameter in the data X independently of all the observed and any unobserved features. We find the features with the most impact in predicting that parameter and construct a parent node from the parameter to the global feature. After all the parameter features have been processed, we then proceed with finding any child nodes of the observable features. We then end with doing the same process with any unobserved features or labels/latent variables.\nIn choosing each parent (for parameter nodes) or child node, beginning with the top feature, the algorithm begins by proposing the most frequently assigned top feature as the first node in the network. This feature being the most explanatory power for much of the data and thus a node and an edge is drawn between each of these pairings between the top feature node. We then choose the second most frequented feature in the map of top features and propose it as a parent/child node, introducing edges between it and the nodes proposing that feature as a parent node. Each time we present a new edge, we must first verify that no cycle is introduced and that edge doesn't already exist. To ensure this, we measure the strengths end for 23: end procedure of each feature relative to the node child node, and choose the edge which has a higher feature strength.\nContinuing this until all top feature nodes are exhausted reveals the first candidate for the network. The final step in generating the network is to validate the network in that no cycles have been introduced. We again apply the lowest strength direction to ensure that maximal strengths on all edges remain. After Algorithm 2 has been applied, we apply a depthfirst search algorithm to locate any cycles between 3 or more nodes. If a cycle is detected, we loop through all the edges in the cycle and remove the edge that has the lowest strength direction in the cycle that was introduced. Formally, this is given in Algorithm 3\nAlgorithm 3 Cycle detection 1: Φ X , e ∈ E 2: for each node n i ∈ Φ do 3: if cycle of length K exists then 4: φ → {n i , n i,1 , . . . , n i,K } 5:\nfor each possible edge in cycle e ∈ φ do 6:\nif if e has minimum strength then end if 11: end for After removing any cycles from the network, a first generated DAG of the data has been produced. We can deduce from the three algorithmic steps that the following properties are ensured.\n• All designated parameters of the model have no dependencies (i.e. no child nodes) introduced. This is due to two properties designed in Algorithm 2, namely 1) that parameter nodes only propose parent nodes from their feature strength list, and 2) that all other nodes will only propose all their feature nodes as child nodes (dependencies) • All edges in the network are designated to find and connect conditional dependencies. This is due to the global interpretability property of TMs, that when learning on any labeled feature, the clauses will assemble literals from features that design patterns which are beneficial towards predicting that particular label value.\n• No cycles are introduced in the network. This is ensured by the last step 3 Before we begin our numerical examples and validation of the network generation, we first comment on some practical tips when applying the above algorithms. In practice, step 5 in Algorithm 1 is quite crucial as it allows as to control the \"information\" that is distributed throughout the network. We will give strategies on this step in the following sections, but overall, we have found the best results when N , the number of iterations of the TM learning procedure, is quite large (say N > 20). Secondly, we also recommend using multiple TM models to boost robustness when building the map of important features. This will help ensure that the most crucial global features for each node are chosen. Now we move to the empirical section where we first validate our approach, and then offer two numerical examples on data where no known networks have been built before and the data comes from real-world industry scenarios." }, { "figure_ref": [ "fig_3" ], "heading": "V. CAR INSURANCE", "publication_ref": [ "b6" ], "table_ref": [], "text": "To clearly demonstrate how the Algorithm 1 and Algorithm 2 builds a candidate Bayesian network model, in this section we will go over a full example where we have sampled data directly from a well-known Bayesian network. The network is used for car insurance risk estimation of the expected claim costs for a car insurance policy holder (see Scutari [2010] for more info on the model). Figure 4 shows the network which contains 27 nodes, of which 15 are observable, and over 1400 parameters. The hidden nodes (unobserved) are shaded and output nodes are shown with bold ovals.\nWe begin by simulating 5000 independent samples from the full joint probability definition of the network. For each variable, we select a balanced training set with nearly identical samples of all outputs of that variable to ensure the most robust literal pattern design in the clauses as possible. In our first variable, GoodStudent, a two-level factor with levels False and True, following Algorithm 1 with 20 rounds of training, we've collected the top features and them in a list of candidate. Figure 5 showing the top features when predicting GoodStudent variable using the balance training set. Clearly, Age is shown to be the most prominent in this sample and is Continuing along, very every node, the top 5 features are computed using 20 rounds of learning for the next several nodes. RiskAversion having PropCost, Age, and CarValue as top parent candidates. The table in II shows in order of computation, the top 3 features for every node which completes the first stage in building a candidate Bayesian network structure. In the Top Feature columns, we use bold font to highlight that the top feature node is a child of the targeted node, namely that the targeted node has dependencies in the TopFeatures and italic font to denote that they are an ancestor.\nComparing the table of the top features chosen for each node with the original Bayesian network, we see further details in the accuracy of choosing potential parent node candidate. Especially for nodes with two or more parent nodes, we notice high accuracy in choosing those nodes as top features after several rounds of TM learning. For example, SeniorTrain has RiskAversion and Age as two parent nodes, and were computed as the top two features. OtherCar has SocioEcon as a parent node and is also a top feature. In the case a node has no parents, namely, it's a prior probability that is used for connected nodes, the feature strength selection process yields similar results, but not as robust as with nodes predicting their parent nodes.\nFor example, Age is the parent node providing observable prior information for two of its top features. ThisCarCost is the only node which had top features of which none were either true parent nodes of grandparent nodes in the underlying network, nor were child nodes for any feature. SocioEcon, with MakeModel and RiskAversion as its top two features, were not parent nodes in the underlying network, but were child nodes, thus having immediate dependencies for those nodes.\nWith the table of nodes their top features estimated, we can now apply algorithm 2 that essentially proposes the child or parent node candidates and then prunes the network. The second pass of the algorithm builds the structure while also respecting both parameter and observed node types. We begin by choosing the two parameters Age, and Mileage and find the parent nodes by taking the TopFeature, in this case RiskAversion for Age which suggests that RiskAversion was typically the most prominent feature in predicting the age of the driver given all other 26 variables. Next we consider our observed variables, in this case GoodStudent, followed by choosing the first top feature which in this case is Age. With the next candidate node chosen as Age, the algorithm then selects the top feature extracted RiskAversion. Now selecting RiskAversion as the next candidate node, we see that Age is selected as the top feature. which creates a scenario in which a root node is also a child node. In the case of an edge introduced in the opposite direction, the higher feature is chosen, and in this case RickAversion is directed towards Age as it has the higher feature direction. Thus Age becomes the parent node for RiskAversion. This is shown in Figure 8. With the first three nodes being introduced, GoodStudent, Age, and RiskAversion with Age determined as the parent for both GoodStudent and RiskAversion, the next step is to introduce a new node candidate randomly, SocioEcon. Computing the most prominent feature yields MakeModel with a second being RiskAversion.\nFigure 8 depicts the candidate network after a first round of the 1. Notice that Age is the dependency on many nodes at the bottom of the network structure. This corresponds to ground truth network, except for the missing of several parameter nodes which we introduce next.\nCompleting the Algorithm 2 over all observed nodes, we arrive to a network close to ground truth, with the only additional edges on the graph being AntiLock being partially dependent on CarValue and Mileage being partially dependent on Age. Both of these edges could be argued to be included in the final network, but as we are wanting to see how close we are to the original network from which we simulated the data, we will need an approach to compare \"distance\" between Fig. 8. First round of the algorithm yields a network close to ground truth graphs. The best method for this is in so called \"graph kernels\" which are functions that compute an inner product on graphs." }, { "figure_ref": [], "heading": "A. Comparison", "publication_ref": [ "b10" ], "table_ref": [ "tab_3", "tab_4" ], "text": "To compare the proximity to the ground truth network, we apply a kernel graph based approach to compare a TM generated network with the proposed network from which the underlying data was simulated. We would like to understand how similar two nodes are in the graph, comparing both outgoing edges and incoming edges, as well as how similar the node's neighbors are.\nGraph kernels are used widely in many disciplines and typically follow three steps, which we will use in our comparison study. For more on graph kernels, Vishwanathan et al. [2008] contains an in-depth study with various applications.\n• Compute Node Similarity: This step computes the pairwise similarity between nodes in two graphs, iterating over all nodes in both graphs and calculating the similarity between corresponding nodes. • Compute Edge Similarity: Computes the pairwise similarity between edges in the two graphs similar to node similarity, it involves iterating over all edges in both graphs and calculating the similarity between corresponding edges and their direction. • Combine Similarity: Lastly, we combine the node similarity matrix and edge similarity matrix into an overall graph similarity score by averaging the similarities. Since the graph kernel computation depends on the number of nodes and edges of a graph, we normalize the values of the averaging of similarities to get a value between 0 and 1, with 0 being the value comparing a null graph with any nonempty graph, and 1 being all the nodes and edges with their directions are identical. We apply our network structure generating algorithm to four different popular networks. In order to create a robust framework for the comparison, we run our algorithm using 5 different randomly chosen Tsetlin Machine parameterizations. We do this to remove any chance that we could be \"overfitting\" our models for generating the networks. We use a simple strategy of randomly choosing the number of clauses to be a random integer between 1 and 3 times the total number of literals for the data simulated by the network. The threshold parameter is then chosen as a random number between 10 and the number of clauses, coupled with the specificity between 5 and 15. Finally, the maximum number of literals per clause is chosen to be a random value between 3 and the total number literals for the data. Table III shows the randomly chosen parameterizations, with L being the total number of literals for the data.\nThe four networks we simulate data from are as follows:\n• Asia: A small simplified version of a network that may be used to diagnose a doctor's new patients. Each node represents a facet relating to a patient's condition, and each directional edge roughly corresponds to causality. • Child: A larger network modeling uncertainty of birth asphyxia given lung metrics, CO2, x-rays, and other variables (see 9) • Insurance: The network from section V • Diabetes: A large network used for the detection of early signs and symptoms of diabetes. Such models are important to control the side effects of diabetes and Bayesian network models have been used extensively in the literature. The strategy for our study is to generate 10 networks from each model, and compute the similarity score using the kernel graph similarity computation. The average of these scores is then recorded for each model. Table IV shows the results for the four different networks.\nWe see that typically, the medium (20 -50) node networks tend to do best in performance on average. The smallest network had the worse performers in terms of models, with the Child and the Insurance networks the best performers across two of the parameterizations. The very large network (¿ 100 nodes) performs well on high specificity with high amount of literals allowed per clause. It seems that on average, high specificity parameter coupled with allowing more literals per clause seems to work better than being more restrictive on the number of literals per clause (models III and V). This is mostly likely because for very challenging target nodes, allowing more information via literals in the clauses allows for more complex patterns. While generating networks based on a ground-truth model plays an important role in this study, we would also like to see the performance of the network generating algorithm using data that is unlabeled and has no ground-truth for reference.\nHere we rely on intuition and logic along with ensuring the networks generated satisfy the properties of a DAG to derive how well the algorithm does." }, { "figure_ref": [ "fig_7", "fig_0", "fig_8" ], "heading": "VI. SUPPLY CHAIN ANALYSIS", "publication_ref": [ "b9" ], "table_ref": [], "text": "In our final example, we consider a supply chain data set collected from a Fashion and Beauty startup that is based on the supply chain of makeup products Singh [2023]. Supply chain analytics is the process of collecting, modeling, and interpreting data related to the movement of products and services from suppliers to customers.\nThe challenge with this particular data set is that is it quite small: only 100 observations from the supply chain exist, namely one data point for each product. With 23 different variables accounting for the supply chain for each of the products, creating a predictive model for effectively understanding the sensitivities of revenue generated, the impact of how stock and MC attribute to profits, and many of the other variables could be challenging for traditional models such as gradient boost, neural network, or random forests where larger data sets are typically required. Using our TM formulation for building network structures of the data, we will see that even though we only have one observation for each product, achieving a coherent network structure is still quite readily obtainable.\nFigure 10 shows some of the major continuous variables featured in the supply chain data set along with their value frequency. The easiest way to transform the continuous variables into observations easily handled by both TMs and Bayesian networks is to simply attribute the values to low, medium, high, very high. For the categorical variables, such as the ones shown in figure 11, we simply find the frequency each variable and rank them from least frequency to most frequent. The least frequent value will be a 1, second most frequent a 2, and so on. With so few observations, obtaining near optimal hyper parameters for our TM model will be challenging. We propose a simple way around this by building an ensemble model, which is a collection of smaller models each with different number of clauses, thresholds, and specificity values. Our global feature impact model will then be an aggregation of all the literals in each clause in each model.\nWe begin by applying our ensemble learning approach on each of the discretized continuous variables. For this, we set the variable as the target label, thus giving four different values to learn on low, medium, high, very high. Being the first variable we target, is a variable that is most likely determined at the origin of the supply chain. Applying our 5 independent TM models to learn the high impact features, we discover that Availability, and Production volumes are the two features which score very high in prediction impact. Namely, to most accurately predict price, these two variables have the most impact. A third is Stock levels which we will note in our node to feature table VI.\nWith the highest impact being Availability, we now target this variable to learn its most salient features. After one round of training with each of the 5 models, we see that Order quantities and Defect rates play the largest roles, with Manufacturing costs a close third. We then consider Order quantities, and see that Stock levels and Price are the highest predictors with Revenue generated very strong as well.\nContinuing through the rest of the variables, we finalize our node to feature table VI and can start building an initial directed network where each node is a variable from the supply chain.\nIn our first iteration of the Algorithm 1, a collection of vertices and edges are produced. The second iteration, Algorithm 2 runs over the network and looks for competing edges and eliminates any edges with a lower strength direction. This gives the final network as shown in Figure 12. To ensure there are no cycles in the graph, we do a depth first search on all the nodes. In any case there is a cycle, we simply eliminate the edge with the lowest strength direction in the cycle. In analyzing the final network, we see that the TM to BN algorithm has discovered that Product type, Shipping costs, Shipping carriers, Routes, Customer Demographics, and Location are some of the main input parameters into the supply chain. This seems to be consistent with the logistics of supply chains and how they operate in the sense that they are degrees of freedom that can be used to optimize costs in the supply chain. Namely, they will have an impact on costs, maintenance, stock, and lead times.\nThe direct conditionals on Revenue generated has been discovered to be Price, Costs, Number of Products sold, Stock levels and Inspection Results, which from a quick sanity check registers well with typical revenue models. A failed Inspection Results could lead to higher costs, or manufacturing costs which would impact the revenue generated.\nLastly we see that another important observable node in the supply chain Number of products sold is dependent on location, Production Volumes and Lead times. In return, it is computed that Number of products sold influences the defect rate as perhaps defect rates scale with the number of customers using the product.\nWe have also shown two competing directions between the Availability variable and defect rate. Here, the feature strength in the direction from defect rate to Availability is significantly higher than the other direction, making the availability of the product conditionally dependent on the defect rate.\nOn a final note, we mention that although we now seem to have a candidate for a network structure, a fully constructed Bayesian network still requires conditional probability tables across all the parent nodes and their children. If we do not have expert knowledge into these probability distributions, the easiest way is to estimate them empirically. With the network proposal built, generating conditional probability tables for a particular variable node v can be done simply by computing the frequency of each child node's literals of v, and ensuring they are normalized across all combinations of literals for that child." }, { "figure_ref": [ "fig_9" ], "heading": "VII. BANK CUSTOMER CHURN", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "In this last example we will build a candidate network for Bank Customer Churn data. The data in this example is the customer data of account holders from an Anonymous Multinational Bank and the aim of the data is to not only predict the customer churn, but understand when predicted, why does it occur. Every bank wants to hold there customers for sustaining their business for a long period of time. But as customers might leave their banking institution for a variety of reasons, generating a Bayesian network for gaining better insights into customer retention could be useful, e.g. when a customer has a high probability of leaving given their current conditions. Reasons why a banking institution might want to minimize customer turnover is because it is typically much more expensive to sign in a new client than keeping an existing one. It is advantageous for banks to know what leads a client towards the decision to leave the company. Churn prevention allows companies to develop loyalty programs and retention campaigns to keep as many customers as possible.\nThe data contains 15 features, of which 4 are parameters, 10 observable, and 1 label (namely whether the customer has left bank). The features are as follows:\n• CreditScore: can have an effect on customer churn, since a customer with a higher credit score is less likely to leave the bank. • Geography: a customer's location can affect their decision to leave the bank. • Gender: it's interesting to explore whether gender plays a role in a customer leaving the bank. • Age: this is certainly relevant, since older customers are less likely to leave their bank than younger ones. • Tenure: refers to the number of years that the customer has been a client of the bank. Normally, older clients are more loyal and less likely to leave a bank. • Balance: also a very good indicator of customer churn, as people with a higher balance in their accounts are less likely to leave the bank compared to those with lower balances. • NumOfProducts: refers to the number of products that a customer has purchased through the bank.\n• HasCrCard: denotes whether or not a customer has a credit card. This column is also relevant, since people with a credit card are less likely to leave the bank. • IsActiveMember: active customers are less likely to leave the bank. • EstimatedSalary: as with balance, people with lower salaries are more likely to leave the bank compared to those with higher salaries. • Exited: whether or not the customer left the bank.\n• Complain: customer has complaint or not.\n• Satisfaction Score: Score provided by the customer for their complaint resolution. • Card Type: type of card hold by the customer.\n• Points Earned: the points earned by the customer for using credit card. We can identify already the three parameters (predictors) being Geography, Gender, and Age as they have no dependencies. All other features clearly have some kind of partial dependencies, or might also supply conditions for other features. For the observed variables, we look first at CreditScore, EstimatedSalary, and SatisfactionScore which have a few very strong partial dependencies in predicting churn as we'll see.\nTable VII shows the top features of each observed feature.\nWe see that CreditScore has dependencies on Age and Tenure, which is typically in line whith how credit models work for many retail banking institutions. EstimatedSalary is geographically dependent as some countries have higher economic power. For the Satisfaction Score, it seems that it is most dependent on NumOfProducts a customer has purchased through the bank, and how often they use their credit card.\nOur first candidate network for the Bank customer churn data is shown in Figure 13. We see that the label (unobserved) node Exited, namely whether the customer will leave the banking institution is mostly dependent on their CreditScore and EstimatedSalary. In turn, these features are highly dependent on Balance in the account, Age, and Tenure at the institution. This makes sense in most models for customer churn as normally, older clients are more loyal and less likely to leave a bank." }, { "figure_ref": [], "heading": "VIII. CONCLUSION", "publication_ref": [ "b6" ], "table_ref": [], "text": "We proposed an algorithm for constructing a Bayesian network using feature strength indicators derived from the clause weights and literals of a Tsetlin machine. We showed through both empirical evidence in comparing with groundtruth networks as well as through examples where dependencies on certain features in the data are widely know that TMs generating initial networks through data is a very promising area of research. The strengths are numerous, which one being that the method is entirely interpretable which how parent and child nodes are generated, and also in the shear speed of the approach -candidate networks can be generated in milliseconds to seconds, even for large data sets with many features ( 100). While this first proposal is not without drawbacks or challenges, it has shown to be an attractive basis for further research and methodical development. One area that seems like a reasonable direction would be in a hybrid-based structure learning. Consider for example a score based approach such as in the Chow-Liu algorithm Scutari [2010] where instead of mutual information is used to construct a spanning tree, feature strengths derived from TMs are used instead to construct the first sequence of trees.\nAnother drawback is the fact that a good foundation for generating a network should start with knowing the parameters of the model being generated, namely which features of the data do not have any dependencies. This can be challenging for many large data sets without expert witnesses. Furthermore, this approach cannot introduce non-labeled latent variables as all the features that are candidate nodes need to be present in the data.\nWe demonstrated that building a Bayesian network with TMs can also reveal deeper interpretable insights into the data. In our Supply Chain example, even with only 100 data observations from a supply chain, we were able to build a network with a good understanding of the inter dependencies in the network. For the Bank Customer Churn, our network generating algorithms were able to create a network that fit quite well the narrative to how customer exit models behave in general.\nOur future work will be in several areas in improving the approach. First, we would like to improve on algorithms 1 and 2 in both filtering for Top Features in a more constructive way by perhaps working with a newly developed Type-III feedback system (paper to appear). This method seeks to remove in a more direct way literals from features which do not add any value to clause feedback and thus predictions overall. The idea here would be that more literals would be penalized and thus the final Top Features would have truly more impact. The second area is to better design how competing directions in the network are resolved. Here we simply take the higher feature strength between two competing connections. Lastly, our next applications of interest will be in designing temporal/dynamic networks on time-dependent data, where nodes can be connected via lags in the data." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Availability ProdVolumes Availability Order quantities Defect rates Availability Supplier name Availability Location Shipping costs Lead time Order quantities Location Price ProdVolumes Lead time Availability ProdVolumes ProdVolumes" }, { "figure_ref": [], "heading": "APPENDIX", "publication_ref": [ "b5" ], "table_ref": [], "text": "In this subsection, we give some more details of the TM with weights model. The learning of weights is based on increasing the weight of clauses that receive true positive output (Type Ia feedback) and decreasing the weight of a clause that receive false positive outputs (penalizing Type II feedback). The motivation is to determine which clauses are inaccurate and thus must team up to obtain high accuracy as a team (low weight clauses), and which clauses are sufficiently accurate to operate more independently (high weight clauses). The weight updating procedure is summarized in Algorithm 4. Here, w i is the weight of clause C i at the n th training round (ignoring polarity to simplify notation). The first step of a training round is to calculate the clause output. The weight of a clause is only updated if the clause output C i is 1 and the clause has been selected for feedback (P i = 1). Then the polarity of the clause and the class label y decide the type of feedback given. That is, like a regular TM, positive polarity clauses receive Type Ia feedback if the clause output is a true positive, and similarly, they receive Type II feedback if the clause output is a false positive. For clauses with negative polarity, the feedback types switch roles. When clauses receive Type Ia or Type II feedback, their weights are updated accordingly. We use the stochastic searching on the line (SSL) automaton to learn appropriate weights. SSL is an optimization scheme for unknown stochastic environments Oommen [1997]. The goal is to find an unknown location λ * within a search interval [0, 1]. In order to find λ * , the only available information for the Learning Mechanism (LM) is the possibly faulty feedback from its attached environment E. In SSL, the search space λ is discretized into N points, {0, 1/N, 2/N, ..., (N -1)/N, 1} with N being the discretization resolution. During the search, the LM has a location λ ∈ {0, 1/N, 2/N, ..., (N -1)/N, 1}, and can freely move to the left or to the right from its current location. The environment E provides two types of feedback: E = 1 is the environment suggestion to increase the value of λ by one step, and E = 0 is the environment suggestion to decrease the value of λ by one step. The next location of λ, i.e. λ n+1 , can thus be expressed as follows:\n(2)\nAsymptotically, the learning mechanics is able to find a value arbitrarily close to λ * when N → ∞ and n → ∞. In our case, the search space of clause weights is [0, ∞], so we use resolution N = 1, with no upper bound for λ. Accordingly, we operate with integer weights. As in Algorithm 4, if the clause output is a true positive, we simply increase the weight by 1. Conversely, if the clause output is a false positive, we decrease the weight by 1. By following the above procedure, the goal is to make low precision clauses team up by giving them low weights, so that they together can reach the summation target T . By teaming up, precision increases due to the resulting ensemble effect. Clauses with high precision, however, obtain a higher weight, allowing them to operate more independently. if (y = 1 and i is odd) or (y = 0 and i is even) then 6:\nif c i = 1 then 7: else: (y = 1 and i is even) or (y = 0 and i is odd) 20:\nif w i > 0 then 22:\nend if" }, { "figure_ref": [], "heading": "24:", "publication_ref": [ "b0" ], "table_ref": [], "text": "for feature k = 1, ..., 2o do 25:\nif l k = 0 then 26:\nType II Feedback end if 36: end for\nThe above weighting scheme has several advantages. First of all, increment and decrement operations on integers are computationally less costly than multiplication based updates of real-valued weights. Additionally, a clause with an integer weight can be seen as multiple copies of the same clause, making it more interpretable than real-valued weighting, as shown in the next section. Additionally, clauses can be turned completely off by setting their weights to 0 if they do not contribute positively to the classification task. For a more detailed explanation of the weighted TM, please refer to Abeyrathna et al. [2021]." } ]
Bayesian networks (BNs) are directed acyclic graphical (DAG) models that have been adopted into many fields for their strengths in transparency, interpretability, probabilistic reasoning, and causal modeling. Given a set of data, one hurdle towards using BNs is in building the network graph from the data that properly handles dependencies, whether correlated or causal. In this paper, we propose an initial methodology for discovering network structures using Tsetlin Machines (TMs).
Generating Bayesian Network Models from Data Using Tsetlin Machines
[ { "figure_caption": "Fig. 1 .1Fig. 1. Showing a collection of clauses and their relationship with weights that are learned during TM training. Each clause contains a set of literals that are also learned during feedback.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Assembling the feature strengths for node (output) A. Most relevant feature fx from the strength map then proposed as parent node for A", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Car insurance network to model expected claims costs", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Top features collected for predicting Age variable.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Showing the global feature strengths for RiskAversion", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig.9. Child network, adapted fromScutari [2010] ", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. Continuous features of the supply chain data set", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. First iteration of the network using top two features for each node", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 13 .13Fig. 13. First candidate network for Bank Customer Churn data", "figure_data": "", "figure_id": "fig_9", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "THREE FEATURES DERIVED FROM EACH NODE", "figure_data": "Node TopFeature 1TopFeature 2 TopFeature 3f 1fxfyfzf 2fx 2fy 2fz 2. . .. . .. . .. . .f Mfx Mfy Mfz M", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "Algorithm 2 Search Algorithm for Network Structure 1: procedure NETWORKSTRUCTURE( S i : f → Z + ) Define f i,k as parent node n j ← P ni", "figure_data": "2:Empty set of nodes Φ X = {}3:for each \"Parameter\" node n i do4:for each TopFeature in list S i do5:Compute most frequent feature f i,k6:7:Add n i and n j to Φ X8:if edge n i → n j exists then9:remove edge10:end if11:end for12:end for13:for each \"Observed/Unobserved\" node n i do14:for each TopFeature in list S i do15:Compute most frequent feature f i,k16:Define f i,k as child node n j ← P ni17:Add n i and n j to Φ X18:if edge n j → n i exists then19:remove edge with lower strength direction20:end if21:end for22:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "MODELS USED TO COMBINE INTO ENSEMBLE MODEL", "figure_data": "Clauses ThresholdSpecificity Max Number LiteralsModel IL + 12259.425Model IIL + 342111.233Model IIIL + 23306.77Model IVL + 58109.855Model VL + 924212.65", "figure_id": "tab_3", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "NETWORKS AND AVERAGE SIMILARITY SCORE ACROSS FIVES", "figure_data": "MODELSNetworkModel IModel IIModel III Model IV Model VAsia0.8340.8550.7910.8460.821Child0.8750.8910.9080.9400.805Insurance0.8310.8740.9240.9810.819Diabetes0.7540.7900.7810.8770.825", "figure_id": "tab_4", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "THREE FEATURES DERIVED FROM EACH NODE", "figure_data": "CreditScoreEstimatedSalary SatisfactionScoreAgeGeographyNumOfProductsTenureGenderPointEarnedEstimatedSalary TenureEstimatedSalary", "figure_id": "tab_6", "figure_label": "VII", "figure_type": "table" } ]
Christian Blakely
[ { "authors": "Kuruge Darshana Abeyrathna; Ole-Christoffer Granmo; Morten Goodwin", "journal": "IEEE Access", "ref_id": "b0", "title": "Extending the Tsetlin Machine With Integer-Weighted Clauses for Increased Interpretability", "year": "2021" }, { "authors": "Christian D Blakely; Ole-Christoffer Granmo", "journal": "Springer", "ref_id": "b1", "title": "Closed-Form Expressions for Global and Local Interpretation of Tsetlin Machines", "year": "2021" }, { "authors": "Sondre Glimsdal; Ole-Christoffer Granmo", "journal": "", "ref_id": "b2", "title": "Coalesced multi-output tsetlin machines with clause sharing", "year": "2021" }, { "authors": "Ole-Christoffer Granmo", "journal": "", "ref_id": "b3", "title": "The Tsetlin Machine -A Game Theoretic Bandit Driven Approach to Optimal Pattern Recognition with Propositional Logic", "year": "2018" }, { "authors": "Jessica Kasza; Patty Solomon", "journal": "", "ref_id": "b4", "title": "A comparison of scorebased methods for estimating bayesian networks using the kullback-leibler divergence", "year": "2011" }, { "authors": "B J Oommen", "journal": "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)", "ref_id": "b5", "title": "Stochastic searching on the line and its applications to parameter learning in nonlinear optimization", "year": "1997" }, { "authors": "Marco Scutari", "journal": "", "ref_id": "b6", "title": "Learning bayesian networks with the bnlearn r package", "year": "2010" }, { "authors": "Marco Scutari", "journal": "", "ref_id": "b7", "title": "Bayesian network constraint-based structure learning algorithms: Parallel and optimised implementations in the bnlearn r package", "year": "2015" }, { "authors": "Jivitesh Sharma; Rohan Yadav; Ole-Christoffer Granmo; Lei Jiao", "journal": "", "ref_id": "b8", "title": "Drop Clause: Enhancing Performance, Interpretability and Robustness of the Tsetlin Machine", "year": "2021-05" }, { "authors": "Harsh Singh", "journal": "", "ref_id": "b9", "title": "Supply chain analysis -kaggle", "year": "2023" }, { "authors": "S V N Vishwanathan; Karsten M Borgwardt; Imre Risi Kondor; Nicol N Schraudolph", "journal": "", "ref_id": "b10", "title": "Graph kernels", "year": "2008" } ]
[ { "formula_coordinates": [ 3, 103.04, 157.62, 196.98, 21.69 ], "formula_id": "formula_0", "formula_text": "g[f i ] ← n c |w n [c]| l k ∈S f i |l k ∈I n c (1)" }, { "formula_coordinates": [ 4, 54.72, 311.37, 232.35, 34.01 ], "formula_id": "formula_1", "formula_text": "f → Z + for each f i ∈ F 3: for each variable f i ∈ F do 4:" }, { "formula_coordinates": [ 4, 109.3, 578.07, 69.71, 9.65 ], "formula_id": "formula_2", "formula_text": "S i := S i + g[f i ]." }, { "formula_coordinates": [ 5, 48.96, 537.07, 163.5, 69.77 ], "formula_id": "formula_3", "formula_text": "Algorithm 3 Cycle detection 1: Φ X , e ∈ E 2: for each node n i ∈ Φ do 3: if cycle of length K exists then 4: φ → {n i , n i,1 , . . . , n i,K } 5:" } ]
10.2514/1.I010973
2023-05-17
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Motivation", "publication_ref": [ "b0", "b2", "b3", "b4", "b5", "b6", "b7" ], "table_ref": [], "text": "According to projections, the number of air vehicles operating in urban areas will experience a significant increase in the next two decades [1]- [3]. One major part of this forecasted traffic surge is from electric vertical take-off and landing (eVTOL) cargo and passenger air taxis in Urban Air Mobility (UAM) operations. The current Air Traffic Control (ATC) system is heavily human-based, which is not expected to support the emerging high-density urban air traffic operations [4]. Automation tools and autonomous agents to manage the urban airspace and UAM traffic are required. Autonomous ATC was proposed in 2005 with the introduction of the NASA Advanced Airspace Concept (AAC) [5]. This rule-based autonomous ATC tool was further developed and validated over the following 10 years to augment human ATC, increase traffic capacity and enhance operation safety [6], [7].\nSpecifically for UAM, the US Federal Aviation Administration (FAA) and National Aeronautics and Space Administration (NASA) proposed concepts for Unmanned Aircraft System (UAS) Traffic Management (UTM) in recent years [8]- [11]. From these proposals, one of the most challenging requirements for an autonomous ATC system is to mitigate conflicts in high-density traffic flows. This can be achieved through a combination of strategic conflict management, which is used to resolve predicted conflicts prior to departure by adding a ground delay or rescheduling another flight route, and tactical deconfliction, which focuses on real-time decision making for airborne aircraft separation through maneuver advisories like speed or heading changes.\nVarious autonomous conflict management systems have been developed, but one persistent challenge in the integration of such systems is to ensure the advisories are coordinated to achieve the desired safety level. If this is not the case, the strategic and tactical deconfliction methods may affect each other's results and introduce new risks. To address this issue, we propose an integrated conflict management framework (ICMF) that combines both strategic conflict management and tactical deconfliction. By implementing this comprehensive autonomous system in air traffic management (ATM) for UAM, we seek to guarantee safety levels within target values, while also optimizing traffic efficiency." }, { "figure_ref": [], "heading": "B. Related Work", "publication_ref": [ "b10", "b11", "b12", "b13", "b14", "b16", "b17", "b17", "b21", "b20" ], "table_ref": [], "text": "Strategic conflict management involves strategic decisions like ground delays made by air traffic managers to balance traffic demand with airspace capacity at bottlenecks, e.g. airport runways, merging points, and air route intersections. For traditional ATM, such an approach has been designed effectively and has shown measurable improvements for airlines in the National Airspace System (NAS). For example, Traffic Management Initiatives (TMI) such as the Ground Delay Program (GDP), Airspace Flow Program (AFP), and the Collaborative Trajectory Options Program (CTOP) are tools used by air traffic flow managers to balance demand with capacity in congested regions [12]. These programs have resulted in reduced delays and cancellations for airlines operating in the NAS, while also improving safety levels by reducing the number of aircraft in the airspace and preventing potential conflicts. However, strategic conflict management for UAM is still a challenge because of the high-density traffic and high-population areas over which that traffic operates [13]. Therefore, further research is required to study and analyze the effectiveness of strategic conflict management in the UAM setting, specifically with the integration of tactical deconfliction technologies.\nThe field of aircraft separation assurance has seen the introduction of many advanced methods, as highlighted by recent studies [14]. One such approach involves using Markov Decision Processes (MDP) to formulate the separation assurance problem by incorporating a probabilistic model that can handle uncertainties encountered during flight [15]. Offline MDP-based methods are useful for strategic deconfliction, while online MDP-based methods are more suitable for tactical deconfliction [16]- [18]. However, offline methods can become intractable if uncertainty occurs en route since the policy is designed ahead of time, and it is challenging for online methods to solve the problem efficiently [19]. To address these challenges, researchers have turned to deep reinforcement learning (DRL) for separation assurance problems [19]- [23]. For instance, the deep distributed multi-agent variable (D2MAV-A) framework incorporates an attention network and employs a modified Proximal Policy Optimization (PPO) algorithm to solve complex sequential decision-making problems with a variable number of agents [22]. Nevertheless, a key concern with DRL is its generalization ability -if the density of traffic flow exceeds the training environment, the DRL agent may provide erroneous advisories and lead to an aircraft conflict, or even a near mid-air collision. Thus, preconditioning air traffic to proper density levels using strategic conflict management is essential for DRL to ensure safe separation." }, { "figure_ref": [], "heading": "C. Contributions and Structure", "publication_ref": [], "table_ref": [], "text": "The major contributions of this paper are summarized as follows:\n1) An integrated conflict management framework for UAM. This new framework is a coordination between strategic conflict management and tactical deconfliction. Through our analysis, we demonstrate that by utilizing strategic conflict management methods, we can ensure a reliable foundation for effective tactical deconfliction for UAM. These complementary approaches work together to enhance the safety and efficiency of the UAM system. 2) Game theory to improve MARL convergence rate.\nThis paper focuses on analyzing the potential safety threats posed by multiple aircraft operating in close proximity, such as when two aircraft merge together. Specifically, we investigate the instability and convergence issues that arise when training a multi-agent reinforcement learning (MARL) model. Through our analysis, we identify the reasons behind the model's instability and introduce a new policy to mitigate this issue using game theory. Our numerical results demonstrate a significant improvement over the previous model, highlighting the effectiveness of our proposed approach.\n3) A open-source UAM conflict mitigation sandbox. We have made the code base of our integrated conflict management simulation, which utilizes the BlueSky simulator, publicly available. Our code includes baseline methods and evaluation metrics, enabling users to easily assess the performance of their own strategic and tactical algorithms by replacing the existing ones. This open framework allows for continued development and testing of conflict management approaches in the context of UAM, ultimately improving the safety and efficiency of the system. 4) Revealing essential insights into the interactions between strategic conflict management and tactical deconfliction. In this paper, we demonstrate that strategic conflict management methods, such as departure separation and DCB, can effectively precondition tactical deconfliction and maintain safety metrics at nearly constant levels. In addition, tactical deconfliction methods improve traffic efficiency by permitting higher capacity near bottlenecks. However, the maneuvers employed by tactical deconfliction also result in demand uncertainty at each capacity constrained resource, which diminishes the effectiveness of DCB. In Section II, the problem formulation and system framework are described. In Section III, we described the strategic conflict management methods, including departure separation and three different approaches for DCB. In Section IV, the multi-agent reinforcement learning separation method and a baseline method for tactical deconfliction are described. In Section V, five numerical experiments are described to demonstrate the effectiveness and interactions between strategic and tactical methods. Finally, we present conclusions in Section VI." }, { "figure_ref": [], "heading": "II. PROBLEM FORMULATION", "publication_ref": [ "b11" ], "table_ref": [], "text": "This paper aims to develop a system that ensures aviation safety metrics remain below target levels while optimizing traffic efficiency. To achieve this objective, we introduce an integrated conflict management platform (ICMP) for UAM, which integrates strategic and tactical separation methods to mitigate conflicts. As previously demonstrated in [13], a combination of strategic conflict management and tactical deconfliction is an effective approach for balancing safety and efficiency." }, { "figure_ref": [ "fig_0" ], "heading": "A. Framework for Integrated Conflict Management Platform", "publication_ref": [], "table_ref": [], "text": "Figure 1 illustrates the ICMP framework, which divides the flight operation into two stages: pre-departure and airborne. The pre-departure stage utilizes strategic conflict management to determine an appropriate departure time by introducing ground delays. This paper presents one departure separation method and two demand capacity balancing algorithms to suit different scenarios, as outlined in Section III. The airborne stage employs tactical deconfliction methods to provide speed advisories for all aircraft to resolve conflicts. This includes a MARL-based separation assurance method and a rule-based separation algorithm, the latter representing a benchmark against which the performance of the other methods can be compared. These tactical deconfliction methods are described in Section IV. Additionally, strategic conflict management generates simulated flight plans for the MARL offline model training process, which is then used for online operations. " }, { "figure_ref": [], "heading": "B. Safety Metrics", "publication_ref": [ "b23" ], "table_ref": [], "text": "In this paper, four safety metrics are measured: is defined which represents a precursor to a Mid Air Collision. For crewed aviation, an NMAC is typically defined as a loss of 500 feet (152 meters) of horizontal separation and 100 feet (30 meters) of vertical separation [25]. Since we simulate operations flying at a co-altitude, we define an NMAC as a loss of 150 meters of horizontal separation, as described in Table I." }, { "figure_ref": [], "heading": "3) Estimated Number of Mid Air Collisions (MAC)", "publication_ref": [ "b24", "b25" ], "table_ref": [], "text": "per flight hour. As described in Table I, we define a Mid Air Collision as a loss of horizontal separation of 10 meters, which is representative of the wingspan or maximum horizontal dimension of a UAM aircraft. However, since actual MACs are infrequent, especially with advanced conflict management, we instead observe the number of NMACs, and use a conditional probability, P(MAC|NMAC), to estimate the probability of MAC. It's worth noting that this paper does not model the effect of collision avoidance systems such as the Airborne Collision Avoidance System X (ACAS X), which provides vertical and horizontal maneuvers to avoid mid-air collisions [26], [27]. Instead, we utilize a P(MAC) risk ratio β to compensate for the impact of airborne collision avoidance on the likelihood of a mid-air collision.\nThe ACAS X risk ratio β is defined as:\nβ = P(NMAC, with ACAS X) P(NMAC, without ACAS X) (1)\nWe estimate the number of MAC events N MAC by:\nE(N MAC ) = P(MAC|NMAC) • β • N NMAC (2)\nwhere N NMAC is the number of NMACs observed in the simulation without the implementation of ACAS X. The P(MAC|NMAC) is obtained by using Monte Carlo simulation on a scenario without any intervention. Table I presents each of the parameter values used in the estimation of the number of MAC events. " }, { "figure_ref": [], "heading": "4) Risk Ratio", "publication_ref": [], "table_ref": [], "text": "The risk ratio is calculated as the ratio of the number of estimated MACs for the non-intervention scenario and the number of estimated MACs for the other methods applying conflict management." }, { "figure_ref": [], "heading": "C. Efficiency Metrics", "publication_ref": [], "table_ref": [], "text": "We calculate three different efficiency metrics: 1) Ground delay due to strategic conflict management. If departure demand is sufficiently high that demand exceeds capacity for any constrained resources, DCB will calculate a new departure time for the aircraft that will prevent the demand from exceeding capacity. Ground delay is calculated as:\nground delay = max{0, (R f -S f )}(3)\nwhere R f is the required departure time of flight f and S f is the original scheduled departure time. 2) Airborne delay due to tactical deconfliction. For each aircraft, we estimate total flying time T f based on the distance and the aircraft cruise speed. During simulation, we implement tactical deconfliction and measure the actual flying time A f . Airborne delay is calculated as follows:\nairborne delay = max{0, (A f -T f )}(4)\n3) Number of alerts is the total number of speed-change advisories requested by the tactical deconfliction methods. Operators generally seek to minimize the number of maneuvers in the air, which use increased energy and increase workload on pilots. Hence, the number of alerts is applied as an efficiency metric." }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "III. STRATEGIC CONFLICT MANAGEMENT", "publication_ref": [ "b9" ], "table_ref": [], "text": "DCB is a mechanism that has been identified by the Federal Aviation Administration (FAA) as potentially being required to support urban air mobility (UAM) operations as the number of operations increases [10]. DCB involves defining airspace capacity and managing demand strategically to prevent demand from exceeding capacity. This can help to balance efficiency and predictability in UAM operations, particularly when operational uncertainties are high. By using DCB, it is possible to manage the demand for constrained resources, such as airspace network intersection points, in a way that helps to ensure the smooth and safe operation of UAM vehicles. Figure 2 shows a schematic diagram of the DCB algorithm. At each bottleneck (crossing or merge point), the time horizon is divided into multiple time windows, each with a fixed duration S. The capacity C of the resource defines the maximum number of flights that can fly through the resource in the same time window. The goal of DCB is to strategically control the throughput at each bottleneck by delaying operations on the ground.\nThis paper introduces two DCB algorithms with different applications. An optimization-based DCB algorithm is centralized and takes the scheduled departure time of all aircraft as input and calculates the optimal departure time required to minimize total delay in advance. In contrast, the heuristic DCB algorithm has no guarantee to minimize total departure delay but can be used to determine ground delays required to ensure that demand does not exceed capacity for unscheduled aircraft in real-time. While optimization-based DCB works well for scheduled demand, heuristic DCB is useful for inserting unscheduled demand into the calculated departure flow. Figure 3 provides an example of how DCB works across multiple resources." }, { "figure_ref": [], "heading": "A. Optimization Based Demand Capacity Balancing", "publication_ref": [ "b5", "b6" ], "table_ref": [], "text": "In this problem, we formulated the DCB problem into a mix-integer programming problem, which can solve for networks with multiple capacity constrained resources. The formulation is shown below.\nmin ω∈B + ,R∈R + d∈D f ∈F d (R d,f -S d,f )(5)\ns.t. R d,f +1 -R d,f ≥ ∆, ∀d ∈ D, f ∈ F d (6) R d,f ≥ S d,f , ∀d ∈ D, f ∈ F d (7) n∈N ω n,d,f,i = 1, ∀d ∈ D, f ∈ F d , i ∈ I d (8) (R d,f + T d,i -B n )ω n,d,f,i ≥ 0,(9)\n∀d ∈ D, f ∈ F d , i ∈ I d , n ∈ N (R d,f + T d,i -B n )ω n,d,f,i ≤ W, (10\n) ∀d ∈ D, f ∈ F d , i ∈ I d , n ∈ N d∈D f ∈F d i∈I d ,i=p ω n,d,f,i ≤ C p , (11\n)\n∀n ∈ N , p ∈ P\nIn this formulation, two decision variables are introduced: the time window identifier ω, and the required time of departure R. The objective (equation ( 5)) of the problem is to minimize the ground delay of all aircraft f ∈ F d on all routes d ∈ D. Here, R s,d is the required time of departure, and S d,f is the original scheduled departure time.\nConstraint (6) ensures that any two aircraft departing from the same vertiport have a minimum separation of ∆. Constraint (7) ensures that the required departure time is not earlier than the scheduled time. Constraints ( 8)-( 10 is the relative arrival time compared to time window n, where T d,i is the estimated flying time from d to i, B n is the start time of time window n, and W is the length of the time window (set to 200 seconds in this paper). The identifier is activated as 1 only when the relative arrival time is within the interval [0, W ], and it can only be activated once. Finally, constraint (11) ensures that the number of aircraft at each resource p ∈ P does not exceed the capacity C p of the resource. It is worth noting that resource set I d includes only the resources involved in the route starting from d, while resource set P includes all the actual capacity constrained resources in the airspace." }, { "figure_ref": [], "heading": "B. Heuristic Demand Capacity Balancing", "publication_ref": [ "b11" ], "table_ref": [], "text": "A single resource heuristic DCB algorithm is proposed in [13]." }, { "figure_ref": [], "heading": "Algorithm 1 Heuristic Demand Capacity Balancing", "publication_ref": [], "table_ref": [], "text": "Collect initial DCB window list ω Initialize start time t while t < T : BlueSky.step()\nt+ = SIMDT if received departure request from aircraft f at route r: check departure time of ahead aircraft R r,f -1 if (R r,f -R r,f -1 ) ≥ ∆: if ω.map(t + D i ) < C i for all bottlenecks: Release aircraft f ω.map(t + D i )+ = 1\nIn our paper, we improved the algorithm to support networks with multiple resources. When the system receives new demand for the resource, it first checks the departure time of the leading aircraft. If the departure separation is within the required separation ∆, the system then uses a mapping function to check the remaining volume of the corresponding window. If the demand in the window reaches any of the involved resource capacities C i , the following departure will be prevented from departing until the next window that is under the capacity limit appears. This algorithm is detailed in Algorithm 1." }, { "figure_ref": [], "heading": "IV. TACTICAL DECONFLICTION", "publication_ref": [ "b11" ], "table_ref": [], "text": "As introduced in [13], strategic deconfliction can mitigate conflicts and guarantee safe separation but at a significant cost to efficiency. To enhance safety and efficiency under uncertainty, airborne operations require tactical deconfliction, which provides maneuver advisories to resolve potential conflicts. In this paper, we introduce two tactical deconfliction methods, i.e., a learning-based method, and a rule-based method." }, { "figure_ref": [], "heading": "A. MARL Tactical Deconfliction", "publication_ref": [ "b18", "b20", "b20" ], "table_ref": [], "text": "The multi-agent reinforcement learning (MARL) algorithm to control individual aircraft in a simulated air traffic environment is originally introduced in [20] and improved in [22]. By using MARL, the algorithm can adapt to changing conditions and learn from past experience, which can help to improve the performance of the system over time. Additionally, by training all of the agents using a shared model, all of the aircraft are following the same separation policy, which can help to prevent conflicts and maintain a safe and efficient flow of traffic. Overall, this approach combines the advantages of MARL and shared model training to provide a powerful tool for aircraft tactical deconfliction.\nThe MARL model is formulated as follows based on [22]: 1) State Space: In reinforcement learning, the state space refers to the set of all possible states that an agent can encounter at a given time. In this particular study, we assume that the aircraft's state and dynamics information is fully accessible to others, like position, speed, and distance to the destination. Specifically, the state space for each agent is formulated as follows:\ns o t = {d (o) goal , v (o) θ (o) , d NMAC }(12)\nh o t (i) = {d (i) goal , v (i) , θ (i) , d (i) o }(13)\nwhere s o t represents the state of the ownship, which contains the distance to the goal d (o) goal , aircraft speed v (o) , aircraft heading θ (o) , and the NMAC boundary d NMAC . The state of the intruder is quite similar to the ownship while replacing the NMAC boundary with the distance between the ownship and the intruder d\n(i) o .\n2) Action Space: In this study, the action space is defined as the set of possible actions that an aircraft can take at each decision-making step. These actions include decreasing speed, holding the current speed, or increasing speed:\nA t = [-∆v, 0, +∆v](14)\n3) Reward Function: A reward function in reinforcement learning can provide a scalar feedback signal to an agent, indicating the desirability of the state-action pair taken by the agent in an environment. In this paper, three types of penalties are considered:\nR(s, t, a) = R(s) + R(t) + R(a)(15)\nSince maintaining separation is the primary objective in this paper, the majority of the reward function during the training process is allocated to the safety penalty term, denoted as R(s). The safety penalty is defined as follows:\nR(s) =          -1 if d (i) o < d NMAC -α + δ • d (i) o if d NMAC ≤ d (i) o ≤ d LoWC 0 otherwise (16\n) If the distance between the ownship and the intruder is within the NMAC threshold d NMAC , the agent incurs a penalty of -1. If the distance falls between the NMAC threshold and the LoWC threshold, the penalty is linearly proportional to the distance.\nThe objective of this paper is also to enhance traffic efficiency while maintaining a specified level of safety. To achieve this goal, the second component of the reward function is the flying time penalty, denoted as R(t).\nR(t) =    -1 if t > T -η otherwise (17)\nIf an aircraft exceeds its maximum flying time T and fails to reach its destination, it incurs a penalty of -1 and is removed from the simulation. Otherwise, a fixed penalty η is applied at each step and accumulated over time. This encourages the agent to avoid local optima, where all aircraft maintain minimum speed until the end of the simulation, by increasing their speeds.\nIn the real world, frequent speed changes can increase pilot workload (in the case of a piloted aircraft), with associated safety implications, and can also result in higher energy use. To mitigate these risks, we introduce the action penalty term\nR(a) R(a) =    0 if a = 0 -ψ otherwise(18)\nWhenever an aircraft changes its speed, a fixed penalty ψ is applied and accumulated over time. This penalty is intended to discourage unnecessary speed changes and encourage smoother flight paths." }, { "figure_ref": [ "fig_5" ], "heading": "B. Implementation of Game Theory", "publication_ref": [ "b18", "b20" ], "table_ref": [ "tab_1" ], "text": "To gain a comprehensive understanding of multi-agent decision-making problems and enhance the efficacy of tactical deconfliction methods, it is valuable to analyze the relationships among agents. However, solving a detailed multi-stage decision-making problem for all the aircraft from start to end becomes challenging when applying game theory. The equilibrium is hard to reach because of the computation complexity and inefficiency. To break down the problem, we focus on a one-step decision-making scenario between two merging aircraft, as illustrated in Figure 4. \n1 Speed up -1, -1 -1, -1 0, -0.01 Hold -1, -1 -1, -1 -1, -1 Slow down -0.01, 0 -1, -1 -1, -1\nThe cost matrix of two aircraft on merging trajectories can be abstracted to that shown in Table II. In this situation, only one speed decrease and one speed increase action can effectively mitigate the conflict, while any other actions may result in a significant penalty (-1). Additionally, choosing to decrease speed incurs a small additional energy cost (-0.01). In the previous work's setting [20], [22], when Aircraft 1 and Aircraft 2 identify each other as intruders, the case is a general sum game with two equilibriums ([speed up, slow down], [slow down, speed up]). This ambiguous relationship leads to difficult decision-making for both aircraft, resulting in a lower convergence rate for multi-agent reinforcement learning (MARL) training. However, if a policy is implemented where aircraft only check for leading aircraft and make decisions in order, the case can be changed to a Stackelberg game, with only one dominant equilibrium ([speed up, slow down]). This new relationship is simpler, making it easier for agents to select the correct actions. Figure 6 shows the learning curve for MARL with different intruder detection policies." }, { "figure_ref": [ "fig_6" ], "heading": "C. Rule-based Tactical Deconfliction", "publication_ref": [ "b11" ], "table_ref": [], "text": "The rule-based tactical deconfliction method relies on predefined rules to determine the actions of aircraft to avoid NMACs, which is described in Figure 5 In the case where the distance between two aircraft is closer than the NMAC threshold, the following aircraft will choose to hover or reduce speed to a minimum level to avoid a potential collision. This situation is defined as an NMAC event. If the distance between the following aircraft and the lead aircraft is lower than the low separation boundary, the following aircraft will choose to slow down. On the other hand, if the distance is larger than the high separation boundary, or if there is no leading aircraft, the following aircraft will choose to speed up.\nThe rule-based tactical deconfliction method serves as a benchmark in the study described in [13]. However, it should be noted that rule-based methods may have limitations in complex and dynamic environments, and it only provides a baseline approach for separation assurance but may require further refinement and improvement for more complex situations." }, { "figure_ref": [], "heading": "V. NUMERICAL EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Simulation Environment", "publication_ref": [ "b27" ], "table_ref": [], "text": "In this study, we use BlueSky [29] as our simulator to run a fast-time simulation. The BlueSky simulator is capable of running a large number of aircraft simulations in parallel efficiently. In addition, it is highly configurable, e.g., allowing the configuration of vertiport locations, waypoint locations, UAM routes, and aircraft performance parameters.\nTo study and evaluate the performance of the integrated conflict management system in structured airspace, we develop an evaluation scenario as shown in Figure 7, which defines capacity constrained resources as the typical bottlenecks in an airspace network. Three routes are included in the scenario:\n• N-7 → N-1 → N-2 → N-3 • N-9 → N-1 → N-2 → N-3 • M-2 → N-2 → M-4\nwhere N-1 and N-2 are two resources in this network. We implement the optimization-based DCB on both resources." }, { "figure_ref": [ "fig_8", "fig_8", "fig_8", "fig_8", "fig_8", "fig_8" ], "heading": "B. Experimental Results", "publication_ref": [ "b28", "b29" ], "table_ref": [], "text": "We conducted several numerical experiments to showcase the efficacy of our proposed integrated conflict management framework. Specifically, we first demonstrate the learning curve of the MARL training process on different capacities, which is used to determine the proper traffic density to obtain the best MARL model. Next, we determine the maximum capacity for the rule-based tactical method and MARL deconfliction model, as well as the capacity without any tactical deconfliction as a reference. Once we have the proper capacity value for DCB, we use those determined parameters and the trained model to compare the performance of different algorithm combinations using six metrics. Finally, we analyze the speed curve of various tactical deconfliction methods to gain insights into the reasons for differences in performance.\nTo ensure a fair comparison between the rule-based and MARL methods, we set the observation range to 1500m for both methods. The decision-making of the ownship aircraft would be affected by the intruders who are within this observation range.\n1) Learning Curve for Different Capacities: The ultimate goal of the MARL model is to reduce penalties and determine the best policy for a given environment. However, if the traffic density is too high, or if aircraft do not have sufficient initial separation, it can be challenging for the MARL model to search for the optimal policy. In fact, high traffic density may lead the model to an unexpected local optimal policy, such as forcing all aircraft to airborne holding to avoid conflicts or even colliding to avoid further penalty steps. Therefore, it is essential to have a DCB layer as a precondition for MARL training.\nTo required departure time, origin, destination, waypoints, cruise speed, and cruise altitude. To avoid overfitting, we generated 100 different flight schedule tables and place them into a scenario pool. During training, the MARL model randomly selected a flight schedule table at each episode to improve its generalization performance. An episode is defined as a simulation round that fully executes the flight schedule table, starting from the first aircraft departure and ending with the final aircraft landing.\nThe training process consisted of a total of 150,000 episodes and was performed on two Nvidia RTX 3090 graphics cards. The model updated its weight every 30 episodes, and the simulation was executed in parallel with the support of the Ray python package [30]. The entire training process took roughly 4 hours.\nFigure 8 depicts the learning curve on capacities of 6, 8, 10, and 30 operations per 200 seconds window, the latter of which corresponds to the case without DCB. The figure indicates that as the capacity increases, the MARL model faces greater difficulty in reaching the optimal policy. For instance, for a capacity of 6 operations per 200 seconds window, the model converges after 30,000 episodes, while for a capacity of 8 operations per 200s window, it continues searching for up to 120,000 episodes. Furthermore, the figure clearly illustrates the different components of the reward function described in Section IV-A. In Figure 8a, LoWC and NMAC events are infrequent, and the only cost incurred is the step penalty, which is introduced from the actual flying time and is unavoidable. In contrast, in Figure 8b and Figure 8c, the occurrence of NMAC is rare, while LoWC is more significant. Additionally, the speed change penalty is higher than in Figure 8a since the agent requires more maneuvers to avoid collisions. Figure 8d shows how MARL attempts to mitigate conflicts with no preconditioning by DCB. The primary component is the NMAC penalty, which implies a failure policy.\nAfter careful consideration, we selected the best model trained with a capacity of 10 operations per 200s window for the subsequent experiments. This is because we want a model that will seek to prevent NMACs and this is the highest capacity that results in very few NMACs. In this paper, we do not seek to minimize LoWC events. It is noted that a MARL model trained on a highly constrained scenario generally performs well on a scenario that is not highly constrained, but the reverse may not hold.\n2) Performance with Different Capacities: After showing the feasibility of MARL, the next challenge is to determine the maximum capacity that each tactical deconfliction method can support, while meeting a Target Level of Safety (TLS). To address this issue, we employed Monte Carlo simulations and evaluated system performance across a range of capacities from 1 to 11 operations per 200s window. Each capacity was applied for 30 simulation runs and the average value of the estimated MAC was recorded in each case. In order to observe the efficacy of DCB on different capacities, the original traffic demand was set up at a high level, where the average demand interval is 30 seconds on each route. To select the appropriate capacity, we compared the average estimated MAC against a TLS of 0.94 MAC per 100,000 flight hours, in accordance with the United States Department of Transportation's proposed TLS for General Aviation aircraft in 2023 [31].\nTable III displays the average estimated MACs for different capacities. As the capacity increases, the estimated MACs also increase for all three tactical methods, indicating that DCB can function effectively to precondition for tactical deconfliction. The table also reveals that, at any capacity level, the performance of the MARL model is superior to that of the rule-based approach. Based on the predefined TLS, we selected a capacity of 4 operations per 200s window for the system with the rule-based tactical method and a capacity of 7 operations per 200s window for the system with the MARL tactical method. This indicates that the MARL method is able to meet the TLS at a higher demand than the rulebased method. Furthermore, if the system lacks any tactical deconfliction method, only a capacity of 1 operation per 200s window is viable. This is effectively strategic deconfliction since only 1 operation is released into each time window." }, { "figure_ref": [ "fig_9", "fig_9", "fig_9" ], "heading": "3) Model Comparison:", "publication_ref": [], "table_ref": [], "text": "In experiments 1 and 2 we successfully trained an effective MARL model for tactical deconfliction and established the maximum capacities of various tactical methods for strategic conflict management. In the experiment described here, we integrated these two components and conducted a comprehensive analysis, comparing different algorithm combinations using the six metrics outlined in section II. To evaluate the impact of different traffic demand levels, we tested each method under high, medium, and low traffic demand levels, corresponding to average departure intervals of 30, 60, and 120 seconds on each route, respectively. To ensure accuracy and eliminate the effects of randomness, we ran each experiment setting 30 times and reported the average values for each metric.\nThe final results are presented in Table IV. They lead us to draw several important conclusions.\n• DCB is essential for safe separation. By incorporating a suitable maximum capacity for DCB, we were able to mitigate conflicts and maintain estimated MACs under the TLS. The first three rows in Table IV do not apply DCB. The first represents no tactical deconfliction, the second the rule-based tactical deconfliction method, and the third the MARL tactical deconfliction method, all applied without preconditioning by DCB to reduce the demand on the tactical systems to levels that would allow them to meet the TLS. Hence we do not expect the estimated MAC per 100,000 flight hours to meet the TLS in these cases. The last three rows correspond to the same tactical deconfliction methods, but with the DCB applied to precondition the traffic demand to a level that will allow the tactical deconfliction method to meet the TLS. It is evident that DCB plays a crucial role in eliminating conflicts and ensuring safety. • DCB can help save energy by reducing fuel consumption and emissions. When traffic demand is high, DCB can lower the number of alerts and shorten flying time, which improves the efficiency metrics. However, to implement DCB, aircraft are delayed on the ground, with the length of the delay depending on the traffic demand and maximum capacity applied. It's worth noting that ground delay is not unique to DCB and exists in all three non-DCB methods as well. This is because the basic departure separation method used for tactical deconfliction in all cases also causes some small ground delays. • Advanced tactical deconfliction methods, such as MARL, can increase system capacity and increase efficiency accordingly. MARL combined with DCB has similar safety metrics to the rule-based method with DCB and DCB with no tactical deconfliction, and all of these methods could guarantee safe separation. However, as the maximum capacity of each resource decreases, ground delay significantly increases. Thus, MARL is the most efficient method simulated because it allows for a higher airspace capacity, which ultimately leads to a decrease in ground delay. • The performance of the rule-based tactical deconfliction method without DCB is worse than the no-intervention case. When the traffic density is too high, the risk ratio can be greater than 1, indicating that the rule-based method can lead to a higher risk of collisions than if no intervention is made at all (i.e., induce airspace risk). The rationale behind this assertion is based on the potential for aircraft to experience blockages en route in the absence of DCB regulation. In scenarios where DCB is not imple-mented, aircraft may reach their minimum speed, leaving them with limited options to avoid collisions. While it is possible to execute other rule-based tactical maneuvers to prevent blockages, our paper does not model them for the sake of simplicity. This observation highlights the necessity of using DCB in such scenarios, which can help reduce the risk of collisions and improve overall efficiency. 4) Speed Curve Analysis: Given the differences observed between the MARL and rule-based methods for tactical deconfliction in the previous experiments, we sought to investigate the factors contributing to these differences. To do so, we recorded and plotted the speed curves of the simulated aircraft, as shown in Figure 9. To facilitate readability, we selected eight aircraft uniformly from the total of 30 aircraft simulated.\nWe observed that the rule-based method for tactical deconfliction resulted in aircraft changing speed dramatically from maximum to minimum, often with rapid acceleration and deceleration. In contrast, the MARL tactical deconfliction method provides speed advisories considering a longer-term view. For instance, for aircraft D533 (the brown curve in Figure 9), the MARL method advised holding at a relatively lower speed range for a period, helping the aircraft avoid slowing down to the minimum speed recommended by the rule-based method. This adjustment allowed the aircraft to arrive earlier than the rule-based method suggested. We also observed speed oscillations in the rule-based separation method, as illustrated by aircraft D118 (the orange curve in Figure 9). This occurred because the aircraft was in a situation where the distance to the leading aircraft was exactly on the boundary of the threshold for speed-up and slow-down.\nIn summary, the MARL tactical deconfliction method provides more optimal speed advisories compared to the rulebased method, allowing aircraft to arrive earlier and avoid rapid acceleration and deceleration, which may lead to more efficient and stable flight operations." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Our approach demonstrated promising results in reducing the number of conflicts and improving the efficiency of UAM operations at scale. The integrated conflict management framework, which combines strategic conflict management and tactical deconfliction methods, offers a comprehensive solution to address some of the challenges in high-density UAM operations. Our research showed that the optimization-based multiple resource demand capacity balancing algorithm plays a crucial role in preconditioning for tactical deconfliction. The successful implementation of game theory also improved the performance of the tactical deconfliction model, saving computational resources and making it possible to apply the system in the real world. In addition, the Monte-Carlo simulation we used to study the interactions between the strategic and tactical safety assurance methods provided valuable insights that can contribute to the development of more effective and efficient UAM systems in the future.\nOne of the next steps in this research is to thoroughly investigate and understand the interplay between strategic and tactical conflict management methods. Currently, strategic conflict management computes the optimal departure time based on a deterministic estimated flying time based on known operations. However, tactical deconfliction within the system may introduce speed changes that can affect the estimated time of arrival (ETA) at resources. As airspace networks become more complex, these time differences can accumulate and result in reduced effectiveness of the preconditioning by strategic conflict management systems. Therefore, formulating the ETA stochastically by considering the method of tactical deconfliction could increase the system's robustness in com-plex networks." }, { "figure_ref": [], "heading": "ACKNOWLEDGEMENTS", "publication_ref": [], "table_ref": [], "text": "Authors Brittain and Wei are partially supported by the NASA Grant 80NSSC21M0087 under the NASA System-Wide Safety (SWS) program." } ]
Urban air mobility (UAM) has the potential to revolutionize our daily transportation, offering rapid and efficient deliveries of passengers and cargo between dedicated locations within and around the urban environment. Before the commercialization and adoption of this emerging transportation mode, however, aviation safety must be guaranteed, i.e., all the aircraft have to be safely separated by strategic and tactical deconfliction. Reinforcement learning has demonstrated effectiveness in the tactical deconfliction of en route commercial air traffic in simulation. However, its performance is found to be dependent on the traffic density. In this project, we propose a novel framework that combines demand capacity balancing (DCB) for strategic conflict management and reinforcement learning for tactical separation. By using DCB to precondition traffic to proper density levels, we show that reinforcement learning can achieve much better performance for tactical safety separation. Our results also indicate that this DCB preconditioning can allow target levels of safety to be met that are otherwise impossible. In addition, combining strategic DCB with reinforcement learning for tactical separation can meet these safety levels while achieving greater operational efficiency than alternative solutions.
Integrated Conflict Management for UAM with Strategic Demand Capacity Balancing and Learning-based Tactical Deconfliction
[ { "figure_caption": "Fig. 1 .1Fig. 1. The framework of integrated conflict management platform.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "1 ) 2 )12Number of Loss of Well Clear (LoWC) events per flight hour. A LoWC event is defined as a loss of horizontal separation between any aircraft, and the range is set as 500 meters in this paper under the recommendation of [24]. Number of Near Mid Air Collisions (NMAC) per flight hour. Since Mid Air Collisions (MACs) between aircraft are rare, a Near Mid Air Collision (NMAC)", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Diagram of DCB. For the given bottleneck with a capacity of 3 operations every time window of 200 seconds, the aircraft's estimated arrival time falls within a fully occupied time window. To ensure the aircraft's arrival at the next available time window, the operation is assigned a ground delay.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. Example of how DCB can be applied across multiple resources. The blue bars show the original demand across different time windows, and the orange bars show the optimized traffic demand. The blue and orange dots are the exact departure times of the modeled operations. (a) Traffic demand on resource 1. (b) Traffic demand on resource 2.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Different intruder detection policies.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Description of rule-based tactical deconfliction.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .Fig. 7 .67Fig. 6. The learning curve of MARL with different intruder detection policies. (a)Detect all intruders nearby; (b) Only detect forward intruders.", "figure_data": "", "figure_id": "fig_7", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. MARL learning curve for different capacities. (a) Capacity=6 operations per 200s window. (b) Capacity=8 operations per 200s window. (c) Capacity=10 operations per 200s window. (d) Without DCB.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. The comparison of speed curves of aircraft with the rule-based tactical method and the MARL methods. The y-axis in each plot represents the aircraft's actual speed in knots, while the x-axis is the simulation time in seconds. Each line represents an aircraft.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "COST TABLEAircraft 2Speed upHoldSlow downAircraft", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Shulu Chen; Antony Evans; Marc Brittain; Peng Wei
[ { "authors": "K Balakrishnan; J Polastre; J Mooberry; R Golding; P Sachs", "journal": "", "ref_id": "b0", "title": "Blueprint for the sky: The roadmap for the safe integration of autonomous aircraft", "year": "2018" }, { "authors": "D Jenkins; B Vasigh; C Oster; T Larsen", "journal": "", "ref_id": "b1", "title": "Forecast of the commercial UAS package delivery market", "year": "2017" }, { "authors": "B A Hamilton", "journal": "", "ref_id": "b2", "title": "Urban air mobility market study", "year": "2018" }, { "authors": "N R Council", "journal": "National Academies Press", "ref_id": "b3", "title": "Autonomy research for civil aviation: toward a new era of flight", "year": "2014" }, { "authors": "H Erzberger", "journal": "Tech. Rep", "ref_id": "b4", "title": "Transforming the nas: The next generation air traffic control system", "year": "2004" }, { "authors": "H Erzberger; K Heere", "journal": "Proceedings of the Institution of Mechanical Engineers, Part G: Journal of Aerospace Engineering", "ref_id": "b5", "title": "Algorithm and operational concept for resolving short-range conflicts", "year": "2010" }, { "authors": "H Erzberger; E Itoh", "journal": "Tech. Rep", "ref_id": "b6", "title": "Design principles and algorithms for air traffic arrival scheduling", "year": "2014" }, { "authors": "P Kopardekar; J Rios; T Prevot; M Johnson; J Jung; J E Robinson", "journal": "AIAA", "ref_id": "b7", "title": "Unmanned aircraft system traffic management (utm) concept of operations", "year": "2016" }, { "authors": "D P Thipphavong; R Apaza; B Barmore; V Battiste; B Burian; Q Dao; M Feary; S Go; K H Goodrich; J Homola", "journal": "", "ref_id": "b8", "title": "Urban air mobility airspace integration concepts and considerations", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b9", "title": "Urban Air Mobility", "year": "" }, { "authors": "G Zhu", "journal": "", "ref_id": "b10", "title": "Decision making under uncertainties for air traffic flow management", "year": "2019" }, { "authors": "S Chen; P Wei; A D Evans; M Egorov", "journal": "", "ref_id": "b11", "title": "Estimating airspace resource capacity for advanced air mobility operations", "year": "2022" }, { "authors": "P Razzaghi; A Tabrizian; W Guo; S Chen; A Taye; E Thompson; A Bregeon; A Baheri; P Wei", "journal": "", "ref_id": "b12", "title": "A survey on reinforcement learning in aviation applications", "year": "2022" }, { "authors": "J P Chryssanthacopoulos; M J Kochenderfer", "journal": "Journal of Guidance, Control, and Dynamics", "ref_id": "b13", "title": "Accounting for state uncertainty in collision avoidance", "year": "2011" }, { "authors": "H Y Ong; M J Kochenderfer", "journal": "Journal of Guidance, Control, and Dynamics", "ref_id": "b14", "title": "Markov decision process-based distributed conflict resolution for drone air traffic management", "year": "2017" }, { "authors": "J Bertram; P Wei", "journal": "", "ref_id": "b15", "title": "Distributed computational guidance for highdensity urban air mobility with cooperative and non-cooperative collision avoidance", "year": "2020" }, { "authors": "A G Taye; J Bertram; C Fan; P Wei", "journal": "", "ref_id": "b16", "title": "Reachability based online safety verification for high-density urban air mobility trajectory planning", "year": "2022" }, { "authors": "M Brittain; P Wei", "journal": "IEEE Transactions on Automation Science and Engineering", "ref_id": "b17", "title": "Scalable autonomous separation assurance with heterogeneous multi-agent reinforcement learning", "year": "2022" }, { "authors": "M Brittain; P Wei", "journal": "", "ref_id": "b18", "title": "Autonomous separation assurance in an highdensity en route sector: A deep multi-agent reinforcement learning approach", "year": "2019" }, { "authors": "M W Brittain; P Wei", "journal": "", "ref_id": "b19", "title": "One to any: Distributed conflict resolution with deep multi-agent reinforcement learning and long short-term memory", "year": "2021" }, { "authors": "M W Brittain; X Yang; P Wei", "journal": "Journal of Aerospace Information Systems", "ref_id": "b20", "title": "Autonomous separation assurance with deep multi-agent reinforcement learning", "year": "2021" }, { "authors": "W Guo; M Brittain; P Wei", "journal": "IEEE", "ref_id": "b21", "title": "Safety enhancement for deep reinforcement learning in autonomous separation assurance", "year": "2021" }, { "authors": "A Weinert; S Campbell; A Vela; D Schuldt; J Kurucar", "journal": "Journal of air transportation", "ref_id": "b22", "title": "Wellclear recommendation for small unmanned aircraft systems based on unmitigated collision risk", "year": "2018" }, { "authors": "A Weinert; L Alvarez; M Owen; B Zintak", "journal": "", "ref_id": "b23", "title": "A quantitatively derived nmac analog for smaller unmanned aircraft systems based on unmitigated collision risk", "year": "2020" }, { "authors": "M P Owen; A Panken; R Moss; L Alvarez; C Leeper", "journal": "IEEE", "ref_id": "b24", "title": "Acas xu: Integrated collision avoidance and detect and avoid capability for uas", "year": "2019" }, { "authors": "L E Alvarez; I Jessen; M P Owen; J Silbermann; P Wood", "journal": "IEEE", "ref_id": "b25", "title": "Acas sxu: Robust decentralized detect and avoid for small unmanned aircraft systems", "year": "2019" }, { "authors": "S M Katz; L E Alvarez; M Owen; S Wu; M W Brittain; A Das; M J Kochenderfer", "journal": "", "ref_id": "b26", "title": "Collision risk and operational impact of speed change advisories as aircraft collision avoidance maneuvers", "year": "2022" }, { "authors": "J M Hoekstra; J Ellerbroek", "journal": "FAA/Eurocontrol USA/Europe", "ref_id": "b27", "title": "Bluesky atc simulator project: an open data and open source approach", "year": "2016" }, { "authors": "P Moritz; R Nishihara; S Wang; A Tumanov; R Liaw; E Liang; M Elibol; Z Yang; W Paul; M I Jordan; I Stoica", "journal": "", "ref_id": "b28", "title": "Ray: A Distributed Framework for Emerging AI Applications", "year": "2017" }, { "authors": "", "journal": "", "ref_id": "b29", "title": "Dot progress in aviation safety: FY 2022", "year": "2022-05" } ]
[ { "formula_coordinates": [ 3, 378.67, 356.42, 184.37, 22.55 ], "formula_id": "formula_0", "formula_text": "β = P(NMAC, with ACAS X) P(NMAC, without ACAS X) (1)" }, { "formula_coordinates": [ 3, 364.52, 421.47, 198.51, 9.81 ], "formula_id": "formula_1", "formula_text": "E(N MAC ) = P(MAC|NMAC) • β • N NMAC (2)" }, { "formula_coordinates": [ 4, 112.41, 174.83, 187.62, 9.65 ], "formula_id": "formula_2", "formula_text": "ground delay = max{0, (R f -S f )}(3)" }, { "formula_coordinates": [ 4, 110.11, 299.82, 189.91, 9.65 ], "formula_id": "formula_3", "formula_text": "airborne delay = max{0, (A f -T f )}(4)" }, { "formula_coordinates": [ 4, 318.33, 403.91, 244.7, 20.75 ], "formula_id": "formula_4", "formula_text": "min ω∈B + ,R∈R + d∈D f ∈F d (R d,f -S d,f )(5)" }, { "formula_coordinates": [ 4, 345.3, 428.73, 217.73, 69.08 ], "formula_id": "formula_5", "formula_text": "s.t. R d,f +1 -R d,f ≥ ∆, ∀d ∈ D, f ∈ F d (6) R d,f ≥ S d,f , ∀d ∈ D, f ∈ F d (7) n∈N ω n,d,f,i = 1, ∀d ∈ D, f ∈ F d , i ∈ I d (8) (R d,f + T d,i -B n )ω n,d,f,i ≥ 0,(9)" }, { "formula_coordinates": [ 4, 366.89, 501.62, 192, 26.67 ], "formula_id": "formula_6", "formula_text": "∀d ∈ D, f ∈ F d , i ∈ I d , n ∈ N (R d,f + T d,i -B n )ω n,d,f,i ≤ W, (10" }, { "formula_coordinates": [ 4, 368.55, 518.95, 194.48, 53 ], "formula_id": "formula_7", "formula_text": ") ∀d ∈ D, f ∈ F d , i ∈ I d , n ∈ N d∈D f ∈F d i∈I d ,i=p ω n,d,f,i ≤ C p , (11" }, { "formula_coordinates": [ 4, 558.89, 551.52, 4.15, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 5, 76.36, 554.98, 220.4, 81.38 ], "formula_id": "formula_9", "formula_text": "t+ = SIMDT if received departure request from aircraft f at route r: check departure time of ahead aircraft R r,f -1 if (R r,f -R r,f -1 ) ≥ ∆: if ω.map(t + D i ) < C i for all bottlenecks: Release aircraft f ω.map(t + D i )+ = 1" }, { "formula_coordinates": [ 5, 379.13, 713.83, 183.91, 14.3 ], "formula_id": "formula_10", "formula_text": "s o t = {d (o) goal , v (o) θ (o) , d NMAC }(12)" }, { "formula_coordinates": [ 5, 376.27, 731.58, 186.76, 14.3 ], "formula_id": "formula_11", "formula_text": "h o t (i) = {d (i) goal , v (i) , θ (i) , d (i) o }(13)" }, { "formula_coordinates": [ 6, 104.84, 117.05, 12.03, 12.46 ], "formula_id": "formula_12", "formula_text": "(i) o ." }, { "formula_coordinates": [ 6, 131.44, 188.53, 168.58, 9.65 ], "formula_id": "formula_13", "formula_text": "A t = [-∆v, 0, +∆v](14)" }, { "formula_coordinates": [ 6, 106.93, 276.79, 193.09, 8.96 ], "formula_id": "formula_14", "formula_text": "R(s, t, a) = R(s) + R(t) + R(a)(15)" }, { "formula_coordinates": [ 6, 62.13, 350.03, 233.74, 63.14 ], "formula_id": "formula_15", "formula_text": "R(s) =          -1 if d (i) o < d NMAC -α + δ • d (i) o if d NMAC ≤ d (i) o ≤ d LoWC 0 otherwise (16" }, { "formula_coordinates": [ 6, 116.78, 529.13, 183.25, 33.28 ], "formula_id": "formula_16", "formula_text": "R(t) =    -1 if t > T -η otherwise (17)" }, { "formula_coordinates": [ 6, 48.96, 705.32, 251.06, 43.55 ], "formula_id": "formula_17", "formula_text": "R(a) R(a) =    0 if a = 0 -ψ otherwise(18)" }, { "formula_coordinates": [ 6, 345.97, 569.95, 163.55, 33.81 ], "formula_id": "formula_18", "formula_text": "1 Speed up -1, -1 -1, -1 0, -0.01 Hold -1, -1 -1, -1 -1, -1 Slow down -0.01, 0 -1, -1 -1, -1" }, { "formula_coordinates": [ 7, 321.94, 167.39, 122.84, 32.87 ], "formula_id": "formula_19", "formula_text": "• N-7 → N-1 → N-2 → N-3 • N-9 → N-1 → N-2 → N-3 • M-2 → N-2 → M-4" } ]
10.18653/v1/W15-3001
2023-06-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b4", "b2", "b3", "b0", "b22", "b4", "b2", "b3", "b0", "b4", "b2", "b4", "b2", "b0" ], "table_ref": [], "text": "Automatic postediting (APE) is an automated process to transform a given machine translation (MT) into a higher-quality text (Knight and Chander, 1994). Since 2015, Conference on Machine Translation (WMT) has been hosting an annual shared task for APE, and most of the recently developed APE systems are within the common framework of representation learning using artificial neural networks to learn postediting patterns from the training data (Chatterjee et al., 2018(Chatterjee et al., , 2019(Chatterjee et al., , 2020;;Akhbardeh et al., 2021).\nSince 2018, all participants in the shared task have used Transformer-based models (Vaswani et al., 2017), but recent findings of the shared task (Chatterjee et al., 2018(Chatterjee et al., , 2019(Chatterjee et al., , 2020;;Akhbardeh et al., 2021) cast doubt on whether Transformer-based APE models learn good generalizations because such models' APE quality appears to be significantly affected by external factors such as the source-target language pair, the qualitative characteristics of the provided data, and the quality of the given MT.\nEspecially, the good quality of the given MTs has brought great difficulty in performing APE on the WMT 2019 test data set: the better the given MT is, the harder it is to decide what parts to edit and how to correct these errors (Chatterjee et al., 2018(Chatterjee et al., , 2019)). The thing to notice is that this outcome is not a question of data scarcity because the language pair of this test data set, English-German, is a language pair provided with abundant training, validation, and test data. Also, it is not a question of data heterogeneity, either: the domain of this test data set, IT, shows a high degree of lexical repetition, which indicates that data sets in this domain use the same small set of lexical items (Chatterjee et al., 2018(Chatterjee et al., , 2019;;Akhbardeh et al., 2021). Thus, it would be a question of modeling, and one possible solution is to implant deeper knowledge about the target language into the model.\nTo this end, we propose a new method of regularization that is expected to enhance Transformerbased APE models' understanding of German translations. Specifically, the proposed method is based on Feldermodell ( §2), an established linguistic model, which implies the need for proper treatment of the underlying symmetry of German sentence structures. To instill the idea of syntactic symmetry into Transformer-based APE models, we introduce a loss function that encourages symmetric self-attention on the given MT. Based on experimental results, we conduct a careful analysis and conclude that the proposed method has a positive effect on improving the state-of-the-art architecture's APE quality for high-quality MTs." }, { "figure_ref": [], "heading": "Linguistic Theory", "publication_ref": [ "b15", "b23", "b6", "b23", "b6", "b20", "b11" ], "table_ref": [], "text": "In German linguistics, das topologische Satzmodell ('the topological sentence model') or das Feldermodell ('the field model') (Reis, 1980;Wöllstein, 2018;Höhle, 2019) describes how constituents of a sentence are closely related even if they are far apart from each other. Usually, Feldermodell divides a clause into das Vorfeld ('the prefield'; VF), die linke Satzklammer ('the left bracket'; LSK), das Mittelfeld ('the middlefield'; MF), die rechte Satzklammer ('the right bracket'; RSK), and das Nachfeld ('the postfield'; NF).\n(1)\n[ Heute VF ] [ habe LSK ] [ ich MF ] [ gesehen RSK ] [ zufällig NF ], (2) [ [ dass LSK ] [ du eine Tasse Kaffee MF ] [ getrunken hast RSK ] NF ].\nThese parts are all interrelated; LSK and RSK are a typical example: while the former holds a finite verb or a complementizer, the latter holds a past participle, an infinitive, and a particle. In (1), VF holds \"Heute\" ('today'); LSK holds \"habe\" ('have'); MF holds \"ich\" ('I'); RSK holds \"gesehen\" ('seen'); and NF holds \"zufällig\" ('by chance'). ( 2) is an additional NF of (1) and includes its own LSK holding \"dass\" ('that'); MF holding \"du eine Tasse Kaffee\" ('you a cup of coffee'); and RSK holding \"getrunken hast\" ('drank').\nFor such analyses, special tree structures such as Doppelbaum (Wöllstein, 2018) ('double tree') can be used, which is a bimodal tree (Fig. 1), where two CP, C, IP, I, and VP subtrees are 'symmetric' with respect to V. We assume that this structural symmetry is parameterized from the perspective, not only of generative linguistics (Wöllstein, 2018;Höhle, 2019), but also of a parametric model P = {P θ | θ ∈ Θ}, where P θ and Θ are a probability distribution and the parameter space, respectively.\nEspecially, if we look at APE in terms of sequence-to-sequence learning (Sutskever et al., 2014), the probability distribution of the output sequence (y 1 , ⋯, y Ly ) is obtained in the following manner:\nP θ (y 1 , ⋯, y Ly | x 1 , ⋯, x Lx , z 1 , ⋯, z Lz ) = Ly ∏ t=1 P θ (y t | u, v, y 1 , ⋯, y t-1 ),\nwhere u and v are the representations of a source text (x 1 , ⋯, x Lx ) and its MT (z 1 , ⋯, z Lz ), respectively. In this process, we presume that the syntactic symmetry of the target language affects the resulting distribution P θ ; in other words, this syntactic symmetry would be an inductive bias (Mitchell, 1980) that should be handled properly." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b17" ], "table_ref": [], "text": "We implement a multi-encoder Transformer model consisting of the \"Joint-Final\" encoder and the \"Parallel\" decoder, which is a state-of-the-art architecture for APE (Shin et al., 2021), and conduct a controlled experiment without concern for usage of performance-centered tuning techniques. Specifically, the Joint-Final encoder consists of a sourcetext encoder and an MT encoder, which process the given source text and MT, respectively. Based on this baseline architecture, we propose a method to encourage the MT encoder to perform symmetric self-attention by minimizing the skewness of each self-attention layer's categorical distribution p self .\nThe used measure of skewness is\n(μ 3 ) i = ⎛ ⎜ ⎝ ⌊ Lz 2 ⌋ ∑ j=1 p self [i, j] - Lz ∑ j=⌈ Lz 2 ⌉+1 p self [i, j] ⎞ ⎟ ⎠ 2 ,\nfor each token z i in the given MT (z 1 , ⋯, z Lz ).\nAccordingly, the basic cross-entropy loss L CE is regularized by (μ 3 ) i , resulting in a new loss function\nL DOPPELBAUM = L CE + E[α]E[μ 3 ] + (1 -α),\nwhere\nE[α] = ∑ B b=1 ∑ Lz i=1 α b,i B × L z is the expected value of coefficients α b,i = σ(W T v b,i + β)\nin the given minibatch, and\nE[μ 3 ] = ∑ B b=1 ∑ N n=1 ∑ H h=1 ∑ Lz i=1 (μ 3 ) b,n,h,i B × N × H × L z\nis the expected value of (μ 3 ) b,n,h,i . In addition, (1α) is an initial inducement to utilizing μ3 . In the equations above, σ is the sigmoid function, v is the output of the final layer of the MT encoder, W ∈ R d model and β ∈ R are learned parameters, B is the number of data examples, N is the number of layers, and H is the number of heads." }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [ "b17" ], "table_ref": [ "tab_0" ], "text": "In the conducted experiment, all hyperparameters are the same as those of Shin et al. (2021) Both the baseline model and the proposed model are trained by using the training data sets and the validation data set listed in Table 1; we first train the models by using eSCAPE-NMT mixed with the WMT 2019 training data in the ratio of 27 ∶ 1, and then tune them by using the WMT 2019 training data solely." }, { "figure_ref": [ "fig_1" ], "heading": "Results and Analysis", "publication_ref": [ "b18", "b13", "b1" ], "table_ref": [ "tab_2", "tab_6", "tab_7", "tab_7" ], "text": "The result of automatic evaluation (Table 2) indicates that the proposed model improves on the baseline model in terms of BLEU (75.47) but does not in terms of TER (16.54), which is unusual. Although those measures have a strong correlation overall (Fig. 2), the proposed model has more outliers, δBLEU (the value obtained by subtracting a given MT's BLEU from the postedited result's BLEU) of which is over 20, compared to the baseline model; they must be the ones that bring the improvement in BLEU.\nThus, we present an additional evaluation result to further investigate this mismatch between TER improvements and BLEU improvements: a relative frequency distribution of successes and failures in APE with regard to the TER difference (Snover et al., 2006) and BLEU (Papineni et al., 2002), their sentence-level standard deviations (σ) are presented. In each column, the figure implying the best performance is in bold. The dagger symbols denote the proposed model's quality improvement on the given MTs is statistically significant (p ≤ 0.05). The asterisks denote the proposed model's performance improvement on the baseline model is statistically significant (p ≤ 0.05).\n- between a given MT and each model's output (Table 3). Then, the mentioned outliers correspond to PERF, which is the set of the cases where an APE system succeeds in perfectly correcting the given MT with one or more errors, considering that the proposed model's PERF has a µ δBLEU (the average of sentence-level BLEU improvements) of 27.21. We see that the proposed model has substantially more PERF cases (5.87%) than the baseline model (4.30%) and that because most of those 'new' (1.57pp) cases are results of nontrivial postediting (Table 4), this increase in the proportion of perfect postediting is valid evidence of the proposed method's effect on enhancing the baseline model's APE quality for high-quality MTs. Table 3: A relative frequency distribution containing the frequencies of the following groups (we compare the TER of the given MT and that of the postedited result.): the cases where an APE system injects errors to an already perfect MT (RUIN); both the given MT and the APE result are not perfect, but the former is better in terms of TER (DEGR); both are not perfect and have the same TER although they are different from each other (EVEN); both are not perfect, but the latter is better (IMPR); the given MT is not perfect whereas the APE result is (PERF); both are perfect (ACCE); and lastly, even though the MT is not perfect, the APE system does not change anything (NEGL).\nThe calculation of the F1 score is based on two criteria: whether the given MT is perfect or not (for recall) and whether the APE system edits the given MT or not (for precision). % is the proportion of the cases belonging to each category, µ δBLEU is the average of sentence-level BLEU improvements, and σ δBLEU is their standard deviation. In addition, in an actual example where only the proposed model corrects the given MT perfectly (Table 5), we observe that the proposed model successfully captures the close relation between the verb \"enthält\" ('contains') and its object so that the correct form \"Variablen\" ('variables') is used. Considering that the adverb phrase \"zum Beispiel\" ('for example') in the given MT makes some distance between the verb and its object, it appears that the proposed model integrates information from a wider range of constituents than the baseline model; hence the conclusion that the proposed method instills Feldermodell's idea of syntactic symmetry into Transformer-based APE models and enhances their understanding of German translations.\nAnother example (Table 6) suggests that the increase in the proportion of ACCE (0.3pp), which is the set of the cases where an APE system adopts the given, already perfect MT, should be cautiously interpreted. Although professional translators tend to perform \"only the necessary and sufficient corrections\" (Bojar et al., 2015), the validity of test data created by professional translators, including the WMT 2019 test data set, can also be disputable because other native speakers might argue that they can perform better postediting. For example, some people may consider hyphenated compound \"Zoom-Werkzeug\" ('Zoom tool') more natural than closed compound \"Zoomwerkzeug\" (Table 6).\nHowever, considering the big differences in the proportion of NEGL (2.35pp), which is the set of the cases where an APE system neglects to postedit the given MT, and the F1 score (Table 3), it appears that such a risk need not be considered in this analysis. Moreover, the proposed model has fewer RUIN cases (1.56%), where it injects errors to the given, already perfect MT, than the baseline model (1.86%). Although the proposed model has more DEGR cases (7.33%), where it degrades the given MT, than the baseline (6.65%), the proposed model's quality degradation µ δBLEU = -11.72 is less severe than that of the baseline (µ δBLEU = -13.51). Therefore, we conclude that the proposed method results in small but certain improvements." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b15", "b23", "b6", "b4", "b2", "b0" ], "table_ref": [], "text": "To improve the APE quality for high-quality MTs, we propose a linguistically motivated method of regularization that enhances Transformer-based APE models' understanding of the target language: a loss function that encourages APE models to perform symmetric self-attention on a given MT.\nExperimental results suggest that the proposed method helps improving the state-of-the-art architecture's APE quality for high-quality MTs; we also present a relative frequency distribution of successes and failures in APE and see increases in the proportion of perfect postediting and the F1 score. This evaluation method could be useful for assessing the APE quality for high-quality MTs in general. Actual cases support that the proposed method successfully instills the idea of syntactic symmetry into APE models. Future research should consider different language pairs and different sets of hyperparameters. First, neither Feldermodell (Reis, 1980;Wöllstein, 2018;Höhle, 2019) nor Doppelbaum (Wöllstein, 2018) has obtained complete concurrence among linguists. Also, we limit our scope to the English-German language pair and the IT domain using the WMT 2019 training, validation, and test data sets. A broader scope would not provide confidence in the validity of conducted experiments because there are hardly any standard setups for experimental research (Chatterjee et al., 2018(Chatterjee et al., , 2019;;Akhbardeh et al., 2021)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [ "b5", "b4", "b2", "b3", "b0", "b17" ], "table_ref": [], "text": "In addition, the conducted experiment should take into consideration the effect of randomness that is attended in the process of training artificial neural networks; different techniques, different hyperparameters, and multiple runs of optimizers (Clark et al., 2011) may present different results. However, as previous studies (Chatterjee et al., 2018(Chatterjee et al., , 2019(Chatterjee et al., , 2020;;Akhbardeh et al., 2021), including the study on the baseline model (Shin et al., 2021), do not consider the effect of randomness, we also do not investigate the effect of randomness further, considering that training multiple models (Appendix A) to obtain good estimators (TER and BLEU) will cost a lot." }, { "figure_ref": [], "heading": "A Experimental Details", "publication_ref": [ "b19", "b16", "b14", "b24", "b8" ], "table_ref": [], "text": "We use the following hyperparameters: the number of layers N = 6, the number of heads H = 8, the dimension of key vectors d k = 64, the dimension of value vectors d v = 64, the vector dimension for multi-head attention layers d model = 512, the vector dimension for the inner layer of position-wise feedforward networks d ff = 2,048, the dropout (Srivastava et al., 2014) probability P drop = 0.1, the label smoothing value ϵ LS = 0.1, minibatches of 25,000 tokens, a learning rate of 2.0, warmup for 18,000 training steps, and a shared vocabulary consisting of 32,000 subword units (Sennrich et al., 2016) 1 . We also use weight tying (Pappas et al., 2018) and the Adam optimizer (Kingma and Ba, 2015) with β 1 = 0.9, β 2 = 0.998, and ϵ = 10 -8 . Decoding options are beam search with a beam size b = 5, a length penalty multiplied by a strength coefficient α = 0.6, and beam search stopping (Yang et al., 2018) with the length ratio lr = 1.3.\nWe use OpenNMT-py 3.0 (Klein et al., 2017) 2 with the random seed 1128. We first train the models for 100,000 steps, about 36 hours on one NVIDIA GeForce RTX ™ 3090, and then tune them around 1,000 steps." } ]
Automatic postediting (APE) is an automated process to refine a given machine translation (MT). Recent findings present that existing APE systems are not good at handling highquality MTs even for a language pair with abundant data resources, English-German: the better the given MT is, the harder it is to decide what parts to edit and how to fix these errors. One possible solution to this problem is to instill deeper knowledge about the target language into the model. Thus, we propose a linguistically motivated method of regularization that is expected to enhance APE models' understanding of the target language: a loss function that encourages symmetric self-attention on the given MT. Our analysis of experimental results demonstrates that the proposed method helps improving the state-of-the-art architecture's APE quality for high-quality MTs.
Bring More Attention to Syntactic Symmetry for Automatic Postediting of High-Quality Machine Translations
[ { "figure_caption": "Figure 1: A depiction of Doppelbaum ( §2).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The relationship between models' sentencelevel TER improvements (-δTER; positive values denote decrease in TER) and sentence-level BLEU improvements (δBLEU; positive values denote increase in BLEU) on those of the given MTs in the test data set.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "except", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The results of automatic evaluation on the WMT 2019 test data set. Baseline is the abovementioned baseline model ( §3), and DOPPELBAUM is the proposed model. Beside TER", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "A case where only the proposed model corrects the given MT perfectly. Considering the manually postedited result, wrong words in the given MT, the APE result of the baseline model, and that of the proposed model are highlighted in pink while correct words are highlighted in green. All the texts are tokenized or detokenized using Moses(Koehn et al., 2007).", "figure_data": "CASE 2: ACCETER ↓BLEU ↑Source TextDouble-click the Zoom tool .Given MTDoppelklicken Sie auf das Zoomwerkzeug .0.00100.00BaselineDoppelklicken Sie auf das Zoom-Werkzeug .16.6753.73DOPPELBAUMDoppelklicken Sie auf das Zoomwerkzeug .0.00100.00Manual PosteditingDoppelklicken Sie auf das Zoomwerkzeug .", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "A case where only the proposed model adopts the given, already perfect MT. Details are the same as in Table5.", "figure_data": "", "figure_id": "tab_7", "figure_label": "6", "figure_type": "table" } ]
Baikjin Jung; Myungji Lee; Jong-Hyeok Lee; ♢♡ Yunsu Kim
[ { "authors": "Farhad Akhbardeh; Arkady Arkhangorodsky; Magdalena Biesialska; Ondřej Bojar; Rajen Chatterjee; Vishrav Chaudhary; Marta R ; Markus Freitag; Yvette Graham; Roman Grundkiewicz; Barry Haddow; Leonie Harter; Kenneth Heafield; Christopher Homan; Matthias Huck; Kwabena Amponsah-Kaakyire; Jungo Kasai; Daniel Khashabi; Kevin Knight; Tom Kocmi; Philipp Koehn; Nicholas Lourie; Christof Monz; Makoto Morishita; Masaaki Nagata; Ajay Nagesh; Toshiaki Nakazawa; Matteo Negri; Santanu Pal; Auguste Allahsera; Marco Tapo; Valentin Turchi; Marcos Vydrin; Zampieri", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Findings of the 2021 Conference on Machine Translation (WMT21)", "year": "2021" }, { "authors": "Ondřej Bojar; Rajen Chatterjee; Christian Federmann; Barry Haddow; Matthias Huck; Chris Hokamp; Philipp Koehn; Varvara Logacheva; Christof Monz; Matteo Negri; Matt Post; Carolina Scarton; Lucia Specia; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Findings of the 2015 Workshop on Statistical Machine Translation", "year": "2015" }, { "authors": "Rajen Chatterjee; Christian Federmann; Matteo Negri; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Findings of the WMT 2019 Shared Task on Automatic Post-Editing", "year": "2019" }, { "authors": "Rajen Chatterjee; Markus Freitag; Matteo Negri; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Findings of the WMT 2020 Shared Task on Automatic Post-Editing", "year": "2020" }, { "authors": "Rajen Chatterjee; Matteo Negri; Raphael Rubino; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Findings of the WMT 2018 Shared Task on Automatic Post-Editing", "year": "2018" }, { "authors": "Jonathan H Clark; Chris Dyer; Alon Lavie; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability", "year": "2011" }, { "authors": "N Tilman; Höhle", "journal": "Language Science Press", "ref_id": "b6", "title": "Topologische Felder", "year": "2019" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b7", "title": "Adam: A Method for Stochastic Optimization", "year": "2015-05-07" }, { "authors": "Guillaume Klein; Yoon Kim; Yuntian Deng; Jean Senellart; Alexander Rush", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation", "year": "2017" }, { "authors": "Kevin Knight; Ishwar Chander", "journal": "", "ref_id": "b9", "title": "Automated Postediting of Documents", "year": "1994" }, { "authors": "Philipp Koehn; Hieu Hoang; Alexandra Birch; Chris Callison-Burch; Marcello Federico; Nicola Bertoldi; Brooke Cowan; Wade Shen; Christine Moran; Richard Zens; Chris Dyer; Ondřej Bojar; Alexandra Constantin; Evan Herbst", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Moses: Open Source Toolkit for Statistical Machine Translation", "year": "2007" }, { "authors": "Tom M Mitchell", "journal": "", "ref_id": "b11", "title": "The Need for Biases in Learning Generalizations", "year": "1980" }, { "authors": "Matteo Negri; Marco Turchi; Rajen Chatterjee; Nicola Bertoldi", "journal": "European Language Resources Association (ELRA", "ref_id": "b12", "title": "eSCAPE: a Large-scale Synthetic Corpus for Automatic Post-Editing", "year": "2018" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "BLEU: a Method for Automatic Evaluation of Machine Translation", "year": "2002" }, { "authors": "Nikolaos Pappas; Lesly Miculicich; James Henderson", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Beyond Weight Tying: Learning Joint Input-Output Embeddings for Neural Machine Translation", "year": "2018" }, { "authors": "Marga Reis", "journal": "Documentation et Recherche en Linguistique Allemande Vincennes", "ref_id": "b15", "title": "On Justifying Topological Frames : 'Positional Field' and the Order of Nonverbal Constituents in German 0", "year": "1980" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Neural Machine Translation of Rare Words with Subword Units", "year": "2016" }, { "authors": "Jaehun Shin; Wonkee Lee; Byung-Hyun Go; Baikjin Jung; Youngkil Kim; Jong-Hyeok Lee", "journal": "ACM Transactions on Asian and Low-Resource Language Information Processing", "ref_id": "b17", "title": "Exploration of Effective Attention Strategies for Neural Automatic Post-Editing with Transformer", "year": "2021" }, { "authors": "Matthew Snover; Bonnie Dorr; Rich Schwartz; Linnea Micciulla; John Makhoul", "journal": "Association for Machine Translation in the Americas", "ref_id": "b18", "title": "A Study of Translation Edit Rate with Targeted Human Annotation", "year": "2006" }, { "authors": "Nitish Srivastava; Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov", "journal": "Journal of Machine Learning Research", "ref_id": "b19", "title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting", "year": "2014" }, { "authors": "Ilya Sutskever; Oriol Vinyals; Quoc V Le", "journal": "", "ref_id": "b20", "title": "Sequence to Sequence Learning with Neural Networks", "year": "2014" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b21", "title": "", "year": "" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Attention is All you Need", "year": "2017" }, { "authors": "Inc Angelika Curran Associates; Wöllstein", "journal": "Stauffenburg", "ref_id": "b23", "title": "Topologisches Satzmodell", "year": "2018" }, { "authors": "Yilin Yang; Liang Huang; Mingbo Ma", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Breaking the Beam Search Curse: A Study of (Re-)Scoring Methods and Stopping Criteria for Neural Machine Translation", "year": "2018" } ]
[ { "formula_coordinates": [ 2, 81.78, 384.03, 207.36, 70.4 ], "formula_id": "formula_0", "formula_text": "[ Heute VF ] [ habe LSK ] [ ich MF ] [ gesehen RSK ] [ zufällig NF ], (2) [ [ dass LSK ] [ du eine Tasse Kaffee MF ] [ getrunken hast RSK ] NF ]." }, { "formula_coordinates": [ 2, 331.55, 123.91, 167.45, 47.23 ], "formula_id": "formula_1", "formula_text": "P θ (y 1 , ⋯, y Ly | x 1 , ⋯, x Lx , z 1 , ⋯, z Lz ) = Ly ∏ t=1 P θ (y t | u, v, y 1 , ⋯, y t-1 )," }, { "formula_coordinates": [ 2, 312.64, 500.18, 205.27, 42.44 ], "formula_id": "formula_2", "formula_text": "(μ 3 ) i = ⎛ ⎜ ⎝ ⌊ Lz 2 ⌋ ∑ j=1 p self [i, j] - Lz ∑ j=⌈ Lz 2 ⌉+1 p self [i, j] ⎞ ⎟ ⎠ 2 ," }, { "formula_coordinates": [ 2, 318.66, 616.64, 193.23, 13.05 ], "formula_id": "formula_3", "formula_text": "L DOPPELBAUM = L CE + E[α]E[μ 3 ] + (1 -α)," }, { "formula_coordinates": [ 2, 306.14, 648.72, 157.05, 69.55 ], "formula_id": "formula_4", "formula_text": "E[α] = ∑ B b=1 ∑ Lz i=1 α b,i B × L z is the expected value of coefficients α b,i = σ(W T v b,i + β)" }, { "formula_coordinates": [ 2, 325.15, 748.91, 178.56, 30.94 ], "formula_id": "formula_5", "formula_text": "E[μ 3 ] = ∑ B b=1 ∑ N n=1 ∑ H h=1 ∑ Lz i=1 (μ 3 ) b,n,h,i B × N × H × L z" } ]
10.1145/10.1145/3575813.3597345.
2023-05-17
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b17", "b5", "b12", "b36", "b41", "b6", "b12", "b15", "b17", "b34", "b12", "b18", "b35", "b12", "b1", "b16", "b30", "b31", "b40", "b42", "b2", "b33", "b33", "b39", "b20", "b25", "b17" ], "table_ref": [], "text": "Precise electricity load forecasts are key elements for planning and operating electrical power systems [18]. As utilities rely on such forecasts to purchase or generate electricity [6], the forecasting performance has a direct impact on their decision quality. With increasingly volatile electricity production and demand, electric load forecasting has recently become more difficult. Reasons include, among others, a more decentralized electricity generation, additional loads through sector coupling (heat pumps, electric vehicles, etc.) [13], changing behavioral patterns caused by the COVID-19 pandemic [37,42], and the war in Ukraine.\nRecent review studies on short-term electricity load forecasting [7,13,16,18,35] note that many research works examine the prediction of load at the level of complete energy systems (e.g., country or grid level). Another frequently investigated level is the household demand, given that smart meter data is increasingly available [13,19,36]. Yet, only a few studies have investigated intermediate levels in the low-voltage (distribution) grid for short-term load forecasting [13]. Within this forecasting domain, hierarchical forecasting turns out to be a promising approach, as it can model the topological distribution of load across the grid [2,17].\nWhile many approaches have been applied for short-term load forecasting, the most effective forecasting models currently employ variants of deep Artificial Neural Network (ANN) algorithms, such as Long-Term Short-Term Memorys (LSTMs) [31,32,41,43]. LSTMs can handle sequences of data points well, yet, they have difficulties with learning patterns in long time series [3]. To remedy this drawback, Vaswani et al. [34] presented a new architecture called Transformer, which outperforms LSTMs in several sequential modeling problems like natural language processing, text generation, and machine translation [34].\nResearch has started to examine the Transformer approach for short-term electricity load forecasting [40] and also applied the Temporal Fusion Transformer (TFT), a Transformer variant for time series data, to short-term [21] and mid-term [26] time horizons with promising results. However, the studies on Transformers and TFTs currently investigate selective aspects and focus on suggesting new algorithmic variants rather than thoroughly testing existing approaches for various problem facets. Particularly, the use of benchmark datasets and the examination of forecasts on various grid levels are missing, although both are known limitations in the field of energy forecasting [18].\nOur study addresses this research gap by conducting several experiments on the performance of the TFT in hourly electricity forecasting on the distribution grid. We vary time horizons (day-ahead and week-ahead), data sources (electricity consumption, calendar data, weather data, epidemic data), and network levels (grid and substation level). Before we present our evaluation approach in section 4 and analyze the results in section 5, we review current time series forecasting methods and related works in the electricity forecasting field." }, { "figure_ref": [], "heading": "BACKGROUND", "publication_ref": [ "b13", "b39", "b23", "b5", "b5", "b17", "b42", "b9", "b40", "b0", "b9", "b33", "b27", "b25", "b12", "b17", "b19", "b26", "b28", "b38", "b39", "b12", "b22", "b7", "b19", "b20", "b25", "b28", "b37", "b39", "b38", "b26", "b12", "b17" ], "table_ref": [ "tab_0" ], "text": "Starting with the first studies on short-term load forecasting in the 1960s [14,40], scholars have conducted intensive research within this field. Such research includes conventional statistical approaches (e.g., linear regression and Auto-Regressive Integrated Moving Average (ARIMA) models), but also Machine Learning (ML) methods such as fuzzy logic [24], and random forest [6]. In recent years, similar to other applications of forecasting, (deep) ANN approaches have gained prominence in the field of electricity load forecasting [6,18]. Particularly LSTMs have proven to be a robust forecasting approach in several variations, as shown by studies based on data obtained from Scotland [43], Malaysia [10], the U.S. [41], and Great Britain, France, and Germany [1,10].\nRecent studies examined the Transformer architecture [34] for load forecasting. The TFT [28] in particular holds significant potential to boost the predictive performance, as it overcomes known limitations of both, the Transformer and the LSTM architecture for time series forecasting. For the application field of short-term load forecasting, we found1 several recent studies (see Table 1) that evaluate Transformers and TFTs (and variants of them) with diverse sets of parameters and data. All of these studies indicate that the Transformer and TFT approaches outperform other methods for short-term load forecasting. However, we identify three issues that need further investigation.\nFirst, almost all studies that we found propose a slightly different version of the Transformer or TFT architecture and test them with a single dataset (only one study [26] uses a second, publicly available dataset to demonstrate external validity). Hence, it remains unclear to what extent the reported performance results are generalizable or dataset-specific (an aspect that is also criticized in several review studies on short-term load forecasting [13,18]). A comparison across different datasets would be helpful, although this requires significant effort and computational resources.\nSecond, several studies that we identified are quite selective regarding the input variables they consider. Some use electricity load data only [20,27,29,39,40], although the inclusion of exogenous variables such as weather or calendar data are known to improve forecast quality and should therefore be included [13,23]. A detailed analysis of the performance of the TFT with different exogenous variables would benefit the assessment of the architecture's potential for load forecasting. Third, the majority of studies focus on a single forecasting unit, e.g., the load of the whole grid [8,20,21,26,29,38,40], a single household [39] or a heating system [27]. This observation is also echoed in comprehensive review studies on short-term load forecasting [13], and energy forecasting [18]. Yet, forecasting on secondary levels of the distribution grid (e.g., substations or grid zones) is beneficial for grid operation and planning and has the potential to boost predictive performance, as hierarchical forecasts can model the topological distribution of load across the grid.\nOur study addresses the outlined research gaps by using the TFT architecture for short-term load forecasting (day-ahead and week-ahead) on the grid and substation level while considering an acknowledged benchmark dataset." }, { "figure_ref": [ "fig_0" ], "heading": "FORECASTING METHOD FOR SHORT-TERM LOAD FORECASTING", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "We instantiate the TFT architecture to forecast the hourly electricity load based on past electricity consumption data and a variety of further data sources, as we illustrate in Figure 1. Thereby, our particular focus lies on the variation of the grid-level of the forecast (b). More specifically, we examine the TFT performance in the distribution grid using a single time series on the first grid-level and using the aggregation of multiple time series from the secondary grid-level (substation-level). The latter makes use of the TFT's and LSTM's capability to forecast several target variables for future time steps at the same time (also known as multi-horizon time series forecasting). Feature extraction. To obtain a multidimensional forecast, we consider four different data sources: Electric load, calendar, weather, and epidemic data. Table 2 provides an overview of all features employed, their value ranges, and their use in our study. We list further details on the data preparation in appendix A.2." }, { "figure_ref": [], "heading": "ANN model architectures.", "publication_ref": [ "b14", "b16" ], "table_ref": [ "tab_1" ], "text": "Our forecasting approach provides load forecasts on an hourly level (i.e., the next 24 hours day-ahead and the next 168 hours week-ahead). All features are connected with the TFT using a separate Variable Selection Network (VSN) per input type. Weights are shared among VSN for past known, future known, and static variables, respectively. Table 2 lists these variable types. For benchmarking the predictive performance of the TFT, we use a linear ARIMA estimator and an LSTM architecture. Our analysis uses the Python package darts [15].\nGrid-level forecast. We vary the levels for which we obtain forecasts. First, by considering the complete grid, which is a single time series of demand data. Second, by obtaining a substation-level forecast, which considers multiple time series for training and forecasts in each time step to predict demand data for each substation. To obtain a more precise forecast on the grid-level, we aggregate all substation-level forecasts-an approach that the literature describes as hierarchical load forecasting [17]." }, { "figure_ref": [], "heading": "PERFORMANCE EVALUATION", "publication_ref": [ "b16", "b32", "b3", "b4", "b28", "b23" ], "table_ref": [], "text": "We rely on two datasets to evaluate the performance of the TFTbased forecasting approach. The first stems from a local grid operator located in central Germany (𝐷𝐸) and covers a recent time frame (2019-2021). The second is a validation dataset, which is publicly available and stems from the Global Energy Forecasting Competition 2012 (GEFC'12) [17] (𝑈 𝑆). It comprises data from 20 grid zones in the U.S., which we consider as substations. The detailed processing of both datasets is described in appendix A.2.\nFor model training, we choose a time-wise 80/20 train-test split. For the 𝐷𝐸 dataset, the training set spans from Jan 1st, 2019, to May 23rd, 2021. The test set comprises the period from May 24th, 2021, until Dec 31st, 2021. For the day-ahead forecast, we choose all complete days (0h-23h), and for the week-ahead forecast, all complete weeks (Mo-So) in the test set. In total, we rely on 219 days and 28 weeks for the evaluation using the 𝐷𝐸 test set data. The training set of the 𝑈 𝑆 dataset spans from Jan 1st 2004 to March 14th 2007. The test set consists of the remaining data until December 31st, 2007. For the evaluation of the 𝑈 𝑆 dataset, the day-ahead test set consists of 291 complete days, and the week-ahead evaluation of 38 full weeks. We normalize all input features for both datasets to ensure unbiased model training [33].\nWe performed a random hyperparameter search [4] for those parameters for which we could not obtain meaningful values through reasoning. For the TFT, the parameters are: Number of neurons in the hidden layer, the number of LSTM layers, the number of attention heads, the dropout value, the batch size, and the size of the input window. For this purpose, we conducted a hyperparameter search using sweeps from the \"Weights & Biases\" platform [5]. The configurations for each sweep are based on the parameter bandwidth suggested in [29]. In addition, we varied the input window size 𝑘 across the day-ahead forecast with 𝑘 ∈ [24,48,72,168,336,672] and for the week-ahead forecast with 𝑘 ∈ [168, 336, 504, 672]. We list the final parameter configurations of the best-performing models for each task in appendix (Table 4).\nTo evaluate the TFT, LSTM, and ARIMA models, we compare the predicted values 𝑝 𝑡 with the actual demand values 𝑦 𝑡 for each time step 𝑡 = 1, . . . , 𝑁 and assess the forecasting performance using the Root Mean Square Error (RMSE) as absolute and Mean Absolute Percentage Error (MAPE) and Symmetric Mean Absolute Percentage Error (SMAPE) as relative error metrics, which find regular use in earlier studies." }, { "figure_ref": [], "heading": "RESULTS", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Within the scope of this paper, we present and discuss the main results of our analysis. Thereby, we analyze the predictive performance of the TFT against acARIMA and LSTM and compare the performance of our approach with earlier studies. A detailed list of the performance results of our approach can be found in Table 5 as part of the appendix." }, { "figure_ref": [], "heading": "Baseline comparison", "publication_ref": [ "b20", "b37", "b28" ], "table_ref": [ "tab_3" ], "text": "For the analysis of the models, we focus on SMAPE as it allows for a comparison across datasets and grid-levels while expressing the relative performance results in relation to the actual and forecasted value. Similar to earlier studies, the ARIMA models display a relatively high error, which can be attributed to their limited capacity to generalize over long time series. Consequently, the LSTM and TFT clearly outperform the (more simplistic) statistical approach.\nOverall, we obtain lower errors for the TFT than the LSTM models, yet, not for all configurations. We observe a clear superiority of the TFT with a larger forecasting horizon (week-ahead). We attribute this result to the stronger capability of the TFT architecture to learn patterns over longer time intervals. The TFT also performs better than the LSTM for a forecast on the substation level. For dayahead forecasts and single time series forecasts, the LSTM approach is still a reasonable alternative.\nWith demand and calendar features (configuration II in Table 5), the TFT has an average performance of 3.98 MAPE, which is similar to what [21] and [38] report in their studies, although we have a simpler data processing without applying linear regression to the input features to estimate trends. As these studies do not provide pure TFT and LSTM estimates and only partially use public datasets, an appropriate comparison is not feasible. Lim et al. [29], who propose the TFT approach, conclude that the TFT results in a lower error than other approaches for time series forecasting. Using RMSE, MAPE, and SMAPE to review the forecasting error, we cannot confirm this result for the day-ahead, but for the week-ahead forecast. Other works applying the TFT to electricity forecasting do not report relative error metrics, which makes a reasonable comparison of the results unfeasible." }, { "figure_ref": [], "heading": "Forecasts on various grid-levels", "publication_ref": [], "table_ref": [], "text": "The results show that the TFT's hierarchical forecast outperforms single time series forecasting regarding predictive performance on the grid-level: We observe a statistically significant difference for both approaches regarding the day-ahead t(363.58) = 13.90, p < .001, d = 1.41 (with MAPE 2.43%) and week-ahead t(38.27) = 6.56, p < .001, d = 1.82 (with MAPE 2.52%) forecasts using the DE dataset. We validate this result by performing the same tests on the US dataset, where we also find a significant difference with a large effect size for both approaches (day-ahead t(550.99) = 10.60, p < .001, d = 0.89, and week-ahead t(71.14) = 3.84, p < .001, d = 0.88). Hence, we conclude that the hierarchical forecast approach outperforms single time series forecasting on the grid-level. Additionally, the results display performance improvements for the LSTM architecture when the load is predicted and aggregated at the substation level-however, this observation does not hold for all predictive cases of this study." }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [ "b12", "b17", "b12", "b12", "b15", "b5", "b17", "b24", "b28", "b11", "b21" ], "table_ref": [], "text": "Practical contribution. Our analysis demonstrated the potential of the TFT approach compared to a state-of-the-art approach (LSTM) and a simple estimator (ARIMA). In addition, we empirically illustrated the benefits that can be obtained through hierarchical load forecasting in the electric grid (reflecting the call for such analyses from recent review studies [13,18]).\nWhile we observed that the TFT approach is on par with (or only slightly better than) the LSTM approach on a day-ahead horizon, the TFT clearly outperformed the LSTM on a week-ahead horizon. Conversely, the TFT seems to be more costly to train regarding the computational effort because it has significantly more parameters. Therefore, practitioners need to balance a trade-off between more accurate methods in longer time spans and computational costs.\nLimitations and future work. Our study is a starting point for a more in-depth evaluation of Transformer and TFT approaches in the domain of load forecasting. In summary, we identify six areas for future research:\nFirst, we used weather observations as inputs for the forecasting period, which leads to an underestimation of the forecasting error [13]. In practice, only weather forecasts are available. Future studies should therefore include historical weather forecasts and quantify their impact on the models' forecasting quality.\nSecond, we included the Covid-19 incidence as a covariate for the TFT. However, the incidence data do not properly represent the lockdown periods. Hence, additional epidemic data might reflect time periods and their effect on the energy demand more precisely (e.g., by employing a binary feature that reflects lockdown periods).\nThird, we only compared point estimates of the forecasting models in our study. However, probabilistic forecasting is a very promising area in load forecasting [13,16]. Future studies may extend the TFT approach and assess its potential for probabilistic forecasting.\nFourth, for real-world applications, the runtime performance of the models and their explainability might be of major importance to electricity vendors. In some cases, higher explainability outweighs higher costs for training [6,18,25]. The TFT architecture contains an interpretable multi-head self-attention mechanism that enables feature importance-based explanations [29]. So far, this functionality has not been studied for the case at hand, although explainable ML offers detailed insights on model forecasts that can benefit decision-makers [12].\nFifth, our analysis has shown the potential of predictions on more granular network levels employing a subsequent aggregation. Future work should make use of increasingly available smart meter data to obtain household level predictions and their aggregations to enhance the forecasting performance.\nSixth, we integrated empirical load data mostly as is in our analysis. The body of forecasting literature has suggested several meaningful data preprocessing steps that improved the performance of less complex forecasting models, such as taking into account typical daily or weekly load profiles [22]. Considering that varying existing algorithms often result in only small changes in predictive performance, we encourage future research to focus on an in-depth evaluation of existing methods, more advanced feature engineering, and the evaluation of real-world problems with (multiple) benchmark datasets." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "Current developments related to more volatile electricity production and demand challenge the management of the electric grid. Thus, precise load forecasts become more and more important. Recent forecasting literature has proposed the TFT architecture, which theoretically addresses known limitations of the LSTM and Transformer approach. To date, studies on the TFT approach to short-term load forecasting have been empirically inconclusive and neglect external validity.\nOur study carries out several experiments using the TFT architecture and multiple datasets. The results show that the TFT architecture does not outperform a LSTM model for day-ahead forecasting for the entire grid. Yet, we find that the predictive performance of the TFT is higher when applied at the substation level in conjunction with a subsequent aggregation to upper grid-levels.\nOur investigation opens avenues for future research on the TFT approach for short-term load forecasting. In particular, we would like to motivate other scholars to conduct further experiments, specifically with respect to different network levels of forecasting (e.g., grid, substation, household) and the explainability of the models used." }, { "figure_ref": [], "heading": "A DATA PROCESSING A.1 Datasets", "publication_ref": [ "b16", "b16", "b10" ], "table_ref": [], "text": "The two real datasets that we select stem from two geographic regions, differ in the number of substations, the magnitude of load connected, and the timespans of the data. The severe differences in the datasets should ensure the external validity of our results.\nUS dataset. The dataset stems from the Global Energy Forecasting Competition 2012 (GEFC'12) [17] and comprises data from 20 grid zones in the U.S., which we consider as substations. For our analyses, we consider the years 2004-2007 of the dataset. The authors of the dataset [17] advise excluding two substations, #4 because of outages and #9 because it was covered by an industrial customer. We remove substation #9 but keep substation #4 and add the following data cleaning to all substations to handle potential outages on every substation. We identify extreme values (e.g., outages) per substation using the statistical quartiles 𝑞 0.25 and 𝑞 0.75 and remove values smaller than 𝑞 0.25 -1.5 * (𝑞 0.27 -𝑞 0.25 ), known as the inter quantile range. We calculate the quartiles per substation. Only demand values for substation #4 fall under this criterion, and we remove 52 out of 39,576 (0.0013%) data points. We replace the removed measurements using a linear interpolation [11].\nThe dataset also contains temperature data of 11 weather stations in the U.S. but the connection of the weather stations to the zones is not given (the connection of the weather stations to the grid zones was part of the GEFC'12 challenge). Hence, for our analysis, we use the average temperature of all 11 weather stations per hour.\nDE dataset. In addition to the public dataset, which does not contain detailed geographic references, we use a dataset from central Germany, which we obtained from a local distribution grid operator. The dataset consists of hourly smart meter data on the household level, covering the years 2019-2021 (36 months). Given that the first wave of the COVID-19 pandemic started in Germany in March 2020, the dataset contains 14 months of pre-pandemic electricity load and 22 months within the pandemic. In total, 9,455 households are connected to one of 70 substations in the distribution grid, where each substation serves between 8 and 447 households (M=135.07, SD=94.75).\nWe prepare the data on the level of each household and apply the following preparation steps. First, we remove households with unusually low consumption values. For this purpose, we exclude observations with a mean consumption less than 0.01 kWh or a total consumption less than 100 kWh. In total, we remove 598 households applying this criterion. Second, we harmonize time shift events (to and from daylight saving time) in spring and fall. For time shifts in the fall, where a single day has 25 hours, we exclude the extra hour. For the days with only 23 hours (i.e., time shift in spring), we linearly interpolate the missing value to harmonize the data into a 24-hour shape. Third, we remove all values from the meter readings that were labelled as \"provisional\", \"defective\", and \"incorrect\" and linearly interpolated the readings. In total, we replace 1,572,786 of such missing values out of 51,157,455,768 observations (0.0031%).\nFinally, we aggregate the data on the level of the substations and on the grid-level for our analysis. We were also provided with the geographic location data for each substation, which we leverage to connect weather and epidemic data." }, { "figure_ref": [], "heading": "A.2 Features", "publication_ref": [ "b8", "b12", "b21" ], "table_ref": [], "text": "From the calendar data, we extract the hour of the day, day of the week, day of the year, and a binary feature if a day is a national holiday or weekend day. We use the Python package python-holidays2 to obtain the local holidays. As most of the calendar features have a cyclic pattern, we encode them cyclically combining sine and cosine, following [9].\nOne of the most common input features in demand forecasts are meteorological variables, in particular, temperature data [13,22]. As weather data, we use the hourly temperature of the region obtained from the Meteostat3 Python package, which uses, for example, data from the German Meteorological Service4 . For cities and areas with no own weather station, we interpolate the temperature for the selected geographical point using the geographic reference and altitude. We assume that the temperature data is also available for the test data horizon and apply the measurements as a proxy for a weather forecast.\nFinally, we consider a data source that we have not found to be used by earlier studies, namely epidemic data. This is feasible, as one of the datasets we include covers the beginning of the COVID-19 pandemic and thus accounts for multiple lockdowns in the area of the grid. This lead us to include the officially announced number of infected people in the area as a feature. Such data is, for example, published by the German Robert Koch Institute5 ." }, { "figure_ref": [], "heading": "B HYPERPARAMETERS", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We list the value ranges for our hypoerparameter search in Table 3 and the finally used parameters in Table 4. " }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "We gratefully thank our research partner Stadtwerk Haßfurt GmbH for providing comprehensive data from their distribution grid that enabled this study. We further thank the Bavarian Ministry of Economic Affairs, Regional Development, and Energy for their financial support of the project \"DigiSWM\" (DIK-2103-0014)." } ]
Recent developments related to the energy transition pose particular challenges for distribution grids. Hence, precise load forecasts become more and more important for effective grid management. Novel modeling approaches such as the Transformer architecture, in particular the Temporal Fusion Transformer (TFT), have emerged as promising methods for time series forecasting. To date, just a handful of studies apply TFTs to electricity load forecasting problems, mostly considering only single datasets and a few covariates. Therefore, we examine the potential of the TFT architecture for hourly short-term load forecasting across different time horizons (day-ahead and week-ahead) and network levels (grid and substation level). We find that the TFT architecture does not offer higher predictive performance than a state-of-the-art LSTM model for day-ahead forecasting on the entire grid. However, the results display significant improvements for the TFT when applied at the substation level with a subsequent aggregation to the upper grid-level, resulting in a prediction error of 2.43% (MAPE) for the best-performing scenario. In addition, the TFT appears to offer remarkable improvements over the LSTM approach for week-ahead forecasting (yielding a predictive error of 2.52% (MAPE) at the lowest). We outline avenues for future research using the TFT approach for load forecasting, including the exploration of various grid levels (e.g., grid, substation, and household level).
Short-Term Electricity Load Forecasting Using the Temporal Fusion Transformer: Effect of Grid Hierarchies and Data Sources
[ { "figure_caption": "Figure 1 :1Figure 1: Simplified illustration of the approach; numbers indicate evaluation variations", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Studies examining Transformer architectures for short-term electricity load forecasting", "figure_data": "Ref.ModelPlace YearsNCal. Temp. Level[29]TFTPT2011-14 1Grid[39]Transf.+k-FR2006-10 1Household[40]Means Transf.+k-AU2006-10 1Grid[27]Means Transf.-CN-1Heat appl.[21]variant TFT+lin. reg.VN2014-21 1xxGrid[30]Transf.US2004-08 20xSubstation[20]Transf.-SP20161Grid[38]variant Transf.-AU2006-10 1xxGrid[8]variant Transf.PA2017-20 1xxGrid[26]Transf., TFT,CN2016-17 1xxGridITFTUK2004-09 1xxThisTFTUS2004-08 20 xxSubstationstudy TFTDE2019-22 70 xxSubstation", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Features with value ranges and horizon", "figure_data": "DataFeatureRangeHorizon 𝑎Mean DEMean USLoadConsumptionRP(grid)3,175.82 392,945.48(substation)46.9320,681.34CalendarHour of the day[0, 23]FDay of the week[0, 6]FDay of the year[1, 365]FHoliday/weekend[0, 1]F0.310.31WeatherTemperatureRF10.4814.46Epidemic Covid-19 incidence RP6xOtherGrid node ID 𝑏I", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Value ranges hyperparameter tuning", "figure_data": "TFT hyper-parameteratt heads[1, 4]hidden size[16, 32, 64]dropout[0.1, 0.3]batch size[32, 128]LSTM layers[1, 2, 4]LSTM hyper-parameterbatch size[50, 10, 120, 150]learning rate[0.001, 0.01, 0.1]dropout[0.1, 0.2, 0.3]LSTM layer[1, 2, 4]hidden size[64, 128, 248, 496]C DETAILED RESULTSSee our detailed evaluation results in Table 5.", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Forecasting performance", "figure_data": "day-aheadweek-aheadModelRMSEMAPESMAPERMSEMAPESMAPEI. Grid-level forecast, demand only (both datasets)ARIMA 𝐷𝐸191,376.30(±73,874.24) 116.23(±60.62) 183.82 (±22.69) 197,129.80 (±63,766.59) 115.25 (±42.65) 185.51 (±18.69)ARIMA 𝑈 𝑆251,621.73 (±675,237.99)56.99 (±161.73)64.39 (±14.66) 868,061.20 (±2,431,013) 209.04 (±602.5)80.56 (±35.12)II. Grid-level forecast, demand + calendar (both datasets)LSTM 𝐷𝐸154.79(±68.68)3.94(±1.52)4.01(±1.54)190.73(±48.37)4.94(±0.91)4.88(±0.84)TFT 𝐷𝐸151.13(±58.52)3.98(±1.25)3.95(±1.22)167.12(±54.37)4.13(±0.92)4.18(±0.96)LSTM 𝑈 𝑆32,116.58(±16,425.95)6.25(±2.76)6.35(±2.88)50,107.86 (±18,577.51)10.14(±4.27)9.79(±3.67)TFT 𝑈 𝑆30,977.48(±15,061.03)6.11(±2.73)6.19(±2.82)50,232.94 (±17,623.17)9.98(±3.23)10.01(±3.24)III. Grid-level forecast, demand + weather + calendarLSTM 𝐷𝐸137.42(±59.21)3.52(±1.21)3.54(±1.23)157.66(±39.60)4.24(±1.13)4.18(±1.05)TFT 𝐷𝐸164.53(±60.43)4.22(±1.34)4.20(±1.31)151.41(±52.1)3.88(±0.96)3.83(±0.9)LSTM 𝑈 𝑆25,531.28(±14,523.90)4.89(±2.35)4.99(±2.50)57,549.48 (±24,345.73)11.77(±5.55)11.26(±4.99)TFT 𝑈 𝑆22,825.27(±10,190.03)4.59(±1.86)4.70(±1.98)26,967.22(±9,804.02)5.22(±1.80)5.36(±1.97)IV. Hierarchical forecast, demand + weather + calendarLSTM 𝐷𝐸146.43(±67.89)3.65(±1.42)3.60(±1.33)337.97(±119.81)7.99(±1.98)7.56(±1.75)TFT 𝐷𝐸102.46(±55.09)2.55(±1.06)2.54(±1.05)102.75(±29.58)2.5(±0.49)2.52(±0.5)LSTM 𝑈 𝑆19,751.91(±9,955.10)3.88(±2.02)3.86(±1.95)32,064.27 (±10,200.25)6.23(±1.88)6.39(±2.03)TFT 𝑈 𝑆15,712.65(±8,763.61)3.04(±1.59)3.09(±1.65)18,955.79(±6,553.06)3.76(±1.42)3.81(±1.49)V. Substation forecast, consumption + weather + calendarLSTM 𝐷𝐸7.26(±5.82)16.98(±16.07)28.70 (±48.55)10.97(±8.73)27.58 (±29.90)35.17 (±47.92)TFT 𝐷𝐸4.52(±2.78)10.15(±7.52)23.38 (±49.36)4.83(±2.71)10.56(±7.65)23.91 (±49.29)LSTM 𝑈 𝑆1,518.69(±1,592.79)6.54(±3.73)6.45(±3.50)2,453.43(±2,397.44)10.04(±3.62)10.07(±3.57)TFT 𝑈 𝑆1,172.17(±1,310.05)4.81(±2.61)4.85(±2.62)1,527.51(±1,456.43)6.43(±2.63)6.38(±2.51)VI. Forecast with demand + weather + calendar + epidemic featuresTFT 𝐷𝐸 (grid-level)169.59(±63.46)4.46(±1.2)4.48(±1.2)149.39(±44.35)3.89(±0.61)3.88(±0.61)TFT 𝐷𝐸 (hierarchical)98.32(±50.49)2.43(±0.88)2.44(±0.89)100.59(±26.72)2.52(±0.47)2.51(±0.46)TFT 𝐷𝐸 (substation)4.39(±2.81)9.86(±7.68)23.12 (±49.42)4.84(±2.74)10.85(±8.05)23.99 (±49.27)", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" } ]
Felix Haag; Konstantin Hopf; Elena Giacomazzi
[ { "authors": "S N Aasim; Abheejeet Singh; Mohapatra", "journal": "Applied Soft Computing", "ref_id": "b0", "title": "Data driven day-ahead electrical load forecasting through repeated wavelet transform assisted SVM model", "year": "2021" }, { "authors": "George Athanasopoulos; Roman A Ahmed; Rob J Hyndman", "journal": "International Journal of Forecasting", "ref_id": "b1", "title": "Hierarchical forecasts for Australian domestic tourism", "year": "2009" }, { "authors": "Yoshua Bengio; Patrice Y Simard; Paolo Frasconi", "journal": "IEEE transactions on neural networks", "ref_id": "b2", "title": "Learning long-term dependencies with gradient descent is difficult", "year": "1994" }, { "authors": "James Bergstra; Yoshua Bengio", "journal": "Journal of machine learning research", "ref_id": "b3", "title": "Random search for hyper-parameter optimization", "year": "2012" }, { "authors": "Lukas Biewald", "journal": "", "ref_id": "b4", "title": "Experiment Tracking with Weights and Biases", "year": "2020" }, { "authors": "Stefan Borkovski; Stefan Petkoski; Maja Erkechova", "journal": "Innovations", "ref_id": "b5", "title": "Electricity consumption forecasting using recurrent neural network: Electrical trade market study", "year": "2019" }, { "authors": "Mathieu Bourdeau; Xiao Qiang Zhai; Elyes Nefzaoui; Xiaofeng Guo; Patrice Chatellier", "journal": "Sustainable Cities and Society", "ref_id": "b6", "title": "Modeling and forecasting building energy consumption: A review of data-driven techniques", "year": "2019-07" }, { "authors": "Yang Cao; Zhenzhen Dang; Feng Wu; Xovee Xu; Fan Zhou", "journal": "", "ref_id": "b7", "title": "Probabilistic Electricity Demand Forecasting with Transformer-Guided State Space Model", "year": "2022" }, { "authors": "Richard E Edwards; Joshua New; Lynne E Parker", "journal": "Energy and Buildings", "ref_id": "b8", "title": "Predicting future hourly residential electrical consumption: A machine learning case study", "year": "2012-06" }, { "authors": "Manar Behnam Farsi; Nizar Amayri; Ursula Bouguila; Eicker", "journal": "IEEE Access", "ref_id": "b9", "title": "On Short-Term Load Forecasting Using Machine Learning Techniques and a Novel Parallel Deep LSTM-CNN Approach", "year": "2021" }, { "authors": "Arne Groß; Antonia Lenders; Friedhelm Schwenker; Daniel A Braun; David Fischer", "journal": "Energy Informatics", "ref_id": "b10", "title": "Comparison of short-term electrical load forecasting methods for different building types", "year": "2021-09" }, { "authors": "Felix Haag; Konstantin Hopf; Pedro Menelau Vasconcelos; Thorsten Staake", "journal": "AIS electronic library", "ref_id": "b11", "title": "Augmented Cross-Selling Through Explainable AI-A Case From Energy Retailing", "year": "2022" }, { "authors": "Stephen Haben; Georgios Giasemidis; Florian Ziel; Siddharth Arora", "journal": "International Journal of Forecasting", "ref_id": "b12", "title": "Short Term Load Forecasts of Low Voltage Demand and the Effects of Weather", "year": "2019-10" }, { "authors": "D A G T Heinemann; E C Nordmian; Plant", "journal": "IEEE TRANSACTIONS ON POWER APPARATUS AND SYSTEMS", "ref_id": "b13", "title": "and Summer Loads-A Regression Analysis", "year": "1966" }, { "authors": "Julien Herzen; Francesco Lässig; Samuele Giuliano Piazzetta; Thomas Neuer; Léo Tafti; Guillaume Raille; Tomas Van Pottelbergh; Marek Pasieka; Andrzej Skrodzki; Nicolas Huguenin; Maxime Dumonal; Jan Kościsz; Dennis Bader; Frédérick Gusset; Mounir Benheddi; Camila Williamson; Michal Kosinski; Matej Petrik; Gaël Grosch", "journal": "Journal of Machine Learning Research", "ref_id": "b14", "title": "Darts: User-Friendly Modern Machine Learning for Time Series", "year": "2022" }, { "authors": "Tao Hong; Shu Fan", "journal": "International Journal of Forecasting", "ref_id": "b15", "title": "Probabilistic electric load forecasting: A tutorial review", "year": "2016-07" }, { "authors": "Tao Hong; Pierre Pinson; Shu Fan", "journal": "International Journal of Forecasting", "ref_id": "b16", "title": "Global Energy Forecasting Competition 2012", "year": "2014-04" }, { "authors": "Tao Hong; Pierre Pinson; Yi Wang; Rafał Weron; Dazhi Yang; Hamidreza Zareipour", "journal": "IEEE Open Access Journal of Power and Energy", "ref_id": "b17", "title": "Energy Forecasting: A Review and Outlook", "year": "2020-10" }, { "authors": "Konstantin Hopf", "journal": "", "ref_id": "b18", "title": "Contributions of the Faculty Information Systems and Applied Computer Sciences of the Otto-Friedrich", "year": "2019" }, { "authors": "Shichao Huang; Jing Zhang; Yu He; Xiaofan Fu; Luqin Fan; Gang Yao; Yongjun Wen", "journal": "Energies", "ref_id": "b19", "title": "Short-Term Load Forecasting Based on the CEEMDAN-Sample Entropy-BPNN-Transformer", "year": "2022-01" }, { "authors": "Canh Pham; Huy; Quoc Nguyen; Nguyen Minh; Tao Dang Tien; Quynh Thi; Anh", "journal": "IEEE Access", "ref_id": "b20", "title": "Short-Term Electricity Load Forecasting Based on Temporal Fusion Transformer Model", "year": "2022" }, { "authors": "A Boye; Axel Høverstad; Helge Tidemann; Pinar Langseth; Öztürk", "journal": "IEEE Transactions on Smart Grid", "ref_id": "b21", "title": "Short-Term Load Forecasting With Seasonal Decomposition Using Evolution for Parameter Tuning", "year": "2015-07" }, { "authors": "K Rishee; Kevin M Jain; Patricia J Smith; John E Culligan; Taylor", "journal": "Applied Energy", "ref_id": "b22", "title": "Forecasting energy consumption of multi-family residential buildings using support vector regression: Investigating the impact of temporal and spatial monitoring granularity on performance accuracy", "year": "2014-06" }, { "authors": "Ahsan Raza Khan; Anzar Mahmood; Awais Safdar; A Zafar; Syed Khan; Naveed Bilal; Khan Ahmed; Javaid", "journal": "", "ref_id": "b23", "title": "Load Forecasting and Dynamic Pricing based Energy Management in Smart Grid-A Review", "year": "2015" }, { "authors": "Jesus Lago; Fjo De Ridder; Bart De Schutter", "journal": "Applied Energy", "ref_id": "b24", "title": "Forecasting spot electricity prices: Deep learning approaches and empirical comparison of traditional algorithms", "year": "2018-07" }, { "authors": "Dan Li; Ya Tan; Yuanhang Zhang; Shuwei Miao; Shuai He", "journal": "International Journal of Electrical Power & Energy Systems", "ref_id": "b25", "title": "Probabilistic forecasting method for mid-term hourly load time series based on an improved temporal fusion transformer model", "year": "2023-03" }, { "authors": "Guangxia Li; Cheng Zhou; Ruiyu Li; Jia Liu", "journal": "Association for Computing Machinery", "ref_id": "b26", "title": "Heat load forecasting for district water-heating system using locality-enhanced transformer encoder", "year": "2022" }, { "authors": "Bryan Lim; Sercan Ö Arık; Nicolas Loeff; Tomas Pfister", "journal": "International Journal of Forecasting", "ref_id": "b27", "title": "Temporal Fusion Transformers for interpretable multi-horizon time series forecasting", "year": "2021" }, { "authors": "Bryan Lim; Sercan Ö Arık; Nicolas Loeff; Tomas Pfister", "journal": "International Journal of Forecasting", "ref_id": "b28", "title": "Temporal Fusion Transformers for interpretable multi-horizon time series forecasting", "year": "2021-10" }, { "authors": "Alexandra L' Heureux; Katarina Grolinger; Miriam A M Capretz", "journal": "Energies", "ref_id": "b29", "title": "Transformer-Based Model for Electrical Load Forecasting", "year": "2022-01" }, { "authors": "Sana Mujeeb; Nadeem Javaid; Manzoor Ilahi; Zahid Wadud; Farruh Ishmanov; Muhammad Afzal", "journal": "Sustainability", "ref_id": "b30", "title": "Deep Long Short-Term Memory: A New Price and Load Forecasting Scheme for Big Data in Smart Cities", "year": "2019-02" }, { "authors": "Md Jamal; Ahmed Shohan; Md ; Omar Faruque; Simon Y Foo", "journal": "Energies", "ref_id": "b31", "title": "Forecasting of Electric Load Using a Hybrid LSTM-Neural Prophet Model", "year": "2022-03" }, { "authors": "J Sola; J Sevilla", "journal": "IEEE Transactions on Nuclear Science", "ref_id": "b32", "title": "Importance of input data normalization for the application of neural networks to complex industrial problems", "year": "1997" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b33", "title": "Attention Is All You Need", "year": "2017-12" }, { "authors": "Frederik Vom Scheidt; Hana Medinová; Nicole Ludwig; Bent Richter; Philipp Staudt; Christof Weinhardt", "journal": "Energy and AI", "ref_id": "b34", "title": "Data analytics in the electricity sector -A quantitative and qualitative literature review", "year": "2020" }, { "authors": "Yi Wang; Qixin Chen; Tao Hong; Chongqing Kang", "journal": "", "ref_id": "b35", "title": "Review of Smart Meter Data Analytics: Applications, Methodologies, and Challenges", "year": "2019-05" }, { "authors": "Le Wen; Basil Sharp; Kiti Suomalainen; Mingyue ; Selena Sheng; Fengtao Guang", "journal": "Sustainable Energy, Grids and Networks", "ref_id": "b36", "title": "The impact of COVID-19 containment measures on changes in electricity demand", "year": "2022-03" }, { "authors": "Guangqi Zhang; Chuyuan Wei; Changfeng Jing; Yanxue Wang", "journal": "ternational Journal of Computational Intelligence Systems", "ref_id": "b37", "title": "Short-Term Electrical Load Forecasting Based on Time Augmented Transformer", "year": "2022-08" }, { "authors": "Junfeng Zhang; Hui Zhang; Song Ding; Xiaoxiong Zhang", "journal": "Frontiers in Energy Research", "ref_id": "b38", "title": "Power Consumption Predicting and Anomaly Detection Based on Transformer and K-Means", "year": "2021-10" }, { "authors": "Zezheng Zhao; Chunqiu Xia; Lian Chi; Xiaomin Chang; Wei Li; Ting Yang; Albert Y Zomaya", "journal": "Information", "ref_id": "b39", "title": "Short-Term Load Forecasting Based on the Transformer Model", "year": "2021-12" }, { "authors": "Huiting Zheng; Jiabin Yuan; Long Chen", "journal": "Energies", "ref_id": "b40", "title": "Short-term load forecasting using EMD-LSTM neural networks with a Xgboost algorithm for feature importance evaluation", "year": "2017" }, { "authors": "Haiwang Zhong; Zhenfei Tan; Yiliu He; Le Xie; Chongqing Kang", "journal": "CSEE Journal of Power and Energy Systems", "ref_id": "b41", "title": "Implications of COVID-19 for the electricity industry: A comprehensive review", "year": "2020-09" }, { "authors": "Mingzhe Zou; Duo Fang; Gareth Harrison; Sasa Djokic", "journal": "IEEE", "ref_id": "b42", "title": "Weather Based Day-Ahead and Week-Ahead Load Forecasting using Deep Recurrent Neural Network", "year": "2019" } ]
[]
10.18653/v1/P19-3004
2023-05-17
[ { "figure_ref": [ "fig_0", "fig_1", "fig_2" ], "heading": "Introduction", "publication_ref": [ "b8", "b2", "b3" ], "table_ref": [], "text": "Understanding global events is critical to understanding the world around us-whether those events consist of pandemics, political unrest, natural disasters, or cyber attacks. The breadth of events of possible interest, the speed at which surrounding socio-political event contexts evolve, and the complexities involved in generating representative annotated data all contribute to this challenge. Events are also intrinsically global: many downstream use cases for event extraction involve reporting not just in a few major languages but in a much broader context. The languages of interest for even a fixed task may still shift from day to day, e.g. when a disease emerges in an unexpected location.\nThe ISI-CLEAR (CROSS-LINGUAL EVENT & ARGUMENT RETRIEVAL) system meets these challenges by building state-of-the-art, languageagnostic event extraction models on top of massively multi-lingual language models. These event models require only English training data (not even bitext-no machine translation required) and can identify events and the relationships between them in at least a hundred different languages. Unlike more typical benchmark tasks explored for zero-shot cross-lingual transfer-e.g. named entity detection or sentence similarity, as in (Hu et al., 2020)-event extraction is a complex, structured task involving a web of relationships between elements in text.\nISI-CLEAR makes these global events available to users in two complementary ways. First, users can supply their own text in a language of their choice; the system analyzes this text in that native language and provides multiple event-centric views of the data in response. Second, we provide an interface for cross-lingual event-centric search, allowing English-speaking users to search over events automatically extracted from a corpus of non-English documents. This interface allows for both natural language queries (e.g. statements by Angela Merkel about Ukraine) or structured queries (event type = {Arrest, Protest}, location = Iraq), and builds upon our existing cross-lingual search capabilities, demonstrated in (Boschee et al., 2019).\nThe primary contributions of this effort are threefold:\n1. Strong, language-agnostic models for a complex suite of tasks, deployed in this demo on a hundred different languages and empirically tested on a representative variety of languages. 2. An event-centric user interface that presents events in intuitive text-based, graphical, or summary forms. 3. Novel integration of cross-lingual search capabilities with zero-shot cross-lingual event extraction. We provide a video demonstrating the ISI-CLEAR user interface at https://youtu.be/ PE367pyuye8. In our first mode, users are invited to supply their own text in a language of their choice. The system supports any language present in the underlying multi-lingual language model; for this demo we use XLM-RoBERTa (Conneau et al., 2020), which supports 100 languages ranging from Afrikaans to Yiddish.\nAfter submission, the system displays the results in an initial text-based format, showing the events found in each sentence (Figure 1). For a more intuitive display of the relationships between events, users can select a graphical view (Figure 2). We can easily see from this diagram that the EU is the agent of both the withdrawal and the buying events, and that the two events are related (the EU is withdrawing from buying Russian oil).\nFinally, the user can see an event-centric summary of the document, choosing to highlight either particular categories of event (e.g., Crime, Military, Money) or particular participants (e.g., Ukraine, Putin, Russia). When one or more categories or participants are selected, the system will highlight the corresponding events in both the original text and, where possible, in the machine translation. An example of a Farsi document is shown in Figure 3. Here, the system is highlighting three events in the document where Russia is either an agent or a patient of an event. For this demo, we use simple heuristics over English translations to group participant names and descriptions; in future work we plan to incorporate a zero-shot implementation of document co-reference to do this in the original language." }, { "figure_ref": [ "fig_3" ], "heading": "Cross-Lingual Event-Centric Search", "publication_ref": [ "b1", "b1", "b6" ], "table_ref": [], "text": "The second mode of the ISI-CLEAR demo allows users to employ English queries to search over events extracted from a foreign language corpus. To enable this, we repurpose our work in crosslingual document retrieval (Barry et al., 2020) to index and search over event arguments rather than whole documents. A query may specify target event types as well as agent, patient, or location arguments; it may also include additional words to con- Query specification. We allow queries to be specified in two ways. The first simply asks the user to directly specify the query in structured form: using checkboxes to indicate which event types should be included and directly typing in values for each condition (agent, patient, etc.). A second and more intuitive method allows users to enter a query as natural language. The system processes the query using the ISI-CLEAR event system and populates a structured query automatically from the results. For instance, if the user enters the phrase anti-inflation protests in Vietnam, ISI-CLEAR will detect a Protest event with location Vietnam in that phrase. It will turn this result into a query with event type Protest, location Vietnam, and additional context word anti-inflation.\nDisplay. We display corpus events in ranked order with respect to the user query. The ranking is a combination of system confidence in the underlying extractions (e.g., is this event really located in Vietnam?) and system confidence in the cross-lingual alignment (e.g., is étudiants internationaux really a good match for the query phrase foreign students?). To estimate the latter, we rely on our prior work in cross-lingual retrieval, where we developed state-of-the-art methods to estimate the likelihood that foreign text f conveys the same meaning as English text e (Barry et al., 2020). We note that for locations, we include containing countries (as determined via Wikidata) in the index so that a search for Iran will return events happening in, e.g., Tehran. More specific details on the ranking functions can be found in Appendix A.3.\nAs part of our display, we break down system confidence by query condition-that is, we separately estimate the system's confidence in the agent vs., say, the location. For each condition, we display a \"traffic light\" indicator that shows the system's confidence in that condition for an event. Red, yellow, and green indicate increasing levels of confidence; black indicates that there is no evidence for a match on this condition, but that other conditions matched strongly enough for the event to be returned. A sample natural language query and search results are shown in Figure 4.\nCorpora. For this demo, we support two corpora: (1) 20,000 Farsi news documents drawn from Common Crawl1 and (2) ∼55K Weibo messages (in Chinese) on the topic of the Russo-Ukrainian crisis (Fung and Ji, 2022)." }, { "figure_ref": [], "heading": "Ontology & Training Data", "publication_ref": [ "b5" ], "table_ref": [], "text": "The ISI-CLEAR demo system is compatible with any event ontology that identifies a set of event types and argument roles. The system expects sentence-level English training data that identifies, for each event, one or more anchor spans and zero or more argument spans (with roles). For this demonstration, we use the \"basic event\" ontology and data developed for the IARPA BET-TER program (available at https://ir.nist. gov/better/). The ontology consists of 93 event types and a small set of argument roles (agent, patient, and related-event). In other settings, we have trained and tested the underlying system on the publicly available ACE event ontology2 , showing stateof-the-art zero-shot cross-lingual results in (Fincke et al., 2022). We prefer the BETTER ontology for this demo because of its broad topical coverage and its inclusion of event-event relations (in the form of related-event arguments). The ISI-CLEAR system is also designed to attach general-purpose when and where arguments to any event, regardless of ontology; see section 4.5." }, { "figure_ref": [], "heading": "System Components", "publication_ref": [], "table_ref": [], "text": "We present here the highlights of our technical approach, which relies on a collection of strong, language-agnostic models to perform all aspects of event extraction and the classification of relationships between events, as well as machine translation and foreign-to-English projection of event output (for display purposes)." }, { "figure_ref": [], "heading": "Ingest & Tokenization", "publication_ref": [ "b12" ], "table_ref": [], "text": "Consistent with XLM-RoBERTa, we use Sentence Piece (Kudo and Richardson, 2018) to tokenize text, and at extraction time, our models label each input subword separately. For languages where words are typically surrounded by whitespace, our system then expands spans to the nearest whitespace (or punctuation) to improve overall performance. If the system produces a conflicting sequence of la-bels for a single word, we apply simple heuristics leveraging label frequency statistics to produce just one label." }, { "figure_ref": [], "heading": "Anchor Detection", "publication_ref": [ "b5" ], "table_ref": [], "text": "ISI-CLEAR performs anchor identification and classification using a simple beginning-insideoutside (BIO) sequence-labeling architecture composed of a single linear classification layer on top of the transformer stack. For more details please see (Fincke et al., 2022)." }, { "figure_ref": [], "heading": "Argument Attachment", "publication_ref": [ "b5" ], "table_ref": [], "text": "For argument attachment, we consider one event anchor A and one role R at a time. We encourage the system to focus on A and R by modifying the input to the language model. For instance, when A=displaced and R=1 (agent), the input to the language model will be displaced ; 1 </s> Floods < displaced > thousands last month. This modification encourages the language model to produce representations of tokens like thousands that are contextualized by the anchor and role being examined. The argument attachment model concatenates the language model output vector for each input token with an embedding for event type and applies a linear classifier to generate BIO labels. For more details please see (Fincke et al., 2022)." }, { "figure_ref": [], "heading": "Event-Event Relations", "publication_ref": [], "table_ref": [], "text": "ISI-CLEAR can handle arbitrary event-event relations within a sentence, including the special case of event co-reference (when a given event has two or more anchor spans). We consider one event anchor A 1 at a time. Again we modify the input to the language model (by marking A 1 with special characters on either side) to encourage the model to consider all other anchors in light of A 1 . We then represent each event anchor in the sentence (including A 1 itself) as a single vector, generated by feeding the language model output for its constituent tokens into a bi-LSTM and then concatenating the bi-LSTM's two final states. (This allows us to smoothly handle multi-word anchors.) To identify the relationship between A 1 and A 2 , if any, we then concatenate the representations for A 1 and A 2 and pass the result to a linear classifier. The final step optimizes over the scores of all such pairwise classifications to label all relations in the sentence." }, { "figure_ref": [], "heading": "When & Where", "publication_ref": [ "b19" ], "table_ref": [], "text": "The ontology used for this demonstration (described in Section 3) does not annotate when and where arguments. However, these event attributes are critical for downstream utility. We therefore deploy an ontology-agnostic model that can assign dates and locations to events of any type. To do this, we train a question-answering model to answer questions such as <s> When/Where did the {anchor} happen? </s> Context </s>. We first train the model on the SQUAD2 dataset (Rajpurkar et al., 2016) and then continue training on the event location and time annotations in the English ACE dataset." }, { "figure_ref": [], "heading": "Machine Translation & Projection", "publication_ref": [ "b7", "b4" ], "table_ref": [], "text": "All event extraction happens in the target language; no machine translation (or bitext) is required. However, for system output to be useful to English speakers, translation is highly beneficial. Here, we rely on the 500-to-1 translation engine developed by our collaborators at ISI (Gowda et al., 2021) 3 . Translation happens after event extraction. We have not optimized this deployment of MT for speed, so we display the results without translation first and then (when the small light in the top toolbar turns green, usually after a few seconds), we can refresh the screen to show results with translations added.\nTo project anchor and argument spans into machine translation, we require no parallel data for training. Instead, we leverage the fact that the pre-trained XLM-RoBERTa embeddings are well aligned across languages and have been shown to be effective for word alignment tasks (Dou and Neubig, 2021). The similarity of a word in a foreign-language sentence to a word in the parallel English sentence is determined by the cosine distance between the embeddings of the two words." }, { "figure_ref": [], "heading": "System Evaluation & Analysis", "publication_ref": [ "b22", "b10", "b16", "b18", "b10" ], "table_ref": [ "tab_0", "tab_2" ], "text": "We evaluate our system on a variety of languages and ontologies and compare where possible to existing baselines. Following community practice, e.g. Zhang et al. (2019), we consider an anchor correct if its offsets and event type are correct, and we consider an argument correct if its offsets, event type, and role find a match in the ground truth. For event coreference (same-sentence only), we consider each anchor pair separately to produce an overall F-score.\nTable 1 provides overall scores in several settings where multi-lingual event annotations are available. All models are trained on English data only. For the ACE data, we follow (Huang et al., 2022). The BETTER Basic task is described in Section 3; there are two ontologies (Basic-1 and Basic-2) from different phases of the originating program. The BET-TER Abstract task is similar to BETTER Basic, but all action-like phrases are annotated as events, with no further event type specified4 ; valid roles are only agent and patient (McKinnon and Rubino, 2022). More dataset statistics are found in Appendix A.1.\nIt is difficult to compare system accuracy across languages; a lower score in one language may reflect a real difference in performance across languages-or just that one set of documents is harder than another. Still, we observe the following. First, performance on anchors seems most sensitive to language choice-for instance, we note that Arabic and Chinese anchor performance on ACE differs by almost 10 points. For arguments, however, non-English performance is relatively consistent given a task-but varies more widely between tasks. Second, we note that cross-lingual performance seems best on anchors, where it exceeds 80% of English performance for all but one condition. In contrast, argument performance varies more widely, with many conditions below 70% of English (though some as high as 89%).\nWe also compare against existing published baselines where possible. There are relatively few pub- lished results on cross-lingual event anchor detection (and none that we could find on the task of cross-lingual event co-reference as defined here).\nTo benchmark performance on anchors, we turn to MINION (Pouran Ben Veyseh et al., 2022), a multi-lingual anchor-only dataset that uses a derivative of the ACE ontology. For a fair comparison, we retrained our model (tuned for use with XLM-RoBERTa large) with XLM-RoBERTa base; we did not adjust any hyperparameters. For argument detection, much more published work exists, and we show in Table 3 that ISI-CLEAR achieves state-of-the-art performance on all ACE datasets, comparing against the previous state-of-the-art as reported in Huang et al. (2022 " }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b14", "b13", "b17", "b0", "b21" ], "table_ref": [], "text": "Several recent demos have presented multi-lingual event extraction in some form, but most assume training data in each target language (e.g. Li et al. (2019) or Li et al. ( 2020)) or translate foreignlanguage text into English before processing (e.g. Li et al. (2022)). In contrast, the focus of our demo is making events available in languages for which no training data exists. Other demos have shown the potential of zero-shot cross-lingual transfer, but on unrelated tasks, e.g. offensive content filtering (Pelicon et al., 2021). Akbik et al. (2016) uses annotation projection from English FrameNet to build target-language models for frame prediction; the focus of the demo is then on building effective queries over language-agnostic frame semantics for extraction. Finally, Xia et al. (2021) also produce FrameNet frames cross-lingually (using XLM-RoBERTa), but in contrast to our work, several of their supporting models use target-language data, and they also supply only a simpler user interface and lack the cross-lingual search-by-query capability that is a key aspect of our demo." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "ISI-CLEAR provides a monolingual Englishspeaking user with effective access to global events, both on-demand (extracting events from input of a user's choice) or as a set of indexed documents accessible via cross-lingual search. The system provides a variety of visualizations and modes for engaging with system results. We look forward to future work improving the quality of the underlying components and exploring additional capabilities to cross language barriers and expand access to information around the globe." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Our core approach is limited by the underyling multi-lingual language model it employs. For this demo, we are therefore limited to the 100 languages that make up the XLM-RoBERTa training set. Performance also varies across languages, tracking in part (though not in whole) with the volume of training data available for each language when building the multi-lingual language model. For instance, anecdotally, the performance on Yiddish (34M tokens in the CC-100 corpus used to train XLM-RoBERTa) is inferior to that of Farsi (13259M tokens). We have provided empirical results for eleven languages and five tasks, but it would be ideal to have a broader set of test conditions; unfortunately, annotated datasets for events are much less common than for simpler tasks like named entity recognition.\nA second limitation of our system involves compute requirements. We employ multiple separate components for event extraction (e.g., for anchor detection vs. argument attachment), which increases memory/GPU footprint compared to a more unified system. Finally, our system assumes an existing ontology and (English) training data set; it would be interesting to explore zero-shot ontology expansion in future work." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "One important note is that our system is designed to extract information about events that are reported in text, with no judgment about their validity. This can lead a user to draw false conclusions. For instance, the system might return many results for a person X as the agent of a Corruption event, but this does not necessarily mean that X is actually corrupt. This should be prominently noted in any use case for this demonstration system or the underlying technologies." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Dataset Statistics", "publication_ref": [], "table_ref": [], "text": "We report results for a variety of different tasks in a variety of different languages. We outline the sizes for these diverse datasets in Tables 4 and5. The tasks use five different ontologies; we also report the number of event types for each ontology in Table 6." }, { "figure_ref": [], "heading": "A.2 Speed", "publication_ref": [ "b20" ], "table_ref": [], "text": "Table 7 presents speed results for six representative languages, calculated as number of seconds per 100 \"words\". For this exercise we consider words to be the output of UDPipe's language-specific tokenization (Straka, 2018). The primary driver of speed difference is that, given XLM-RoBERTa's fixed vocabulary, different languages will split into more or fewer subwords on average. For instance, an average Korean word will produce at least half again as many subwords than, say, an average Farsi word; this is presumably why 100 words of Korean takes about 70% longer to process than 100 words of Farsi. On average, for a standard short news article (200 words), we expect to wait about two seconds for extraction and an additional six or seven seconds for MT and projection. We did not optimize our selection of MT package for speed (e.g., it decodes one sentence at a time instead of batching); this could easily be updated in future work to be more efficient." }, { "figure_ref": [], "heading": "A.3 Search Ranking", "publication_ref": [ "b1" ], "table_ref": [], "text": "ISI-CLEAR extracts a large number of events from the documents indexed from search, some of which vary in quality and some of which will match more or less confidently to an English query. The ranking function described here significantly improves the usability of our search results.\nThe goal of our search ranking function is to rank each extracted event E with respect to a user query Q. To calculate score(Q, E), we combine two separate dimensions of system confidence:\n1. Cross-lingual alignment confidence (CAC): are the components of E reasonable translations of the query terms? For instance, is étudiants internationaux a good match for the query phrase foreign students? Here, we assume the existence of a cross-lingual retrieval method cac(e, f ) that estimates the likelihood that foreign text f conveys the same meaning as English text e, as in our prior work (Barry et al., 2020). 2. Extraction confidence (EC): how likely is it that the elements of E were correctly extracted in the first place? Here we use confidence measures (denoted ec) produced by individual system components. To combine these dimensions, we consider each query condition separately (summing the results). For simplicity we describe the scoring function for the agent condition:\nscore(Q agent , E agent ) = β * ec(E agent ) * cac(Q agent , E agent ) + (1 -β) * cac(Q agent , E sentence )\nThe first term of this equation captures the two dimensions described above. The second term allows us to account for agents missed by the system, letting us give \"partial credit\" when the user's search term is at least found in the nearby context (e.g., in E sentence ). Based on empirical observation, we set β to 0.75.\nWe follow the same formula for patient and location. For context we use only the final term cac(Q topic , E sentence ) since context does not directly correspond to an event argument.\nFor now, event type operates as a filter with no score attached; in future work we will incorporate both the system's confidence in the event type as well as a fuzzy match over nearby event types (e.g., allowing for confusion between Indict and Convict)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research is based upon work supported in part by the Office of the Director of National Intelli-gence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via Contract No. 2019-19051600007. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "en ar fa ko ru zh Event 1.1 1.0 0.9 1.5 0.8 1.1 Display n/a 2.6 2.8 4.1 3.4 3.9\nTable 7: Processing speed (seconds per 100 words). Event processing includes ingest, tokenization, anchors, arguments, event-event relationships, and when/where extraction. Display processing includes components solely required for display (MT and projection). We use 11GB GTX 1080Ti GPUs for extraction/projection and use a 48GB Quadro RTX 8000 GPU for MT." } ]
In this paper, we present ISI-CLEAR, a stateof-the-art, cross-lingual, zero-shot event extraction system and accompanying user interface for event visualization & search. Using only English training data, ISI-CLEAR makes global events available on-demand, processing user-supplied text in 100 languages ranging from Afrikaans to Yiddish. We provide multiple event-centric views of extracted events, including both a graphical representation and a document-level summary. We also integrate existing cross-lingual search algorithms with event extraction capabilities to provide crosslingual event-centric search, allowing Englishspeaking users to search over events automatically extracted from a corpus of non-English documents, using either English natural language queries (e.g. cholera outbreaks in Iran) or structured queries (e.g. find all events of type Disease-Outbreak with agent cholera and location Iran).
Massively Multi-Lingual Event Understanding: Extraction, Visualization, and Search
[ { "figure_caption": "Figure 1 :1Figure 1: Text-based display of Polish news. The user provides only the Polish text. To aid an English-speaking user, ISI-CLEAR displays the extracted event information not only in Polish but also in English. All processesincluding anchor detection, argument extraction, machine translation and span-projection-are carried out in real time.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Graph-based display of event information extracted from user provided text in Polish.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Event-centric summary of Farsi document.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Example of search results.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Component-level accuracy by language / task. Dataset statistics are available in Appendix A.1. ACE lacks same-sentence event coreference so those figures are omitted. Event coreference is peripheral to the overall Abstract task; we chose to not model it explicitly and exclude it here.", "figure_data": "TaskACEBasic-1Basic-2AbstractLanguageenarzhenarenfaenarfakoAnchors71.2 58.1 49.6 64.2 52.5 64.6 54.3 87.4 78.3 72.5 78.9Arguments72.1 51.5 51.7 64.5 51.5 71.6 64.0 69.8 45.0 45.7 45.0Event coreference ---83.4 67.9 86.5 65.9 ----", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table2that the ISI-CLEAR model performs on average 2.7 points better than the reported MINION numbers for cross-lingual settings. We also show the numbers from our actual demo models (trained with XLM-RoBERTa large) for comparison.", "figure_data": "baselargeMINION ISI-CLEAR ∆ ISI-CLEARen79.578.9-0.678.0es62.862.3-0.565.3pt72.871.1-1.775.0pl60.152.6-7.566.4tr47.252.0+4.856.5hi58.272.2+14.072.7ko56.864.1+7.363.5AVG59.762.4+2.766.6Table 2: Cross-lingual anchor detection (F1) for MIN-ION dataset, training on English only. Average isacross all cross-lingual settings.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "). Cross-lingual argument detection (F1) for ACE over gold anchors, training on English only.", "figure_data": "X-GEAR ISI-CLEARen71.272.1ar44.851.5zh51.551.7", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Chris Jenkins; Shantanu Agarwal; Joel Barry; Steven Fincke; Elizabeth Boschee
[ { "authors": "Alan Akbik; Laura Chiticariu; Marina Danilevsky; Yonas Kbrom; Yunyao Li; Huaiyu Zhu", "journal": "", "ref_id": "b0", "title": "Multilingual information extraction with PolyglotIE", "year": "2016" }, { "authors": "Joel Barry; Elizabeth Boschee; Marjorie Freedman; Scott Miller", "journal": "European Language Resources Association", "ref_id": "b1", "title": "SEARCHER: Shared embedding architecture for effective retrieval", "year": "2020" }, { "authors": "Elizabeth Boschee; Joel Barry; Jayadev Billa; Marjorie Freedman; Thamme Gowda; Constantine Lignos; Chester Palen-Michel; Michael Pust; Kayang Banriskhem; Srikanth Khonglah; Jonathan Madikeri; Scott May; Miller", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "SARAL: A low-resource cross-lingual domain-focused information retrieval system for effective rapid document triage", "year": "2019" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Zi-Yi Dou; Graham Neubig", "journal": "", "ref_id": "b4", "title": "Word alignment by fine-tuning embeddings on parallel corpora", "year": "2021" }, { "authors": "Steven Fincke; Shantanu Agarwal; Scott Miller; Elizabeth Boschee", "journal": "", "ref_id": "b5", "title": "Language model priming for cross-lingual event extraction", "year": "2022" }, { "authors": "Yi R Fung; Heng Ji", "journal": "", "ref_id": "b6", "title": "A weibo dataset for the 2022 russo-ukrainian crisis", "year": "2022" }, { "authors": "Thamme Gowda; Zhao Zhang; Chris Mattmann; Jonathan May", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Many-to-English machine translation tools, data, and pretrained models", "year": "2021" }, { "authors": "Junjie Hu; Sebastian Ruder; Aditya Siddhant; Graham Neubig; Orhan Firat; Melvin Johnson", "journal": "", "ref_id": "b8", "title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "Kuan-Hao Huang; I-Hung Hsu; Prem Natarajan; Kai-Wei Chang; Nanyun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Multilingual generative language models for zero-shot crosslingual event argument extraction", "year": "2022" }, { "authors": "Jalili Masoud; Philipp Sabet; François Dufter; Hinrich Yvon; Schütze", "journal": "", "ref_id": "b11", "title": "SimAlign: High quality word alignments without parallel training data using static and contextualized embeddings", "year": "2020" }, { "authors": "Taku Kudo; John Richardson", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for neural text processing", "year": "2018" }, { "authors": "Manling Li; Revanth Gangi Reddy; Ziqi Wang; Yishyuan Chiang; Tuan Lai; Pengfei Yu; Zixuan Zhang; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "COVID-19 claim radar: A structured claim extraction and tracking system", "year": "2022" }, { "authors": "Manling Li; Ying Lin; Joseph Hoover; Spencer Whitehead; Clare Voss; Morteza Dehghani; Heng Ji", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "Multilingual entity, relation, event and human value extraction", "year": "2019" }, { "authors": "Manling Li; Alireza Zareian; Ying Lin; Xiaoman Pan; Spencer Whitehead; Brian Chen; Bo Wu; Heng Ji; Shih-Fu Chang; Clare Voss; Daniel Napierski; Marjorie Freedman", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "GAIA: A fine-grained multimedia knowledge extraction system", "year": "2020" }, { "authors": "Timothy Mckinnon; Carl Rubino", "journal": "", "ref_id": "b16", "title": "The IARPA BETTER program abstract task four new semantically annotated corpora from IARPA's BET-TER program", "year": "2022" }, { "authors": "Andraž Pelicon; Ravi Shekhar; Matej Martinc; Blaž Škrlj; Matthew Purver; Senja Pollak", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Zero-shot cross-lingual content filtering: Offensive language and hate speech detection", "year": "2021" }, { "authors": "Ben Amir Pouran; Minh Veyseh; Franck Van Nguyen; Thien Dernoncourt; Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "MINION: a large-scale and diverse dataset for multilingual event detection", "year": "2022" }, { "authors": "Pranav Rajpurkar; Jian Zhang; Konstantin Lopyrev; Percy Liang", "journal": "", "ref_id": "b19", "title": "SQuAD: 100,000+ Questions for Machine Comprehension of Text", "year": "2016" }, { "authors": "Milan Straka", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "UDPipe 2.0 prototype at CoNLL 2018 UD shared task", "year": "2018" }, { "authors": "Patrick Xia; Guanghui Qin; Siddharth Vashishtha; Yunmo Chen; Tongfei Chen; Chandler May; Craig Harman; Kyle Rawlins; Aaron Steven White; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "LOME: Large ontology multilingual extraction", "year": "2021" }, { "authors": "Tongtao Zhang; Ji Heng; Avirup Sil", "journal": "Data Intelligence", "ref_id": "b22", "title": "Joint entity and event extraction with generative adversarial imitation learning", "year": "2019" } ]
[ { "formula_coordinates": [ 9, 316.1, 156.07, 198.34, 43.71 ], "formula_id": "formula_0", "formula_text": "score(Q agent , E agent ) = β * ec(E agent ) * cac(Q agent , E agent ) + (1 -β) * cac(Q agent , E sentence )" } ]
10.1207/s15516709cog2402_4
2023-05-24
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b16", "b24", "b36", "b33", "b37", "b31", "b29", "b32", "b32", "b13", "b33", "b1", "b4", "b8", "b5" ], "table_ref": [], "text": "Noun compounds (NCs) are prevalent in English, but most individual NCs are infrequent (Kim and Baldwin, 2007). Yet, it is possible to derive the meaning of most NCs from the meanings of their constituent nouns. The task of noun compound interpretation (NCI) addresses this by explicitly uncovering the implicit semantic relation between the constituent nouns. We focus on the paraphrasing variant (Nakov and Hearst, 2006), where the goal is to generate multiple paraphrases that explicitly express the semantic relation between the constituents. For example (Figure 1), a chocolate bunny is a \"chocolate shaped like a bunny\".\nEarlier methods for NCI represented NCs as a function their constituents' representations (e.g. Van de Cruys et al., 2013;Shwartz and Dagan, 2018). In recent years, pre-trained language models (PLMs) caused a paradigm shift in NLP. Such models are based on the transformer architecture (Vaswani et al., 2017), which by design computes a word representation as a function of the representation of its context. Further, PLMs are pre-trained on vast amounts of text, which equips them with broad semantic knowledge (Rogers et al., 2020). Such knowledge may facilitate interpreting unseen NCs based on observed NCs that are semantically similar. Indeed, Ponkiya et al. (2020) showed that a masked language model is useful for this task, and Shwartz (2021) demonstrated the utility of generative language models on this task.\nWe formalize the experiments presented in Shwartz (2021) and evaluate generative models on NCI. We manually analyze and correct many problems with the standard SemEval 2013 task 4 dataset (Hendrickx et al., 2013), and release a cleaned version of the dataset. Following the criticism in Shwartz and Dagan (2018) on the task's dedicated evaluation metrics, we propose a more complete set of evaluation metrics including both automatic metrics and human evaluation.\nOur experiments show that a few-shot model based on GPT-3 (Brown et al., 2020) achieves near-perfect performance on the NCI test set. The impressive performance may be due to a combination of factors. First, it tends to memorize texts seen during pre-training (Carlini et al., 2022), likely including partial or complete definitions of common NCs. Second, it has learned vast commonsense and world knowledge from its pre-training corpus, which-together with its ability to generalizemay be useful for interpreting less frequent NCs.\nTo test the extent that GPT-3 reasons about its knowledge as opposed to memorizes definitions, we propose a second task: noun compound conceptualization (NCC). The setup is identical to NCI, but the NCs are rare or novel (e.g., chocolate crocodile in Fig. 1), requiring a model to come up with a plausible interpretation based on its existing knowledge. We construct a test set for this task based on data from Dhar and van der Plas (2019). The results show that GPT-3 outperforms humans on NCC, presumably thanks to its fast access to a huge \"knowledge base\", and compared to the relative human slowness on this task (Connell and Lynott, 2012).\nYet, compared to its performance on NCI, GPT-3's performance on NCC shows a significant drop. We thus quantify the extent that GPT-3 copies from its pre-training corpus when generating paraphrases for either NCI or NCC. We find that the generated paraphrases have significant overlap with a large web-based corpus, but that as expected, the copying strategy is less beneficial for NCC than for NCI.\nWe anticipate that the cleaned dataset and proposed evaluation setup will be adopted by the research community for NCI, and hope to see further research on NCC.1 2 Background" }, { "figure_ref": [], "heading": "Noun Compound Interpretation", "publication_ref": [ "b19", "b15", "b35", "b33", "b24", "b13", "b3", "b17", "b26", "b33", "b33", "b29", "b33", "b32" ], "table_ref": [], "text": "Traditionally, NCI has been framed as a classification task into predefined relation labels. Datasets differed by the number of relations and their specificity level; from 8 prepositional relations (e.g. of, from, etc.; Lauer, 1995), to finer-grained inventories with dozens of relations (e.g. contains, purpose, time of; Kim and Baldwin, 2005;Tratz and Hovy, 2010). The classification approach is limited because even the larger relation inventories don't cover all possible relationships between nouns. In addition, each NC is classified to a single relation, although several relations may be appropriate. E.g., business zone is both a zone that contains businesses and a zone whose purpose is business (Shwartz and Dagan, 2018).\nFor these reasons, in this paper we focused on the task of interpreting noun compounds by producing multiple free-text paraphrases (Nakov and Hearst, 2006). The reference paraphrases could be any text, but in practice they typically follow a \"[n 2 ] ... [n 1 ]\" pattern, where n 1 and n 2 are the constituent nouns. The main dataset for this task comes from SemEval 2013 task 4 (Hendrickx et al., 2013), following a similar earlier task (Butnariu et al., 2009).\nEarlier methods for this task reduced the paraphrasing task into a classification task to one of multiple paraphrase templates extracted from a corpus (Kim and Nakov, 2011;Paşca, 2015;Shwartz and Dagan, 2018). Shwartz and Dagan (2018) jointly learned to complete any item in the ([n 1 ], [n 2 ], paraphrase template) tuple, which allowed the model to generalize, predicting paraphrases for rare NCs based on similarity to other NCs.\nMore recently, Ponkiya et al. (2020) showed that PLMs already capture this type of knowledge from their pre-training. They used an offthe-shelf T5 model to predict the mask substitutes in templates such as \"[n 2 ] [MASK] [n 1 ]\", achieving a small improvement over Shwartz and Dagan (2018). Shwartz (2021) further showed that supervised seq2seq models based on PLMs and a few-shot model based on GPT-3 yielded correct paraphrases for both common and rare NCs." }, { "figure_ref": [], "heading": "Forming and Interpreting new Concepts", "publication_ref": [ "b38", "b6", "b5", "b6", "b5", "b18", "b8", "b21", "b7", "b23" ], "table_ref": [], "text": "Research in cognitive science studied how people interpret new noun-noun combinations such as cactus fish (e.g. Wisniewski, 1997;Costello and Keane, 2000;Connell and Lynott, 2012). While such combinations invite various interpretations, there is usually a single preferred interpretation which is more intuitively understood. For example, a cactus fish would more likely mean \"a fish that is spiky like a cactus\" than \"a fish that is green like a cactus\", because \"spiky\" is more characteristic of cacti than \"green\" (Costello and Keane, 2000). Connell and Lynott (2012) constructed a set of 27 novel NCs and asked people to (1) judge the sensibility of an NC; and (2) come up with a plausible interpretation. The short response times for the sensibility judgment task indicated that participants relied on shallow linguistic cues as shortcuts, such as the topical relatedness between the constituent nouns. Response times in the interpretation generation task were longer, indicating that participants employed a slower process of mental simulation. Interpreting a new concept required building a detailed representation by re-experiencing or imagining the perceptual properties of the constituent nouns.\nComputational work on plausibility judgement for NCs involves rare NCs (Lapata and Lascarides, 2003) and novel NCs (Dhar and van der Plas, 2019). The latter built a large-scale dataset of novel NCs by extracting positive examples from different decades in the Google Ngram corpus for training and testing. Negative examples were constructed by randomly replacing one of the constituents in the NC with another noun from the data. They proposed an LSTM-based model that estimates the plausibility of a target NC based on the pairwise similarity between the constituents of the target NC and other, existing NCs. For example, the candidate NC glass canoe was predicted as plausible thanks to its similarity to glass boat.\nIn this paper, we go beyond plausibility judgement to the more complicated task of interpretation. In concurrent work, Li et al. (2022) conducted similar experiments evaluating GPT-3's ability to define common and new noun compounds, as well as combinations of nonce words. They found no evidence that GPT-3 employs human-like linguistic principles when interpreting new noun compounds, and suggested it might be memorizing lexical knowledge instead. We further try to quantify the latter possibility in this work.\nSimilarly to novel NCs, Pinter et al. (2020b) look at novel blends from the NYTWIT corpus, collected automatically from a Twitter bot that tweets words published for the first time in the NYT (Pinter et al., 2020a). For example, thrupple is a blend of three and couple, used to describe \"A group of three people acting as a couple\". They found that PLMs struggled to separate blends into their counterparts.\nIn a related line of work on creativity, researchers proposed models that coin new words from existing ones. Deri and Knight (2015) generated new blends such as frenemy (friend + enemy). Mizrahi et al. (2020) generated new Hebrew words with an algorithm that is inspired by the human process of combining roots and patterns." }, { "figure_ref": [], "heading": "Noun Compound Interpretation", "publication_ref": [ "b13" ], "table_ref": [], "text": "We first evaluate PLMs' ability to interpret existing noun compounds. We focus on the free-text paraphrasing version of NCI, as exemplified in Table 2. We use the standard dataset from SemEval 2013 Task 4 (Hendrickx et al., 2013). We identified several problems in the dataset that we address in Sec 3.1. We then trained PLM-based models on the revised dataset (Sec 3.2), and evaluated them both automatically and manually (Sec 3.3)." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b13", "b13", "b11" ], "table_ref": [], "text": "We manually reviewed the SemEval-2013 dataset and identified several major issues with the data quality. We propose a revised version of the dataset, with the following modifications.\nTrain-Test Overlap. We discovered 32 NCs that appeared in both the training and test sets, and removed them from the test set.\nIncorrect Paraphrases. We manually corrected paraphrases with superficial problems such as spelling or grammatical errors, redundant spaces, and superfluous punctuation. We also identified and removed NCs that were semantically incorrect. For example, rubber glove was paraphrased to \"gloves has been made to get away from rubber\", perhaps due to the annotator mistaking the word rubber for robber. Finally, we found and removed a few paraphrases that contained superfluous or subjective additions, deviating from the instructions by Hendrickx et al. (2013). For example, tax reduction was paraphrased as \"reduction of tax hurts the economy\", and engineering work as \"work done by men in the field of engineering\". Further, we discarded a total of 14 NCs from the training set and 11 NCs from the test set that had no correct paraphrases. In total, we removed 1,960 paraphrases from the training set and 5,066 paraphrases from the test set.\n\"Catch-All\" Paraphrases. The paraphrases in Hendrickx et al. (2013) were collected from crowdsourcing workers. An issue with the crowdsourcing incentive structure, is that it indirectly encourages annotators to submit any response, even when they are uncertain about the interpretation of a given NC. In the context of this dataset, this incentive leads to what we call \"catch-all\" paraphrases. Such paraphrases include generic prepositional paraphrases such as \"[n 2 ] of [n 1 ]\" (e.g. \"drawing of chalk\"). Data Augmentation. To increase the size of the dataset in terms of paraphrases and facilitate easier training of models, we performed semi-automatic data augmentation. Using WordNet (Fellbaum, 2010), we extended the set of paraphrases for each NC by replacing verbs with their synonyms and manually judging the correctness of the resultant paraphrase. We also identified cases were two paraphrases could be merged into additional paraphrases. For example, steam train contained the paraphrases \"train powered by steam\" and \"train that operates using steam\", for which we added \"train operated by steam\" and \"train that is powered using steam\". Overall, we added 3,145 paraphrases to the training set and 3,115 to the test set. We followed the same train-test split as the original dataset, but dedicated 20% of the test set to validation. Table 1 displays the statistics of the NCI datasets." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b30", "b1", "b39", "b14" ], "table_ref": [], "text": "We evaluate the performance of two representative PLM-based models on our revised version of the SemEval-2013 dataset (henceforth: the NCI dataset): a supervised seq2seq T5 model (Raffel et al., 2020) and a few-shot prompting GPT-3 model (Brown et al., 2020). Supervised Model. We trained the seq2seq model from the Transformers package (Wolf et al., 2019), using T5-large. We split each instance in the dataset into multiple training examples, with the NC as input and a single paraphrase as output. We used the default learning rate (5 × 10 -5 ), batch size ( 16), and optimizaer (Adafactor). We stopped the training after 4 epochs when the validation loss stopped improving. During inference, we used topp decoding (Holtzman et al., 2020) with p = 0.9 and a temperature of 0.7, and generated as many paraphrases as the number of references for a given NC." }, { "figure_ref": [], "heading": "Few-shot", "publication_ref": [], "table_ref": [], "text": "Model. We used the text-davinci-002 GPT-3 model available through the OpenAI API. We randomly sampled 10 NCs, each with one of its paraphrases, from the training set, to build the following prompt:\nQ: what is the meaning of <NC>? A:<paraphrase> This prompt was followed by the same question for the target NC, leaving the paraphrases to be completed by GPT-3. We used the default setup of top-p decoding with p = 1 and a temperature of 1." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b33", "b13", "b33", "b22", "b20", "b40", "b22" ], "table_ref": [ "tab_3", "tab_2", "tab_3" ], "text": "We decided to deviate from the original evaluation setup of the SemEval 2013 dataset, which was criticized in Shwartz and Dagan (2018). We describe the original evaluation setup, and our proposed setup including automatic and manual evaluation.\nOriginal Evaluation Setup. The original Se-mEval task was formulated as a ranking task. The paraphrases of each NC were ranked according to the number of annotators who proposed them. Hendrickx et al. (2013) introduced two dedicated evaluation metrics, an 'isomorphic' score that measured the recall, precision, and order of paraphrases predicted by the systems, and a 'non-isomorphic' score that disregarded the order. Both metrics rewarded systems for predicting shorter prepositional paraphrases (e.g. \"[n 2 ] of [n 1 ]\"), that were in the set of paraphrases for many NCs, and were often ranked high because many annotators proposed them. For example, for the NC access road, the catch-all paraphrase \"road for access\" was ranked higher than the more informative \"road that provides access\". Indeed, as noted in Shwartz and Dagan (2018), a baseline predicting a fixed set of common, generic paraphrases already achieves moderately good non-isomorphic score. In general, we do not see the benefit of the ranking system, NC GPT-3 T5 access road road that provides access road for access reflex action a sudden, involuntary response to a stimulus action performed to perform reflexes sport page a page in a publication that is devoted to sports page dedicated to sports computer format the way in which a computer organizes data format used in computers grief process process of grieving or mourning process that a grief sufferer experiences since some of the most informative paraphrases are unique and are less likely to have been proposed by many annotators. Instead, we propose to use standard evaluation metrics for generative tasks, as we describe below.\nAutomatic Evaluation. Table 3 (columns 2-4) displays the performance of T5 and GPT-3 on the test set using the following standard evaluation metrics for text generation tasks: the lexical overlap metrics ROUGE-L (Lin, 2004) and METEOR (Lavie and Agarwal, 2007), and the semanticsimilarity metric BERT-Score (Zhang et al., 2020). These metrics compare the system generated paraphrases with the reference paraphrases, further motivating our data augmentation in Sec 3.1 (e.g., Lin (2004) found that considering multiple references improves ROUGE's correlation with human judgements). For each metric m, we compute the following score over the test set T:\ns = mean nc∈T mean p∈system(nc) max r∈references(nc) m(p, r)\nIn other words, we generate a number of paraphrases equal to the number of reference paraphrases, then find the most similar reference for each of the generated paraphrases, and average across all paraphrases for each NC in the test set.\nThe automatic metrics show a clear preference to T5. However, upon a closer look at the outputs of each model, it seems that T5 generated paraphrases that more closely resembled the style and syntax of the references, as expected from a supervised model, but the paraphrases were not \"more correct\" than those outputted by GPT-3. For example, in Table 2, the paraphrase generated by GPT-3 for reflex action is correct but doesn't follow the syntax of the references in the training data ([n 2 ] ...\n[n 1 ]). The T5-generated paraphrase follows that syntax but generates the generic and inaccurate paraphrase \"action performed to perform reflexes\". More broadly, lexical overlap based metrics such as ROUGE and METEOR penalize models for lexical variability.\nHuman Evaluation. To assess the quality of predictions in a more reliable manner, we turn to human evaluation. We used Amazon Mechanical Turk (MTurk) and designed a human intelligence task (HIT) which involved displaying an NC along with 10 generated paraphrases, 5 from GPT-3 and 5 from T5, randomly shuffled. We asked workers to indicate for each paraphrase whether they deemed it acceptable or not. Each HIT was to be performed by 3 workers, and acceptability was measured using majority voting. To ensure the quality of workers, we required that workers reside in the US, Canada, or the UK, and that they had an acceptance rate of at least 99% for all prior HITs. We also required them to pass a qualification task that resembled the HIT itself. We paid each worker $0.10 per task, which yielded an approximate hourly wage $15.\nThe last column in Table 3 presents the results of the human evaluation in terms of percentage of paraphrases deemed acceptable by a majority of human evaluators. GPT-3 performed remarkably well with over 95% of generated paraphrases deemed acceptable by a majority of human evaluators. In contrast to the automatic metrics, T5 fared much worse on human evaluation, and human annotators judged a third of T5 outputs as incorrect." }, { "figure_ref": [], "heading": "Noun Compound Conceptualization", "publication_ref": [ "b10", "b34", "b5", "b33", "b38", "b6" ], "table_ref": [], "text": "GPT-3's impressive success at interpreting existing noun compounds is related to PLMs' ability to associate nouns with their hypernyms (Ettinger, 2020) and to generate accurate definitions for terms (Shwartz et al., 2020). Such models are trained on vast amounts of texts, including said definitions, and the target NC itself occurring alongside contexts that indicate its meaning. Humans are different in their ability to interpret NCs. We can often rely on a single context, or no context at all, to have at least an educated guess at the meaning of a new NC. We are capable of representing new concepts by \"mentally manipulating old ones\" (Connell and Lynott, 2012), e.g. coming up with a plausible interpretation for chocolate crocodile based on similar concepts such as chocolate bunny.\nPrior work on NCI simulated this by training a model to jointly predict a paraphrase as well as answer questions such as \"what can chocolate be shaped like?\" (Shwartz and Dagan, 2018). We are interested in learning whether PLMs already do this implicitly, or more broadly, to what extent can PLMs interpret new noun compounds?\nInspired by studies in cognitive science about \"conceptual combination\" (Wisniewski, 1997;Costello and Keane, 2000), we define the task of Noun Compound Conceptualization (NCC). NCC has the same setup as NCI ( §3), but the inputs are rare or novel noun compounds. The task thus requires some level of creativity and the ability to make sense of the world. We first describe the creation of the NCC test set (Sec 4.1). We evaluate the best model from Sec 3.2 on the new test set, and present the results in Sec 4.2." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b0", "b30", "b12" ], "table_ref": [], "text": "We construct a new test set consisting of novel or rare NCs. The guidelines for adding an NC for the test set are that: (a) humans could easily make sense of it; but (b) it is infrequent in or completely absent from the web.\nNoun Compounds. The main source for the test set is a dataset from Dhar and van der Plas (2019). They proposed the task of classifying an unseen sequence of two nouns to whether it can form a plausible NC or not. The data was created by extracting noun-noun bigrams from the Google Ngram corpus (Brants, 2006). To simulate novel NCs, the models were trained on bigrams that only appeared in the corpus until the year 2000 and evaluated on bigrams that only appeared after 2000. Since GPT-3 was trained on recent data, we had to make sure that we only include the most infrequent NCs. We thus further refined the data from Dhar and van der Plas (2019) by including only the 500 most infrequent NCs based on their frequency in a largescale text corpus, C4 (Raffel et al., 2020). We then semi-automatically filtered out named entities, compounds that were part of larger expressions, and NCs with spelling errors. Finally, we manually chose only the NCs for which we could come up with a plausible interpretation, leaving us with 83 NCs in total.\nWe added 22 more NCs that we extracted in a similar manner from the Twitter sentiment 140 dataset (Go et al., 2009). We expected to find more \"ad-hoc\" NCs in tweets than in more formal texts such as news. Due to the age and size of this dataset, we filtered the NCs based on frequency in C4, setting the threshold to 250 occurances. Overall, our NCC test set contains a total of 105 NCs.\nParaphrases. We collected reference paraphrases for the NCC test set using MTurk. We showed workers the target NC and asked them to paraphrase the NC or give their best estimate if they are unfamiliar with the NC. We used the same qualifications as in Sec 3.3, and paid $0.12 per HIT." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We focus on GPT-3 due to its almost perfect performance on NCI. We evaluated GPT-3 on the NCC test set using the few-shot setup described in Sec 3.2. We selected the few-shot examples from the NCI training set.\nWe focus on human evaluation (as described in Sec 3.3), which is more reliable than automatic metrics. We asked workers to judge the validity of both human-written and GPT-3 generated paraphrases.\nTable 4 shows that GPT-3 performs significantly better than humans at this task. GPT-3 benefits from access to huge amounts of data. We conjecture that even though the target NCs are rare in its training data, it likely observed similar NCs, and is able to generalize and make sense of new concepts. At the same time, while humans are in general ca- pable of coming up with a plausible interpretation for an unfamiliar concept, it is an effortful and cognitively taxing task. We hypothesize that in a setup other than crowdsourcing, i.e. given more time or incentive, human performance may increase.\nCompared to its performance on NCI, GPT-3's performance on NCC shows a significant drop. This may suggest that GPT-3 struggles to reason about certain rare NCs, which we investigate in the next section." }, { "figure_ref": [], "heading": "Does GPT-3 Parrot its Training Data?", "publication_ref": [], "table_ref": [], "text": "While GPT-3 performs fairly well on NCC, looking at failure cases brings up interesting observations. For example, one of its responses for chocolate crocodile was \"A large, aggressive freshwater reptile native to Africa\". This response seems to have ignored the chocolate part of the NC entirely, and opted to provide an answer to \"What is a crocodile?\". Much like a student who doesn't know the answer to a question so instead regurgitates everything they memorized about the topic in hopes that it will include the correct answer. 3To quantify the extent to which GPT-3 may be parroting its training corpus, we look at n-gram overlap between GPT-3's generated paraphrases and the large-scale web-based corpus C4 (Raffel et al., 2020). 4 Figure 2 displays the percents of n-grams among the generated paraphrases (for n = {3, 4, 5}) that occur in the C4 corpus 0, 1-5, or 5+ times, for each of the NCI and NCC test sets. The results are presented separately for paraphrases deemed correct and incorrect by human evaluators.\nWe learn several things from Figure 2. First, the generated paraphrases often had significant overlap with the corpus (34-94%). As expected, trigrams are copied more than 4-grams, which are copied more than 5-grams, as those tend to be rarer.\nSecond, for the NCI test set, for each n, we see that n-grams from the correct paraphrases are copied from the web more often than n-grams from the incorrect paraphrases. The trend is reversed for NCC, where incorrect paraphrases are copied from the web more often than correct ones. Naturally, the copying strategy is less useful for NCC, which requires reasoning about new concepts. When GPT-3 generates correct paraphrases for NCC, their ngrams tend to not appear in the web at all.\nWe reach a similar conclusion by looking at the percent of n-grams in correct vs. incorrect paraphrases that are copied from the web. The vast majority of n-grams copied from the web (97%) for the NCI test set were correct, as opposed to only 80% for NCC." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [ "b30" ], "table_ref": [], "text": "We evaluated PLMs on their ability to paraphrase existing and novel noun compounds. For interpre-C4 (Raffel et al., 2020) is a colossal, cleaned version of Common Crawl, thus it is the closest to GPT-3's training corpus. tation of existing NCs (NCI), we released a cleaned version of the SemEval 2013 dataset, with manual correction and automatic augmentation of paraphrases, and proposed additional evaluation metrics to overcome limitations described in prior work. GPT-3 achieved near perfect performance on this new test set. We then investigated the task of noun compound conceptualization (NCC). NCC evaluates the capacity of PLMs to interpret the meaning of new NCs. We showed that GPT-3 still performs reasonably well, but its success can largely be attributed to copying definitions or parts of definitions from its training corpus." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b5", "b25" ], "table_ref": [], "text": "Human performance on NCC. The human accuracy on NCC was 73%, compared to 83% for GPT-3. We know from cognitive science research that humans are capable of forming new concepts based on existing ones (Connell and Lynott, 2012). Moreover, we manually selected NCs in the NCC test set that we could come up with a plausible interpretation for. The fact that 27% of the paraphrases proposed by MTurk workers were judged as incorrect could be explained by one of the following. The first explanation has to do with the limitations of crowdsourcing. To earn enough money, workers need to perform tasks quickly, and conceptualization is a slow cognitive process. On top of that, a worker that has already spent considerable amount of time trying to come up with a plausible interpretation for a new NC, is incentivized to submit any answer they managed to come up with, regardless of its quality. Skipping a HIT means lost wages. In a different setup, we hypothesize that human performance may increase for this task.\nThe second explanation has to do with the evaluation setup. We asked people to judge paraphrases as correct or incorrect. Upon manual examination of a sample of the human-written paraphrases, we observed a non-negligible of reasonable (but not optimal) paraphrases that were annotated as incorrect. For future work, we recommend doing a more nuanced human evaluation that will facilitate comparing the outputs of humans and models along various criteria.\nThe work focuses only on English. Our setup and data construction methods are fairly generic and we expect it to be straightforward to adapt them to other languages that use noun compounds. With that said, languages such as German, Nor-wegian, Swedish, Danish, and Dutch write noun compounds as a single word. Our methods will not work on these languages without an additional step of separating the NC into its constituent nouns, similar to unblending blends (Pinter et al., 2020b). In the future, we would like to investigate how well PLMs for other languages perform on NCI and NCC, especially for low-resource languages.\nLimitations of automatic metrics for generative tasks. Automatic metrics based on n-gram overlap are known to have low correlation with human judgements on various NLP tasks (Novikova et al., 2017). In particular, they penalize models for lexical variability. To mitigate this issue, we semi-automatically expanded the set of reference paraphrases using WordNet synonyms. Yet, we still saw inconsistencies with respect to the automatic metrics and human evaluation on NCI. The automatic metrics showed a clear preference to T5, which thanks to the supervision, learned to generate paraphrases that more closely resembled the style and syntax of the references. GPT-3's paraphrases, which were almost all judged as correct by human annotators, were penalized by the automatic metrics for their free form (e.g., they didn't always include the constituent nouns). For this reason, we focused only on human evaluation for NCC." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [ "b13", "b8", "b0", "b30", "b9" ], "table_ref": [], "text": "Data Sources. All the datasets and corpora used in this work are publicly available. The cleaned version of the NCI dataset is based on the existing Se-mEval 2013 dataset (Hendrickx et al., 2013). The NCs for the new NCC test set were taken from another publicly-available dataset (Dhar and van der Plas, 2019), based on frequencies in the Google Ngram corpus (Brants, 2006). To quantify Ngram overlap, we used the Allen AI version of the C4 corpus (Raffel et al., 2020;Dodge et al., 2021) made available by the HuggingFace Datasets package.5 Data Collection. We performed human evaluation using Amazon Mechanical Turk. We made sure annotators were fairly compensated by computing an average hourly wage of $15, which is well above the US minimum wage. We did not collect any personal information from annotators.\nModels. The models presented in this paper are for a low-level NLP task rather than for an appli-cation with which users are expected to interact directly. The generative models are based on PLMs, which may generate offensive content if prompted with certain inputs." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was funded, in part, by an NSERC USRA award, the Vector Institute for AI, Canada CIFAR AI Chairs program, an NSERC discovery grant, and a research gift from AI2." } ]
Noun compound interpretation is the task of expressing a noun compound (e.g. chocolate bunny) in a free-text paraphrase that makes the relationship between the constituent nouns explicit (e.g. bunny-shaped chocolate). We propose modifications to the data and evaluation setup of the standard task (Hendrickx et al., 2013), and show that GPT-3 solves it almost perfectly. We then investigate the task of noun compound conceptualization, i.e. paraphrasing a novel or rare noun compound. E.g., chocolate crocodile is a crocodile-shaped chocolate. This task requires creativity, commonsense, and the ability to generalize knowledge about similar concepts. While GPT-3's performance is not perfect, it is better than that of humans-likely thanks to its access to vast amounts of knowledge, and because conceptual processing is effortful for people (Connell and Lynott, 2012). Finally, we estimate the extent to which GPT-3 is reasoning about the world vs. parroting its training data. We find that the outputs from GPT-3 often have significant overlap with a large web corpus, but that the parroting strategy is less beneficial for novel noun compounds.
From chocolate bunny to chocolate crocodile: Do Language Models Understand Noun Compounds?
[ { "figure_caption": "Figure 1 :1Figure 1: An example NC (input) and paraphrases (output) for each of the NCI and NCC tasks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Example paraphrases generated using GPT-3 and T5 for NCs in the revised SemEval 2013 test set.", "figure_data": "Method METEOR ROUGE-L BERTScore HumanT569.8165.9695.3165.35GPT-356.2747.3191.9495.64", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance of the T5 and GPT-3 models on the revised SemEval 2013 test set.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Human evaluation performance (percent of correct paraphrases) of paraphrases proposed by people or generated by GPT-3 for the NCI and NCC test sets.", "figure_data": "Test SetNCINCCHuman Performance-73.33GPT-395.64 83.81", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Figure2: The percent of n-grams among the generated paraphrases (for n = {3, 4, 5}) that occur in the C4 corpus 0, 1-5, or 5+ times, for each of the NCI and NCC test sets, grouped by correct vs. incorrect generated paraphrases.", "figure_data": "01-55+100%94%89%75%65%46%100% 42%76%88%57%70%34%47%75%75%13%50%18%12%50%53%14%25%11%12% 23%36%46%25%4% 20%9% 34%9% 21%39%0%3% 3%5% 6%14%0%4% 8%345345CorrectIncorrectCorrectIncorrectCorrectIncorrectCorrectIncorrectCorrectIncorrectCorrectIncorrectNCINCC1", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Jordan Coil; Vered Shwartz
[ { "authors": "Thorsten Brants", "journal": "", "ref_id": "b0", "title": "Web 1t 5-gram version 1", "year": "2006" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b1", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b2", "title": "", "year": "" }, { "authors": "Cristina Butnariu; Nam Su; Preslav Kim; Nakov; Ó Diarmuid; Stan Séaghdha; Tony Szpakowicz; Veale", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "SemEval-2010 task 9: The interpretation of noun compounds using paraphrasing verbs and prepositions", "year": "2009" }, { "authors": "Nicholas Carlini; Daphne Ippolito; Matthew Jagielski; Katherine Lee; Florian Tramer; Chiyuan Zhang", "journal": "", "ref_id": "b4", "title": "Quantifying memorization across neural language models", "year": "2022" }, { "authors": "Louise Connell; Dermot Lynott", "journal": "", "ref_id": "b5", "title": "Flexible shortcuts: Linguistic distributional information affects both shallow and deep conceptual processing", "year": "2012" }, { "authors": "J Fintan; Mark T Costello; Keane", "journal": "Cognitive Science", "ref_id": "b6", "title": "Efficient creativity: Constraint-guided conceptual combination", "year": "2000" }, { "authors": "Aliya Deri; Kevin Knight", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "How to make a frenemy: Multitape FSTs for portmanteau generation", "year": "2015" }, { "authors": "Prajit Dhar; Lonneke Van Der Plas", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Learning to predict novel noun-noun compounds", "year": "2019" }, { "authors": "Jesse Dodge; Maarten Sap; Ana Marasović; William Agnew; Gabriel Ilharco; Dirk Groeneveld; Margaret Mitchell; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Documenting large webtext corpora: A case study on the colossal clean crawled corpus", "year": "2021" }, { "authors": "Allyson Ettinger", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b10", "title": "What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models", "year": "2020" }, { "authors": "Christiane Fellbaum", "journal": "Springer", "ref_id": "b11", "title": "Wordnet. In Theory and applications of ontology: computer applications", "year": "2010" }, { "authors": "Alec Go; Richa Bhayani; Lei Huang", "journal": "", "ref_id": "b12", "title": "Twitter sentiment classification using distant supervision", "year": "2009" }, { "authors": "Iris Hendrickx; Zornitsa Kozareva; Preslav Nakov; Ó Diarmuid; Stan Séaghdha; Tony Szpakowicz; Veale", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "SemEval-2013 task 4: Free paraphrases of noun compounds", "year": "2013" }, { "authors": "Ari Holtzman; Jan Buys; Li Du; Maxwell Forbes; Yejin Choi", "journal": "", "ref_id": "b14", "title": "The curious case of neural text degeneration", "year": "2020" }, { "authors": "Nam Su; Timothy Kim; Baldwin", "journal": "", "ref_id": "b15", "title": "Automatic interpretation of noun compounds using WordNet similarity", "year": "2005" }, { "authors": "Nam Su; Timothy Kim; Baldwin", "journal": "", "ref_id": "b16", "title": "Interpreting noun compounds using bootstrapping and sense collocation", "year": "2007" }, { "authors": "Nam Su; Preslav Kim; Nakov", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Large-scale noun compound interpretation using bootstrapping and the web as a corpus", "year": "2011" }, { "authors": "Mirella Lapata; Alex Lascarides", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Detecting novel compounds: The role of distributional evidence", "year": "2003" }, { "authors": "Mark Lauer", "journal": "", "ref_id": "b19", "title": "Designing statistical language learners: Experiments on noun compounds", "year": "1995" }, { "authors": "Alon Lavie; Abhaya Agarwal", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "METEOR: An automatic metric for MT evaluation with high levels of correlation with human judgments", "year": "2007" }, { "authors": "Siyan Li; Riley Carlson; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Systematicity in GPT-3's interpretation of novel English noun compounds", "year": "2022" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Moran Mizrahi; Stav Yardeni Seelig; Dafna Shahaf", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Coming to Terms: Automatic Formation of Neologisms in Hebrew", "year": "2020" }, { "authors": "Preslav Nakov; Marti Hearst", "journal": "", "ref_id": "b24", "title": "Using verbs to characterize noun-noun relations", "year": "2006" }, { "authors": "Jekaterina Novikova; Ondřej Dušek; Amanda Cercas Curry; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Why we need new evaluation metrics for NLG", "year": "2017" }, { "authors": "Marius Paşca", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Interpreting compound noun phrases using web search queries", "year": "2015" }, { "authors": "Yuval Pinter; Cassandra L Jacobs; Max Bittker", "journal": "International Committee on Computational Linguistics", "ref_id": "b27", "title": "NYTWIT: A dataset of novel words in the New York Times", "year": "2020" }, { "authors": "Yuval Pinter; Cassandra L Jacobs; Jacob Eisenstein", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Will it unblend?", "year": "2020" }, { "authors": "Girishkumar Ponkiya; Rudra Murthy; Pushpak Bhattacharyya; Girish Palshikar", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "Looking inside noun compounds: Unsupervised prepositional and free paraphrasing", "year": "2020" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Anna Rogers; Olga Kovaleva; Anna Rumshisky", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b31", "title": "A primer in BERTology: What we know about how BERT works", "year": "2020" }, { "authors": "Vered Shwartz", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "A long hard look at MWEs in the age of language models", "year": "2021" }, { "authors": "Vered Shwartz; Ido Dagan", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Paraphrase to explicate: Revealing implicit noun-compound relations", "year": "2018" }, { "authors": "Vered Shwartz; Peter West; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Unsupervised commonsense question answering with self-talk", "year": "2020" }, { "authors": "Stephen Tratz; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "A taxonomy, dataset, and classifier for automatic noun compound interpretation", "year": "2010" }, { "authors": "Tim Van De Cruys; Stergos Afantenos; Philippe Muller", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "MELODI: A supervised distributional approach for free paraphrasing of noun compounds", "year": "2013" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b37", "title": "Attention is all you need", "year": "2017" }, { "authors": "J Edward; Wisniewski", "journal": "Psychon Bull Rev", "ref_id": "b38", "title": "When concepts combine", "year": "1997" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b39", "title": "Huggingface's transformers: State-ofthe-art natural language processing", "year": "2019" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b40", "title": "Bertscore: Evaluating text generation with bert", "year": "2020" } ]
[ { "formula_coordinates": [ 5, 84.99, 535.86, 185.47, 29.07 ], "formula_id": "formula_0", "formula_text": "s = mean nc∈T mean p∈system(nc) max r∈references(nc) m(p, r)" } ]
2023-11-28
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b20", "b20", "b13", "b16", "b10", "b27", "b21" ], "table_ref": [], "text": "Neural Radiance Fields (NeRFs) [21] enable synthesizing novel views of complex scenes from a few 2D images with known camera positions. This neural network model can reproduce high-quality scenes from previously invisible viewpoints based on the relationship between these basic images and computer graphics principles such as radiation tracking. NeRFs [21] represent a scene using a fully connected architecture. NeRF takes a 5D coordinate as the input: spatial location and camera positions. On outputs, we obtain color and volume density. The loss of NeRF is inspired by classical volume rendering [14]. We render the color of all rays passing through the scene. In practice, the shape and colors of the 3D object are encoded in neural network weights.\nNeRF architecture produces extremely sharp renders of new views of a static scene. Unfortunately, such a model has a few important limitations. NeRF must be trained on each object separately, and it does not generalize to unseen data. The training time is long since we encode the object's shape in neural network weights.\nTherefore, several modifications of NeRF have appeared to solve the above problems. In practice, trainable voxel-bounded implicit fields [17,11,28,22] can be used as a representation of 3D objects. Instead of encoding the 3D structure in the weights of the deep model, we train voxels and a small Z (x,y,z)" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "RGB σ", "publication_ref": [ "b7", "b6", "b30", "b8", "b12", "b1", "b17", "b0", "b24", "b32", "b25", "b11", "b15", "b14", "b20", "b3", "b4", "b16", "b22", "b23", "b28", "b29", "b10", "b27", "b21", "b7", "b2", "b9", "b23", "b31", "b6", "b30", "b33", "b7" ], "table_ref": [], "text": "Figure 1: In MultiPlaneNeRF approach, we divided 2D training images into two parts. The first one builds a 2D representation and is used as input to a small implicit decoder. The second part is used as a vanilla NeRF training data set. The representation of a 3D object containing n 2D images is part of the architecture. The implicit decoder takes the coordinates of the 3D point (x, y, z) and applies projection on the given 2D images. Then the aggregate information of the projected pixel Z (x,y,z) ∈ R 5k is used to predict the color RGB and the volume density σ.\nimplicit decoder (MLP) to predict RGB colors and volume density. Such a solution reduces the training and inference times, see Fig. 2. Alternatively, we can use a TriPlane concept described in [6,8], based on training orthogonal planes to represent 3D objects. Similar to voxel-type NeRFs, TriPlane-based models use a small MLP (implicit decoder) to aggregate information and predict RGB colors and volume density. The planes are trained together with the implicit decoder.\nVoxel and plane-based representations reduce the computational time and increase the model's accuracy. But the above models do not generalize well to unseen data. To solve such a problem, we can use existing 2D images and an extensive network to extract information [7,31]. Thanks to the trainable feature extractor, we can train models on a large number of various objects. But architecture becomes vast and training time increases drastically.\nIn this paper, we present MultiPlaneNeRF2 -new NeRF model with easy-to-train small architecture, which has generalization properties. Our model works directly on 2D images. We project 3D points on 2D images to produce non-trainable representations. The projection step is not parametrized, and a very shallow decoder can efficiently process the representation. In MultiPlaneNeRF, we split the initial set of 2D training images into two subsets. The first one is used to build a 2D representation and further used as input to a small implicit decoder, see Fig. 2. The second one is utilized as a training set to calculate the weights of the decoder. Furthermore, we can train MultiPlaneNeRF on a large data set and force our implicit decoder to generalize across many objects. Consequently, we can only change the 2D image to produce a NeRF representation of the new object.\nMultiPlaneNeRF decoder can be used not only as a NeRF representation of a 3D object but also similarly to the TriPlane decoder as a component in a large generative model such as GAN [6].\nTo summarize, the contributions of our work are the following:\n• We propose a new method dubbed MultiPlaneNeRF which uses non-trainable representations of 3D objects.\n• MultiPlaneNeRF achieves comparable results to state-of-the-art models for synthesizing new views and can generalize to unseen objects by changing image-based representation without additional training.\n• We propose MultiPlaneGAN -a GAN-based generative model that uses a MultiPlane decoder as an interpretable representation of the 3D objects. 2 Related Works 3D objects can be represented by using many different approaches, including voxel grids [9], octrees [13], multi-view images [2,18], point clouds [1,25,33], geometry images [26], deformable meshes [12,16], and part-based structural graphs [15].\nThe above representations are discreet, which causes some problems in real-life applications. In contrast to such apprehension, NeRF [21] represents a scene using a fully-connected architecture. NeRF and many generalizations [4,5,17,23,24,29,30] synthesize novel views of a static scene using differentiable volumetric rendering.\nOne of the largest limitations is training time. To solve such problems in [11], authors propose Plenoxels, a method that uses a sparse voxel grid storing density and spherical harmonics coefficients at each node. The final color is the composition of tri-linearly interpolated values of each voxel.\nIn DVGO [28] also optimize voxel grids of features for fast radiance field reconstruction. In [22], authors use a similar approach, but space is divided into an independent multilevel grid. In [8], authors represent a 3D object as an orthogonal tensor component. A small MLP network, which uses orthogonal projection on tensors, obtains the final color and density. There exist some methods which use additional information to Nerf, like depth maps or point clouds [3,10,24,32].\nMany approaches are dedicated to training models on a few existing views. Most of the method uses a large feature extractor trained on many different objects to allow generalization properties. In [7], authors build a cost volume at the reference view by warping 2D neural features onto multiple sweeping planes. Then authors use a 3D CNN to aggregate information. In [31], authors use a large feature extractor and a new projection strategy to build a 3D object representation that can generalize across different objects. In the end, a small network called a ray transformer aggregates information.\nIn pixelNeRF [34], the convolutions layer transfers input images to produce a representation for NeRF based model.\nEG3D [6] uses a tri-plane representation for 3D GANs; their representation is similar to our Ten-soRF [8] and exists only as a part of large generative models.\nThe above models solve some of the most critical NeRF limitations. But no model solves all of them simultaneously. In this paper, we present MultiPlaneNeRF -new NeRF model with easy-to-train small architecture, which has generalization properties." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "MultiPlaneNeRF: NeRF with generalization properties", "publication_ref": [ "b20", "b13", "b10", "b16", "b21", "b27", "b26", "b18", "b19", "b20", "b16" ], "table_ref": [], "text": "This section briefly describes the three most popular NeRF representations: vanilla NeRF, voxel NeRF, and TriPlane NeRF. Next, we provide the details about MultiPlaneNeRF the novel alternative rendering approach with non-parametric representations (see Fig. 2).\nNeRF representation of 3D objects Vanilla NeRF [21] is the model for representing complex 3D scenes using neural architectures. NeRF takes a 5D coordinate as input, which includes the spatial location x = (x, y, z) and viewing direction d = (θ, ψ) and returns emitted color c = (r, g, b) and volume density σ.\nA vanilla NeRF uses a set of images for training. In such a scenario, we produce many rays traversing through the image and a 3D object represented by a neural network. NeRF approximates this 3D object with an MLP network:\nF N eRF (x, d; Θ) = (c, σ).\nThe model is parameterized by Θ and trained to map each input 3D coordinate to its corresponding volume density and directional emitted color.\nThe loss of NeRF is inspired by classical volume rendering [14]. We render the color of all rays passing through the scene. The volume density σ(x) can be interpreted as the differential probability of a ray. The expected color C(r) of camera ray r(t) = o + td (where o is ray origin and d is direction) can be computed with an integral, but in practice, it is estimated numerically using stratified sampling. The loss is simply the total squared error between the rendered and true pixel colors:\nL = r∈R ∥ Ĉ(r) -C(r)∥ 2 2 ,(1)\nwhere R is the set of rays in each batch, and C(r), Ĉ(r) are the ground truth and predicted RGB colors for ray r respectively.\nThe predicted RGB colors Ĉ(r) can be calculated with formula:\nĈ(r) = N i=1 T i (1 -exp(-σ i δ i ))c i , where T i = exp   - i-1 j=1 σ i δ i   (2\n)\nwhere N is the number of samples, δ i is the distance between adjacent samples, and σ i denotes the opacity of sample i. This function for calculating Ĉ(r) from the set of (c i , σ i ) values is trivially differentiable.\nIn practice, we encode the structure of 3D objects into neural network weights. Such architecture is limited by network capacity or difficulty finding accurate intersections of camera rays with the scene geometry. Synthesizing high-resolution imagery from these representations often require time-consuming optical ray marching.\nNeural Voxel Fields Instead of modeling the entire space with a single implicit function, we can use voxel-bounded implicit fields [11,17,22,28]. Specifically, we assign a voxel embedding at each vertex and obtain a new representation. A small implicit decoder can aggregate information from voxel representation to model RGB colors and volume density, see sub-figure (b) in Fig. 2.\nWe assume that we have a grid n × m of voxels, and each of them is represented by trainable embedding v i,j :\nV = {v i,j ∈ R k }\n, where i = 1, . . . , n, and j = 1. . . . , m, is a representation of 3D object. In practice, we use a sparse voxel structure, where nodes exist only in non-empty areas of 3D space.\nA small implicit decoder aggregates information from such voxel representation of 3D objects to model RGB color and volume density: The model is parameterized by Θ and voxel representation V . We train the model to map each input 3D coordinate to its corresponding volume density and directional emitted color. In practice, for position x ∈ R 3 , we aggregate information from the six nearest voxels from the grid, see sub-figure (b) in Fig. 2. The loss function and rendering procedure are directly taken from vanilla NeRF.\nF V oxelN eRF (x, d; Θ, V ) = (c, σ).\nVoxel representation allows faster training, but there is a problem with scaling the solution to a higher resolution.\nRGB i+1,j+1 RGB i+1,j RGB i,j RGB i,j+1\nImage I\nx = (x, y, z) P r(x, I) Similarly to Voxel NeRF, we use a vanilla NeRF cost function and rendering procedure.\nThe primary advantage of this hybrid representation is keeping the decoder small and using explicit feature representation [27], NV [19], LLFF [20], and original NeRF [21], and Voxel based model NSFV [17] on rendering task. Our model obtains comparable results with the original NeRF and voxel-based method using less number of trainable parameters.\nFigure 5: Visualization of PSNR metric concerning the number of images used from object representations. We train MultiPlaneNeRF for 40k epochs. As we can see, our model obtains better results when we increase the number of images in representations.\nOur framework uses the projection of point x ∈ R 3 on n fixed 2D images. We stay in NeRF framework, which reconstructs real scenes with the camera-to-world scenario. In NeRF rendering procedure, we have ray and 2D images, and our goal is to model 3D coordinates. Using the same transformation, we can reverse the process to obtain world-to-camera. From 3D coordinates x = (x, y, z), we obtain 2D coordinates on image I. In practice we apply projection of x = (x, y, z) on image I:\nP r(x, I) = z I ∈ R 2 ,\nwhere z I represents the point in R 2 created by projecting x on image I.\nIn MultiPlaneNeRF approach, we assume that we have n training images I i for i = 1, . . . , n, which we understand as a non-trainable representation of a 3D object, each with a resolution of N × N × 3 (Fig." }, { "figure_ref": [ "fig_2" ], "heading": ". 2 (d))", "publication_ref": [], "table_ref": [], "text": ". Such images will be part of the architecture and must be stored with weights of MultiPlaneNeRF model. Our neural network takes 3D point coordinates x = (x, y, z) and applies projection on each of given 2D images: [P r(x, I 1 ), . . . , P r(x,\nI n )] = [z I1 , . . . , z I k ] ∈ R 2n .\nThen we add RGB color from 2D images. Since rays do not cross the coordinates of pixels, we use linear interpolation to obtain color in point x Ii on image I i , see Fig. 4. In consequence, we obtain input to NeRF implicit decoder:\nZ (x,y,z) = [I 1 [z I1 ], z I1 , . . . , I 1 [z], z In ] ∈ R 5n ,\nwhere MultiPlaneNeRF for generalization Our model is a very small fully-connected architecture. In practice, we do not use 3D trainable representation. Consequently, we have less number of trainable parameters than other NeRF-based models. Furthermore, such an approach allows us to generalize the model to unseen objects.\nI i [z Ii ] ∈ R 3 is\nIn practice, we need a full data set of 3D objects. Our experiments used a ShapeNet data set divided into training and test sets.\nOur model is trained on many objects with one category. After training, we fixed the weight of the implicit decoder. Then we use images from the test set without additional training, see Fig. 7.\nIn such experiments, we add the cameras' positions to the input of an implicit decoder. In the case of training on one element, such positions do not change rendering results. Therefore we use camera positions only in generalization tasks.\nWhen we built input to the implicit decoder, we applied projection on 2D images, concatenated colors and positions of projected points as well as potions of cameras\nZ (x,y,z) = [I 1 [z I1 ], z I1 , P (I 1 ), . . . , I 1 [z], z In , P (I n )] ∈ R 8n ,\nwhere \nI i [z Ii ] ∈ R 3 is" }, { "figure_ref": [ "fig_1" ], "heading": "Synthetic renderings of objects", "publication_ref": [ "b26", "b18", "b19", "b20", "b16", "b34" ], "table_ref": [], "text": "We first show experimental results on two data sets of synthetic renderings of objects using the Diffuse Synthetic 360 • and Realistic Synthetic 360 • . We compare our results with classical approaches SRN [27], NV [19], LLFF [20], and original NeRF [21], and Voxel based NSFV [17]. In Tab. 1, we present a numerical comparison. We compare the metric reported by NeRF called PSNR (peak signal-tonoise ratio), SSIM (structural similarity index measure), LPIPS (learned perceptual image patch similarity) used to measure image reconstruction effectiveness. We obtain similar results as vanilla NeRF and voxel-based NSFV, using fewer parameters. In Fig. 3, we present the qualitative results of MultiPlaneNeRF.\nFFHQ\nMultiPlaneNeRF is a NeRF-based model with a non-trainable representation and mall implicit decoder, which obtain similar results as models with trainable representations and larger implicit decoder.\nMultiPlaneNeRF for generalization In the case of an experiment for generalization, we use a ShapeNet base data set containing 50 images of each element from the plane, chair, and car classes.\nFor each object: fifty 200x200 transparent background images from random camera positions. Such representation is perfect for training 3D models since each element has been seen from many views. The data was taken from [35], where authors train an autoencoder-based generative model. In Fig. 7, we compare new renders obtained on the test set. As we can see, MultiPlaneNeRF can generalize to unseen objects and obtain slightly better results than auto-encoder baser architecture.\nAs we can see, we obtain good-quality objects, see Tab 2. " }, { "figure_ref": [ "fig_5" ], "heading": "Experiments", "publication_ref": [ "b20", "b34" ], "table_ref": [], "text": "We evaluate the proposed MultiPlaneNeRF on classical rendering and generalization tasks. In the first case, we compare our solution using the Diffuse Synthetic 360 • and Realistic Synthetic 360 • 3D object rendering provided by the original NeRF author's paper [21]. In the generalization task, we use the data set dedicated to training the auto-encoder-based models Points2NeRF [35].\nFurthered more, our model can generalize across different classes. We train MultiPlaneNeRF separately in three classes and evaluated in test sets from different classes. As we can see, the model gives similar results in training classes and unseen ones; see Tab. 3.\nIn MultiPlaneNeRF, we use existing images as a non-trainable representation. In evaluation, we can render objects by mixing images from two objects. Fig 8 shows the transition between objects using k images from the first object and n -k from the second one. We us k = 0%, 20%, 40%, 60%80% respectively. As we can see, our model produces reasonable interpolation between objects. MultiPlaneNeRF decoder in GAN architecture TriPlane decoder was introduced as part of the EG3D GAN [6]. EG3D uses a classical 2D generator to produce the Tri-Plane representation and 2D discriminator." }, { "figure_ref": [ "fig_6", "fig_7" ], "heading": "EG3D MultiPlaneGAN", "publication_ref": [], "table_ref": [], "text": "Our MultiPlane decoder can be used as a part of larger architecture. We add a MultiPlane decoder to EG3D GAN to show such properties. As a result, we obtain MultiPlane-GAN, an analog of EG3D GAN [6] with MultiPlane decoder instead of TriPlane. In Tab. 4, we compare MultiPlaneGAN with other models on two datasets FFHQ and ShapeNet Cars, see Fig. 9 and Fig. 10. As we can see, we obtained the second-best score in both examples. We have slightly worse results than EG3D GAN, but we produce interpretable representation since our planes are 2D images with three RGB channels.\nAblation studies In MultiPlaneNeRF, we use existing images as a non-trainable representation of 3D objects. To verify the influence of the number of such images and the resolution, we train MultiPlaneNeRF on \"Lego\" model from NeRF dataset see Fig. 5 and6. As we can see, our model obtains better results when we increase the number of images in representations. Also, the higher resolution allows us to obtain better-quality renders. neural renderer aggregates features from each of the 32-channel tri-planes and predicts 32-channel feature images from a given camera pose. This is followed by a \"super-resolution\" module to upsample and refine these raw neurally rendered images. The generated images are critiqued by a slightly modified StyleGAN2 discriminator (Sec. 4.3 [6])." }, { "figure_ref": [ "fig_8", "fig_9" ], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "EG3D MultiPlaneGAN\nFig. 11 gives an overview of MultiPlaneGAN architecture, which uses MultiPlane representation instead tri-plane. MultiPlaneGAN produces 32 planes consisting of 3 channels posed on a sphere containing an object. To place planes in 3D space we use the icosphere, see Fig. 12. Then MultiPlane decoder aggregates information to produce input to EG3D super-resolution module and then to EG3D discriminator.\nMultiPlaneGAN The entire pipeline is trained end-to-end from random initialization, using the nonsaturating GAN loss function with R1 regularization, following the training scheme in StyleGAN2. " } ]
NeRF is a popular model that efficiently represents 3D objects from 2D images. However, vanilla NeRF has some important limitations. NeRF must be trained on each object separately. The training time is long since we encode the object's shape and color in neural network weights. Moreover, NeRF does not generalize well to unseen data. In this paper, we present MultiPlaneNeRF -a model that simultaneously solves the above problems. Our model works directly on 2D images. We project 3D points on 2D images to produce non-trainable representations. The projection step is not parametrized and a very shallow decoder can efficiently process the representation. Furthermore, we can train MultiPlaneNeRF on a large data set and force our implicit decoder to generalize across many objects. Consequently, we can only replace the 2D images (without additional training) to produce a NeRF representation of the new object. In the experimental section, we demonstrate that MultiPlaneNeRF achieves results comparable to state-of-the-art models for synthesizing new views and has generalization properties. Additionally, MultiPlane decoder can be used as a component in large generative models like GANs.
MultiPlaneNeRF: Neural Radiance Field with Non-Trainable Representation
[ { "figure_caption": "Figure 2 :2Figure 2: Neural implicit representations use fully connected layers with position encoding to represent a scene (a). Explicit voxel grids or hybrid variants using small implicit decoders are fast but scale poorly with resolution (b). Hybrid explicit-implicit TriPlane representation is fast and well scale, but we must train its parameters (c). In Hybrid explicit-implicit MultiPlane representation, we use existing images as a representation and use a small implicit decoder to aggregate information. By ref color, we marked trainable parameters of respected models.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization of renders produce by MultiPlaneNeRF on NeRF Synthetic dataset scenes: Lego, Mic, Ship, Hotdog, Drums, Ficus.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: For input 3D point x = (x, y, z) we apply its projection on image I and obtain 2D coordinate P r(x, I). Then we use linear interpolation of colors from four closes pixel colors RGB i,j , RGB i+1,j , RGB i,j+1 , RGB i+1,j+1 to the estimated color RGB P r(x,I) in position P r(x, I). Position and colors [RGB P r(x,I) , P r(x, I)] are input to implicit decoder.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: Visualization of PSNR metric concerning the resolution of images used from object representations. We train MultiPlaneNeRF for 40k epochs on different image resolution and then render the final image with 800 × 800 size. As we can see, our model obtains better results when trained images and expected image resolution match.", "figure_data": "", "figure_id": "fig_3", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "RGB color of position z Ii on the image I i , and P (I i ) is position of camera dedicated to image I i . Our implicit decoder F M ultiP laneN eRF aggregates color and positions to produce RGB colors and the volume density σ F M ultiP laneN eRF (x, d; Θ, I 1 , P (I 1 ), . . . , I n , P (I n )) = (c, σ).", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: In MultiPlaneNeRF, we use existing images as a non-trainable representation. In evaluation, we can render objects by mixing images from two objects. The figure shows the transition between objects using k images from the first object and n -k from the second one. As we can see, our model produces reasonable interpolation between objects.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Comparison samples generated by Multi-PlaneGAN and EG3D on ShapeNet Cars.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Comparison samples generated by Multi-PlaneGAN and EG3D on FFHQ. This paper presents a new NeRF model with easy-to-learn small architecture with generalization properties. We use existing images instead of trainable representations like a voxel or TriPlane. We split the initial set of 2D training images into two subsets. The first one builds a 2D representation and is used as input to a small implicit decoder. The second part is used as a training data set to train the decoder. Using existing images as part of NeRF can significantly reduce the parameters since we train only a small implicit decoder. Furthermore, we can train MultiPlaneNeRF on a large data set and force our implicit decoder to generalize across many objects. Therefore, we can only change the 2D image (without additional training) to generate NeRF representation of the new object. MultiPlaneN-eRF gives comparable results to state-of-the-art models on synthesizing new views task. Moreover can generalize to unseen objects and classes. Limitations The main limitation of the model is the trade-off between rendering quality and generalization properties. By training the model on a large dataset, we get a slightly worse quality than training each object separately.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: MultiPlaneGAN framework comprises several parts: a pose-conditioned StyleGAN2-based feature generator and mapping network, a MultiPlane 3D representation with a lightweight feature decoder, a neural volume renderer, a super-resolution module, and a poseconditioned StyleGAN2 discriminator with dual discrimination. Such architecture is based on EG3D GAN [6]. The main handicap of MultiPlaneGAN is using 2D image-based representations.", "figure_data": "", "figure_id": "fig_8", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: In MultiPlaneGAN we use MultiPlanes located on a sphere.To speed training, EG3D use a two-stage training strategy in which we train with a reduced (642) neural rendering resolution followed by a short fine-tuning period at full (1282) neural rendering resolution. Additional experiments found that regularization to encourage smoothness of the density field helped reduce artifacts in 3D shapes. The following sections discuss major components of our framework in detail. For additional descriptions, implementation details, and hyperparameters, please see the supplement.", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Numerical comparison of our model and classical SRN", "figure_data": "PSNR ↑Chair Drums Ficus Hotdog Lego Materials MicShipSRN26.96 17.1820.73 26.8120.85 18.0926.85 20.60NV28.33 22.5824.79 30.7126.08 24.2227.78 23.93LLFF28.72 21.1321.79 31.4124.54 20.7227.48 23.22NeRF33.00 25.0130.13 36.1832.54 29.6232.91 28.65NSFV33.19 25.1832.29 34.2732.68 27.9337.14 31.23MultiPlaneNeRF 32.81 24.2828.22 35.7528.49 30.8032.70 27.39SSIM ↑Chair Drums Ficus Hotdog Lego Materials MicShipSRN0.910 0.7660.809 0.9470.808 0.7570.923 0.849NV0.916 0.8730.880 0.9460.888 0.7840.944 0.910NeRF0.967 0.9250.961 0.9800.949 0.8560.974 0.964NSFV0.968 0.9310.960 0.9870.973 0.8540.980 0.973MultiPlaneNeRF 0.972 0.9210.950 0.9740.953 0.9400.983 0.865LPIPS ↓Chair Drums Ficus Hotdog Lego Materials MicShipSRN0.106 0.2670.200 0.0630.174 0.2990.100 0.149NV0.109 0.2140.175 0.1070.130 0.2760.109 0.162NeRF0.046 0.0910.050 0.0280.063 0.2060.121 0.044NSVF0.043 0.0690.029 0.0100.021 0.1620.025 0.017MultiPlaneNeRF 0.026 0.0710.047 0.0330.047 0.0550.014 0.153", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "RGB color of position z Ii on the image I i . Our implicit decoder F M ultiP laneN eRF aggregates color and positions to produce RGB colors and the volume density σ F M ultiP laneN eRF (x, d, I 1 , . . . , I n ; Θ) = (c, σ). Comparison of average PSNR metric between our model and autoencoder-based model [35] trained on three classes of the same ShapeNet data. As we can see, we obtain slightly better renders. In the case of the TriPlane model, it is difficult to give the number of parameters since TriPlane exists only as a part of a large model like GAN or diffiusions.", "figure_data": "The loss function and rendering procedure are directly taken from vanilla NeRF.The consequence of using non-trainable im-MultiPlaneNeRF Auto-encoderages is an explicit representation that usesplanes (train)25.2824.83fewer parameters than other hybrid methods, which need to have different data structures optimized during training. For evaluation, MultiPlaneNeRF network contains similar architecture to NeRF [21], where the net-work uses ∼ 0.5M parameters. On the otherplanes (test) cars (train) cars (test) chairs (train) chairs (test)24.26 26.21 24.79 25.28 24.2614.18 28.14 20.86 23.90 17.17hand, NSVF [17] contains only one trainablenetwork shared with other voxel grids. Al-though the render network is smaller, eachscene requires a voxel-grid representation tobe trained, which requires ∼ 3.2M parame-ters. TrainedRender withoncarschairsplanescars24.9122.1521.32chairs24.4122.5120.84planes 24.1921.6924.27", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "MultiPlaneNeRF can generalize across different classes. The model is trained separately in three classes and evaluated in test sets from several classes. As we can see, the model gives similar results in training classes and unseen ones.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation using FID, KID×100, for FFHQ and ShapeNet Cars.", "figure_data": "CarsFIDKIDFIDKIDGIRAFFE 256 231.5 1.992 27.3 1.703π-GAN 128 229.9 3.573 17.3 0.932Lift. SG 256 229.8---EG3D 128 2--2.75 0.097MultiPlaneGAN 128 2--6.4 0.309EG3D 512 24.7 0.132--MultiPlaneGAN 512 2 15.4 1.007--", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Dominik Zimny; Artur Kasymov; Adam Kania; Jacek Tabor; Maciej Zieba; Przemyslaw Spurek
[ { "authors": "P Achlioptas; O Diamanti; I Mitliagkas; L Guibas", "journal": "PMLR", "ref_id": "b0", "title": "Learning representations and generative models for 3d point clouds", "year": "2018" }, { "authors": "A Soltani; H Huang; J Wu; T D Kulkarni; J B Tenenbaum", "journal": "", "ref_id": "b1", "title": "Synthesizing 3d shapes via modeling multi-view depth maps and silhouettes with deep generative networks", "year": "2017" }, { "authors": "D Azinović; R Martin-Brualla; D B Goldman; M Nießner; J Thies", "journal": "", "ref_id": "b2", "title": "Neural rgb-d surface reconstruction", "year": "2022" }, { "authors": "J T Barron; B Mildenhall; M Tancik; P Hedman; R Martin-Brualla; P P Srinivasan", "journal": "", "ref_id": "b3", "title": "Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields", "year": "2021" }, { "authors": "J T Barron; B Mildenhall; D Verbin; P P Srinivasan; P Hedman", "journal": "", "ref_id": "b4", "title": "Mip-nerf 360: Unbounded anti-aliased neural radiance fields", "year": "2022" }, { "authors": "E R Chan; C Z Lin; M A Chan; K Nagano; B Pan; S De Mello; O Gallo; L J Guibas; J Tremblay; S Khamis", "journal": "", "ref_id": "b5", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "A Chen; Z Xu; F Zhao; X Zhang; F Xiang; J Yu; H Su", "journal": "IEEE Computer Society", "ref_id": "b6", "title": "Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo", "year": "2021" }, { "authors": "A Chen; Z Xu; A Geiger; J Yu; H Su", "journal": "Springer", "ref_id": "b7", "title": "Tensorf: Tensorial radiance fields", "year": "2022" }, { "authors": "C B Choy; D Xu; J Gwak; K Chen; S Savarese", "journal": "Springer", "ref_id": "b8", "title": "3d-r2n2: A unified approach for single and multi-view 3d object reconstruction", "year": "2016" }, { "authors": "K Deng; A Liu; J.-Y Zhu; D Ramanan", "journal": "", "ref_id": "b9", "title": "Depth-supervised nerf: Fewer views and faster training for free", "year": "2022" }, { "authors": "S Fridovich-Keil; A Yu; M Tancik; Q Chen; B Recht; A Kanazawa", "journal": "", "ref_id": "b10", "title": "Plenoxels: Radiance fields without neural networks", "year": "2022" }, { "authors": "R Girdhar; D F Fouhey; M Rodriguez; A Gupta", "journal": "Springer", "ref_id": "b11", "title": "Learning a predictable and generative vector representation for objects", "year": "2016" }, { "authors": "C Häne; S Tulsiani; J Malik", "journal": "IEEE", "ref_id": "b12", "title": "Hierarchical surface prediction for 3d object reconstruction", "year": "2017" }, { "authors": "J T Kajiya; B P Von Herzen", "journal": "ACM SIGGRAPH computer graphics", "ref_id": "b13", "title": "Ray tracing volume densities", "year": "1984" }, { "authors": "J Li; K Xu; S Chaudhuri; E Yumer; H Zhang; L Guibas", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b14", "title": "Grass: Generative recursive autoencoders for shape structures", "year": "2017" }, { "authors": "T Li; T Bolkart; M J Black; H Li; J Romero", "journal": "ACM Trans. Graph", "ref_id": "b15", "title": "Learning a model of facial shape and expression from 4d scans", "year": "2017" }, { "authors": "L Liu; J Gu; K Zaw Lin; T.-S Chua; C Theobalt", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Neural sparse voxel fields", "year": "2020" }, { "authors": "Z Liu; Y Zhang; J Gao; S Wang", "journal": "Pattern Recognition", "ref_id": "b17", "title": "Vfmvac: View-filtering-based multi-view aggregating convolution for 3d shape recognition and retrieval", "year": "2022" }, { "authors": "S Lombardi; T Simon; J Saragih; G Schwartz; A Lehrmann; Y Sheikh", "journal": "", "ref_id": "b18", "title": "Neural volumes: Learning dynamic renderable volumes from images", "year": "2019" }, { "authors": "B Mildenhall; P P Srinivasan; R Ortiz-Cayon; N K Kalantari; R Ramamoorthi; R Ng; A Kar", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b19", "title": "Local light field fusion: Practical view synthesis with prescriptive sampling guidelines", "year": "2019" }, { "authors": "B Mildenhall; P P Srinivasan; M Tancik; J T Barron; R Ramamoorthi; R Ng", "journal": "", "ref_id": "b20", "title": "Nerf: Representing scenes as neural radiance fields for view synthesis", "year": "2020" }, { "authors": "T Müller; A Evans; C Schied; A Keller", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b21", "title": "Instant neural graphics primitives with a multiresolution hash encoding", "year": "2022" }, { "authors": "M Niemeyer; J T Barron; B Mildenhall; M S Sajjadi; A Geiger; N Radwan", "journal": "", "ref_id": "b22", "title": "Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs", "year": "2022" }, { "authors": "B Roessle; J T Barron; B Mildenhall; P P Srinivasan; M Nießner", "journal": "", "ref_id": "b23", "title": "Dense depth priors for neural radiance fields from sparse input views", "year": "2022" }, { "authors": "D W Shu; S W Park; J Kwon", "journal": "Pattern Recognition", "ref_id": "b24", "title": "Wasserstein distributional harvesting for highly dense 3d point clouds", "year": "2022" }, { "authors": "A Sinha; J Bai; K Ramani", "journal": "Springer", "ref_id": "b25", "title": "Deep learning 3d shape surfaces using geometry images", "year": "2016" }, { "authors": "V Sitzmann; M Zollhöfer; G Wetzstein", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b26", "title": "Scene representation networks: Continuous 3d-structureaware neural scene representations", "year": "2019" }, { "authors": "C Sun; M Sun; H.-T Chen", "journal": "", "ref_id": "b27", "title": "Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction", "year": "2022" }, { "authors": "M Tancik; V Casser; X Yan; S Pradhan; B Mildenhall; P P Srinivasan; J T Barron; H Kretzschmar", "journal": "", "ref_id": "b28", "title": "Block-nerf: Scalable large scene neural view synthesis", "year": "2022" }, { "authors": "D Verbin; P Hedman; B Mildenhall; T Zickler; J T Barron; P P Srinivasan", "journal": "IEEE", "ref_id": "b29", "title": "Ref-nerf: Structured view-dependent appearance for neural radiance fields", "year": "2022" }, { "authors": "Q Wang; Z Wang; K Genova; P Srinivasan; H Zhou; J T Barron; R Martin-Brualla; N Snavely; T Funkhouser", "journal": "IEEE Computer Society", "ref_id": "b30", "title": "Ibrnet: Learning multi-view image-based rendering", "year": "2021" }, { "authors": "Y Wei; S Liu; Y Rao; W Zhao; J Lu; J Zhou", "journal": "", "ref_id": "b31", "title": "Nerfingmvs: Guided optimization of neural radiance fields for indoor multi-view stereo", "year": "2021" }, { "authors": "F Yang; F Davoine; H Wang; Z Jin", "journal": "Pattern Recognition", "ref_id": "b32", "title": "Continuous conditional random field convolution for point cloud segmentation", "year": "2022" }, { "authors": "A Yu; V Ye; M Tancik; A Kanazawa", "journal": "", "ref_id": "b33", "title": "pixelnerf: Neural radiance fields from one or few images", "year": "2021" }, { "authors": "Dominik Zimny; Tomasz Trzcinski; Przemyslaw Spurek", "journal": "", "ref_id": "b34", "title": "Points2nerf: Generating neural radiance fields from 3d point cloud", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b35", "title": "GAN [. EG3D uses a classical 2D generator to produce the tri-plane representation and 2D discriminator. Our MultiPlane decoder can be used as", "year": "" }, { "authors": "", "journal": "", "ref_id": "b36", "title": "with a MultiPlane decoder instead of tri-plane", "year": "" } ]
[ { "formula_coordinates": [ 4, 252.6, 224.52, 106.79, 9.68 ], "formula_id": "formula_0", "formula_text": "F N eRF (x, d; Θ) = (c, σ)." }, { "formula_coordinates": [ 4, 252.54, 334.32, 252.13, 22.58 ], "formula_id": "formula_1", "formula_text": "L = r∈R ∥ Ĉ(r) -C(r)∥ 2 2 ,(1)" }, { "formula_coordinates": [ 4, 172.13, 422.5, 328.67, 33.53 ], "formula_id": "formula_2", "formula_text": "Ĉ(r) = N i=1 T i (1 -exp(-σ i δ i ))c i , where T i = exp   - i-1 j=1 σ i δ i   (2" }, { "formula_coordinates": [ 4, 500.8, 436.44, 3.87, 8.64 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 4, 188.54, 639.93, 69.87, 11.72 ], "formula_id": "formula_4", "formula_text": "V = {v i,j ∈ R k }" }, { "formula_coordinates": [ 4, 235.77, 713.17, 140.46, 9.68 ], "formula_id": "formula_5", "formula_text": "F V oxelN eRF (x, d; Θ, V ) = (c, σ)." }, { "formula_coordinates": [ 5, 397.91, 382.91, 73.14, 63 ], "formula_id": "formula_6", "formula_text": "RGB i+1,j+1 RGB i+1,j RGB i,j RGB i,j+1" }, { "formula_coordinates": [ 6, 263.26, 638.14, 85.49, 11.72 ], "formula_id": "formula_7", "formula_text": "P r(x, I) = z I ∈ R 2 ," }, { "formula_coordinates": [ 8, 294.01, 94.9, 115.42, 12.39 ], "formula_id": "formula_8", "formula_text": "I n )] = [z I1 , . . . , z I k ] ∈ R 2n ." }, { "formula_coordinates": [ 8, 212.81, 148.82, 186.39, 12.03 ], "formula_id": "formula_9", "formula_text": "Z (x,y,z) = [I 1 [z I1 ], z I1 , . . . , I 1 [z], z In ] ∈ R 5n ," }, { "formula_coordinates": [ 8, 137.37, 168.52, 69.98, 11.23 ], "formula_id": "formula_10", "formula_text": "I i [z Ii ] ∈ R 3 is" }, { "formula_coordinates": [ 8, 183.52, 652.65, 244.95, 12.03 ], "formula_id": "formula_11", "formula_text": "Z (x,y,z) = [I 1 [z I1 ], z I1 , P (I 1 ), . . . , I 1 [z], z In , P (I n )] ∈ R 8n ," }, { "formula_coordinates": [ 8, 135.19, 672.34, 59.72, 11.22 ], "formula_id": "formula_12", "formula_text": "I i [z Ii ] ∈ R 3 is" }, { "formula_coordinates": [ 9, 396.1, 273.14, 25.46, 8.64 ], "formula_id": "formula_13", "formula_text": "FFHQ" } ]
10.18653/v1/2021.emnlp-main.564
2023-05-23
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b12", "b1", "b13", "b14", "b18", "b17", "b21", "b7", "b16", "b0", "b10" ], "table_ref": [], "text": "Most state-of-the-art transformer-based large language models (LLMs) fall into two classes: unidirectional (or autoregressive) models, where each token is generated based on its left context (e.g., GPT models; Radford et al., 2019), and bidirectional models, where a token is predicted from both left and right context tokens, some of which may be masked (e.g., BERT; Devlin et al., 2018). Often, it is beneficial to compare these models' performance on controlled sentence generation benchmarks. Whereas unidirectional architectures offer a Figure 1: Three different ways to compute the PLL score of a multi-token word (e.g., souvenir) during masked language modeling. Purple: target token, pink: within-word tokens that are available during inference, turquoise: within-word tokens that are masked during inference. Sentence tokens that do not belong to the current word are always available during inference.\nnatural way of calculating sentence log-likelihood (summing the log-likelihood scores of each sentence token given its left context), there is no direct way of estimating sentence log-likelihood for a bidirectional model.\nSo far, the best available method to score a sentence under a bidirectional LLM has been the pseudo-log-likelihood (PLL) scoring approach described by Salazar et al. (2020) (and initially used by Shin et al., 2019;Wang and Cho, 2019). The PLL of a sentence is calculated as the sum of PLL scores for each token given all other sentence tokens, thus providing a comparable metric to unidirectional models' log-likelihood (LL) sentence scoring. The PLL metric is extremely popular; it is used extensively in LLM studies tackling topics as diverse as effects of training data (Sinha et al., 2021;Zhang et al., 2021), model fluency (Laban et al., 2021), syntactic and conceptual knowledge (Sinclair et al., 2022;Bhatia and Richie, 2022), social biases (Nangia et al., 2020), and others. Some of these studies have already accrued dozens of citations.\nHere, we show that the metric proposed by Salazar et al. (PLL-original) has important shortcomings that limit its utility. Specifically, PLL-original overestimates the PLL of outof-vocabulary (OOV) words, which LLM tokenizers split into multiple tokens. As a result, PLL-original scores fail on several theoretically desired property tests: a robust inverse relationship between sentence length and sentence PLL (Section 4.1), a robust positive correlation between a word's frequency and its PLL score (4.2), and a positive correlation between unidirectional and bidirectional model scores for the same sentences (Section 5). To remedy these issues, we propose an adjusted PLL metric, PLL-word-l2r (l2r: leftto-right), which estimates token PLL when future within-word tokens are also masked (Figure 1). We show that the PLL-word-l2r metric outperforms both PLL-original and alternative PLLbased metrics. We therefore recommend to use the PLL-word-l2r metric when estimating sentence PLL under a bidirectional LLM.\n2 Motivation: score inflation for multi-token words\nThe PLL-original metric grossly overestimates the probability of OOV lexical items, such as souvenir (Figure 2). This is because OOV words are tokenized into subword tokens (e.g., so ##uven ##ir), and each subword token is predicted using the token's bidirectional context, which crucially includes the remaining tokens that make up the OOV word. Thus, even though the OOV word itself may be surprising given the sentence context, the individual parts of the OOV word are not surprising to a bidirectional model given a sentence context that includes all other subtokens of that word (e.g., it is easy to predict so given ##uven ##ir; see Appendix A for additional examples).\nTo mitigate this bias, we adjust the PLL sentence scoring algorithm such that the model cannot access future within-word tokens (PLL-word-l2r) or any within-word tokens (PLL-whole-word) when predicting the target.\nBelow, we conduct a rigorous investigation of our modified metrics to determine whether this intuitive benefit holds quantitatively." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [ "b9", "b20", "b13" ], "table_ref": [], "text": "For our analysis, we adapt the scorer module of the minicons library (Misra, 2022), an open-source wrapper library around HuggingFace transformers (Wolf et al., 2020) that enables efficient extraction of word-and sentence-level probabilities from LLMs. The MLM scoring procedure of the minicons library follows the procedure originally proposed by Salazar et al. (2020). For details on sentence preprocessing, see Appendix B." }, { "figure_ref": [], "heading": "PLL metrics", "publication_ref": [ "b18" ], "table_ref": [], "text": "PLL-original. In this metric, each sentence token s t of a sentence S with n tokens is consecutively replaced with a [MASK] and is predicted using all past and future tokens, irrespective of whether the context tokens belong to the same or a different word than the target token. Thus, inference is conditioned on the context S \\t := (s 1 , . . . , s t-1 , s t+1 , . . . , s n ). The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context:\nPLL orig (S) := n t=1 log P MLM (s t | S \\t )(1)\nPLL-word-l2r. In this metric, a [MASK] is placed not only over the current target token (now: s wt ), but also over all future sentence tokens that belong to the same word s w as the target. Inference is then conditioned on a context that includes all preceding sentence tokens (including those belonging to the current word) and all sentence tokens from future words. The final score of a sentence S is obtained as the sum of the log probabilities of each of the |w| tokens in each of the |S| words: (2) PLL-whole-word. This metric is similar to PLL-word-l2r and differs from it only in that a [MASK] is placed over all sentence tokens that belong to the same word s w as the target (both preceding and future). Inference is then conditioned on a context that includes all sentence tokens except those belonging to the current word. The final score of a sentence S is obtained as the sum of the log probabilities of each of the |w| tokens in each of the |S| words in S given the token's context:\nPLL ww (S) := |S| w=1 |w| t=1 log P MLM (s wt | S \\sw ) (3)\nIn Appendix G, we also report results for a PLL metric where not only future within-word tokens, but all sentence tokens to the right of the target context are masked (PLL-sentence-l2r). Although this method is most similar to autoregressive LL scoring, sentence-l2r masking for BERT is known to produce poor quality generations (Wang and Cho, 2019); we therefore refrain from including this metric in the main text." }, { "figure_ref": [], "heading": "Models", "publication_ref": [], "table_ref": [], "text": "We report results for bert-base-cased (and gpt2-medium for comparison) unless stated otherwise. Results for larger models are provided in Appendices D-F." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b6", "b3", "b11", "b19" ], "table_ref": [], "text": "For our main analyses, we use the EventsAdapt dataset (Kauf et al., 2022, based on Fedorenko et al., 2020). It contains a curated set of 782 syntactically simple sentence pairs that describe plausible or implausible agent-patient interactions in active or passive voice (e.g., The traveler lost the souvenir). Sentences in this dataset are 5-7 words long (mean: 6.1, std: 1.05), with an average word log frequency of 10.95. We use this dataset because it contains a high number of OOV words (19.6% for BERT and 40.3% for GPT-2; see also Appendix C). In Appendices D-F, we show that our results generalize to two larger and more diverse corpora: the Brown corpus (Francis and Kucera, 1979) and the reference sentence set from the LibriSpeech corpus (Panayotov et al., 2015). We also apply our PLL metrics to score the sentences in the Benchmark of Linguistic Minimal Pairs (BLiMP) (Warstadt et al., 2020), a challenge set of 67k sentence pairs which target specific aspects of linguistic knowledge." }, { "figure_ref": [ "fig_1" ], "heading": "Evaluating PLL metric properties 4.1 Effects of sentence length", "publication_ref": [ "b13" ], "table_ref": [], "text": "Like Salazar et al. (2020), we expect that models should, on average, assign lower probability to longer sentences. Thus, negative PLL (which reflects model surprisal) should be positively correlated with sentence length. However, the PLL-original metric violates this expectation in our test sentence set, which shows a negative correlation between the number of tokens and negative PLL. In contrast, PLL-word-l2r and PLL-whole-word metrics exhibit a positive correlation between the number of sentence tokens and negative PLL, just as the negative LL scores for a unidirectional model, GPT2-medium (Figure 3A)." }, { "figure_ref": [ "fig_1" ], "heading": "Effects of word frequency", "publication_ref": [], "table_ref": [], "text": "An appropriate (P)LL metric should reflect the fact that LLMs are sensitive to distributional patterns in training text corpora. In particular, we expect more frequent words to have higher (P)LL scores in the absence of contextual effects. This is indeed the case for GPT2-medium; however, the score inflation for multi-token words means that the PLL-original metric grossly overestimates the scores for low-frequency words (Figure 3B). PLL-word-l2r scores restore this relationship: their correlation with word frequency is much higher than for PLL-original. PLL-whole-word also performs well, although its correlation with word frequency is lower than for PLL-word-l2r, suggesting that it excessively penalizes OOV words." }, { "figure_ref": [ "fig_2" ], "heading": "Correlation with GPT-2 scores", "publication_ref": [], "table_ref": [], "text": "We expect that PLL scores for bidirectional models should be at least somewhat consistent with LL scores for unidirectional models: both metrics are designed to serve are a proxy for sentence probability. Here, we show that the GPT-2/BERT score correlation for the PLL-original metric is very low, whereas correlation scores for PLL-word-l2r and PLL-whole-word are much higher (Figure 4), indicating the validity of this metric for cross-model comparison. As in Section 4.2, PLL-word-l2r slightly outperforms PLL-whole-word, likely because it does not penalize OOV words as severely.\nSee Appendices D-F for evidence that all three trends hold for larger models and for other datasets (although the effects in other datasets are attenuated due to a lower OOV ratio)." }, { "figure_ref": [], "heading": "Effects on benchmarking", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Here, we show that the choice of PLL metric affects benchmarking results for a popular, highly controlled, minimal pair linguistic benchmark: BLiMP. Despite the fact that the comparisons are highly controlled, different metrics yield different BLiMP scores. For all four tested models, PLL-word-l2r achieves the best overall BLiMP score (Table 1). See Appendix H for detailed scores." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We have shown that PLL-word-l2r is the preferred metric for evaluating sentence PLL under a masked language model, such as BERT. Although the results from studies using the PLL-original metric can still be informative, they become harder to interpret if the proportion of OOV words in their test set is high. Therefore, we recommend using PLL-word-l2r in future works." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b5" ], "table_ref": [], "text": "The proposed PLL-word-l2r metric has the same practical limitations as previous LL/PLL approaches. Most importantly, these scores can be influenced by many superfluous factors, such as the number of available synonyms (computer vs. laptop; Holtzman et al., 2021). We therefore expect our method to be most useful in highly controlled minimal pair or multiple choice setups.\nEven more accurate metrics may emerge in the future. For instance, our approach pre-specifies the number of tokens in a word, thus limiting the space of possible alternatives. Future approaches might investigate a way to normalize the PLL score distribution over words with a varying number of tokens. Further, it would be interesting to attempt to estimate the joint probability of all tokens in a word instead of predicting them left-to-right (as in PLL-word-l2r) or without any other within-word contextual information (as in PLL-whole-word).\nFinally, we test our approach on English text corpora; our results might not generalize to agglutinative languages (due to a high number of tokens per word and, therefore, increased uncertainty) and are of less relevance to isolating languages (where, if enough training data are available, most wordlevel items can be represented as single tokens)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In our proposed metric, word tokens are masked from left to right following the writing tradition in English; however, for speakers of languages such as Arabic, a \"right to left\" notation would be more intuitive. Note, however, that this is primarily a denotational difference that does not affect the score itself (LLMs do not discriminate left and right, only beginning and end). We do not anticipate any specific harms that would be intrinsically associated with the techniques described in this paper." }, { "figure_ref": [], "heading": "B Text preprocessing for (P)LL computation", "publication_ref": [ "b4" ], "table_ref": [], "text": "The minicons library borrows the MLM preprocessing algorithm from Salazar et al. ( 2020): [CLS] and [SEP] tokens are prepended and appended to the text, respectively, and are not masked during PLL computation. For CLMs, we minimally adjust the minicons scorer library default and necessarily prepend the beginning of sentence token, <|endoftext|>, to the text, which enables us to get a probability for the first actual sentence token (see also the lm-zoo library; Gauthier et al., 2020). The (P)LLs of all special tokens are not counted toward the final sentence/word score.\nWhen calculating the (P)LL score of individual words (to estimate word frequency effects), we place them in a neutral context My word is _. To ensure that the same pattern of results holds across multiple neutral contexts, we additionally test the context I opened the dictionary and randomly picked a word. It was _, as well as a nocontext setup. These additional results are reported in Appendix E.1.\nWord frequency was operationalized as the log of the number of occurrences of the word in the 2012 Google NGram corpus. Laplace smoothing was applied prior to taking the logarithm. The out-of-vocabulary (OOV) ratio per dataset, quantified as the number of words split into at least two tokens by a given model's tokenizer divided by the total number of words in the dataset." }, { "figure_ref": [], "heading": "C Quantification of out-of-vocabulary words per dataset", "publication_ref": [ "b12", "b8", "b1" ], "table_ref": [], "text": "GPT and RoBERTa models use byte-level Byte-Pair-Encoding tokenizers (Radford et al., 2019;Liu et al., 2019); BERT models use WordPiece tokenization (Devlin et al., 2018). " }, { "figure_ref": [], "heading": "D Effects of sentence length D.1 Larger LLM versions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.2 Larger datasets", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "E.2 Different datasets", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "F.2 Larger datasets", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "G Whole-sentence left-to-right token masking", "publication_ref": [ "b18", "b13", "b19" ], "table_ref": [], "text": "Here, we report results for the scoring algorithm that masks the target token, s t , and all sentence tokens to its right in a sentence S with n tokens (PLL-sentence-l2r). As in autoregressive language models, target token inference is thus conditioned solely on the token's leftward context: P MLM (s t | S <t ). The final sentence score is obtained as the sum of the log probabilities of each sentence token given its context:\nPLL sent (S) := n t=1 log P MLM (s t | S <t ) (4)\nOverall, the PLL-sentence-l2r metric satisfies the metric desiderata better than the PLL-original metric but worse than PLL-word-l2r. In addition, it is inferior to other metrics on the BLiMP evaluation benchmark (Appendix H), in line with previous reports of subpar generation quality (Wang and Cho, 2019). O v e r a l l Table 3: Unsupervised performance (forced choice accuracy) on all BLiMP benchmark paradigms, using the original and adjusted PLL sentence scoring methods. PLL-original scores replicate those reported in Salazar et al. (2020). Human scores are taken from Warstadt et al. (2020).\nA N A . A G R A R G S T R . B I N D I N G C T R L . R A I S . D -N A G R E L L I P S I S F I L L E R G A P I R R E G U L A R I S L A N D N P I Q U A N T I F I E R S S -V A G R BERT (" }, { "figure_ref": [], "heading": "H Detailed BLiMP benchmark results", "publication_ref": [ "b13" ], "table_ref": [ "tab_0" ], "text": "Table 3 shows results for each sentence suite within the BLiMP benchmark (in addition to the overall scores reported in the main text). All models shown in Tables 1 and3 are cased models. PLL-original scores replicate those reported in Salazar et al. (2020)." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank Jacob Andreas, Evan Hernandez, and the anonymous ACL reviewers for their insightful feedback. CK was supported by the K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center at MIT. AI was supported by MIT Quest for Intelligence." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "A Additional examples of score inflation " } ]
Estimating the log-likelihood of a given sentence under an autoregressive language model is straightforward: one can simply apply the chain rule and sum the log-likelihood values for each successive token. However, for masked language models (MLMs), there is no direct way to estimate the log-likelihood of a sentence. To address this issue, Salazar et al. (2020) propose to estimate sentence pseudolog-likelihood (PLL) scores, computed by successively masking each sentence token, retrieving its score using the rest of the sentence as context, and summing the resulting values. Here, we demonstrate that the original PLL method yields inflated scores for out-ofvocabulary words and propose an adapted metric, in which we mask not only the target token, but also all within-word tokens to the right of the target. We show that our adapted metric (PLL-word-l2r) outperforms both the original PLL metric and a PLL metric in which all within-word tokens are masked. In particular, it better satisfies theoretical desiderata and better correlates with scores from autoregressive models. Finally, we show that the choice of metric affects even tightly controlled, minimal pair evaluation benchmarks (such as BLiMP), underscoring the importance of selecting an appropriate scoring metric for evaluating MLM properties.
A Better Way to Do Masked Language Model Scoring
[ { "figure_caption": "Figure 2 :2Figure 2: The PLL-original metric inflates scores of multi-token words, such as souvenir; the adjusted metrics, PLL-word-l2r and PLL-whole-word, mitigate this issue. Example generated using the bert-base-cased model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Out of all PLL metrics, PLL-word-l2r best satisfies theoretical desiderata: (A) an inverse relationship between negative sentence PLL (a measure of model surprisal) and sentence length and (B) a positive correlation between word PLL and word log frequency. In (A), each dot is a sentence; in (B), each dot is a unique word from the dataset. Here and elsewhere, reported correlations are Pearson correlations.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Correlation between bidirectional model PLL scores and unidirectional model LL scores. Each dot is a sentence.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Sentence length effects for gpt2-xl and bert-large-cased on the EventsAdapt corpus.", "figure_data": "", "figure_id": "fig_3", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Sentence length effects for gpt2-medium and bert-base-cased on the LibriSpeech corpus.", "figure_data": "", "figure_id": "fig_4", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Sentence length effects for gpt2-medium and bert-base-cased on the Brown corpus.", "figure_data": "", "figure_id": "fig_5", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10:Word frequency effects for bert-base-cased on the EventsAdapt corpus. Word scores were retrieved with a neutral context: \"I opened a dictionary and randomly picked a word. It was _\".", "figure_data": "", "figure_id": "fig_6", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Word frequency effects for bert-base-cased on the EventsAdapt corpus. Word scores were retrieved without supporting context.", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12:Word frequency effects for bert-base-cased on the LibriSpeech corpus. Word scores were retrieved with a neutral context: \"My word is _\".", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13:Word frequency effects for bert-base-cased on the Brown corpus. Word scores were retrieved with a neutral context: \"My word is _\".", "figure_data": "", "figure_id": "fig_9", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Correlation between bert-large-cased and gpt2-xl scores on the EventsAdapt corpus.", "figure_data": "", "figure_id": "fig_10", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Correlation between bert-base-cased and gpt2-medium scores on the LibriSpeech corpus.", "figure_data": "", "figure_id": "fig_11", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 16 :16Figure 16: Correlation between bert-base-cased and gpt2-medium scores on the Brown corpus.", "figure_data": "", "figure_id": "fig_12", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Scores for the motivating example computed with PLL-sentence-l2r (bert-base-cased).", "figure_data": "", "figure_id": "fig_13", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Word frequency (A) and sentence length (B) effects for scores computed with PLL-sentence-l2r on the EventsAdapt corpus (bert-base-cased).", "figure_data": "", "figure_id": "fig_14", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Correlation between bert-base-cased and gpt2-medium scores computed with PLL-sentence-l2r on the EventsAdapt corpus.", "figure_data": "", "figure_id": "fig_15", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Bidirectional model performance on the BLiMP benchmark using different PLL metrics.", "figure_data": "ModelMetricOverall scorePLL-original84.2BERT (base)PLL-word-l2r84.7PLL-whole-word83.1PLL-original84.8BERT (large)PLL-word-l2r85.0PLL-whole-word82.6PLL-original85.4RoBERTa (base)PLL-word-l2r86.7PLL-whole-word85.4PLL-original86.5RoBERTa (large)PLL-word-l2r87.5PLL-whole-word85.9", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" } ]
Carina Kauf; Anna A Ivanova
[ { "authors": "Sudeep Bhatia; Russell Richie", "journal": "Psychological Review", "ref_id": "b0", "title": "Transformer networks of human conceptual knowledge", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b1", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Evelina Fedorenko; Idan Asher Blank; Matthew Siegelman; Zachary Mineroff", "journal": "Cognition", "ref_id": "b2", "title": "Lack of selectivity for syntax relative to word meanings throughout the language network", "year": "2020" }, { "authors": "Nelson Francis; Henry Kucera", "journal": "Letters to the Editor", "ref_id": "b3", "title": "Brown corpus manual", "year": "1979" }, { "authors": "Jon Gauthier; Jennifer Hu; Ethan Wilcox; Peng Qian; Roger Levy", "journal": "", "ref_id": "b4", "title": "Syntaxgym: An online platform for targeted evaluation of language models", "year": "2020" }, { "authors": "Ari Holtzman; Peter West; Vered Shwartz; Yejin Choi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Surface form competition: Why the highest probability answer isn't always right", "year": "2021" }, { "authors": "Carina Kauf; Anna A Ivanova; Giulia Rambelli; Emmanuele Chersoni; S Jingyuan; Zawad She; Evelina Chowdhury; Alessandro Fedorenko; Lenci", "journal": "", "ref_id": "b6", "title": "Event knowledge in large language models: the gap between the impossible and the unlikely", "year": "2022" }, { "authors": "Philippe Laban; Tobias Schnabel; Paul Bennett; Marti A Hearst", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Keep it simple: Unsupervised simplification of multi-paragraph text", "year": "2021" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b8", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Kanishka Misra", "journal": "", "ref_id": "b9", "title": "minicons: Enabling flexible behavioral and representational analyses of transformer language models", "year": "2022" }, { "authors": "Nikita Nangia; Clara Vania; Rasika Bhalerao; Samuel R Bowman", "journal": "", "ref_id": "b10", "title": "CrowS-pairs: A challenge dataset for measuring social biases in masked language models", "year": "2020" }, { "authors": "Vassil Panayotov; Guoguo Chen; Daniel Povey; Sanjeev Khudanpur", "journal": "IEEE", "ref_id": "b11", "title": "Librispeech: an asr corpus based on public domain audio books", "year": "2015" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b12", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Julian Salazar; Davis Liang; Toan Q Nguyen; Katrin Kirchhoff", "journal": "", "ref_id": "b13", "title": "Masked language model scoring", "year": "2020" }, { "authors": "Joonbo Shin; Yoonhyung Lee; Kyomin Jung", "journal": "", "ref_id": "b14", "title": "Effective sentence scoring method using bert for speech recognition", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Arabella Sinclair; Jaap Jumelet; Willem Zuidema; Raquel Fernández", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b16", "title": "Structural persistence in language models: Priming as a window into abstract language representations", "year": "2022" }, { "authors": "Koustuv Sinha; Robin Jia; Dieuwke Hupkes; Joelle Pineau; Adina Williams; Douwe Kiela", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Masked language modeling and the distributional hypothesis: Order word matters pre-training for little", "year": "2021" }, { "authors": "Alex Wang; Kyunghyun Cho", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "BERT has a mouth, and it must speak: BERT as a Markov random field language model", "year": "2019" }, { "authors": "Alex Warstadt; Alicia Parrish; Haokun Liu; Anhad Mohananey; Wei Peng; Sheng-Fu Wang; Samuel R Bowman", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b19", "title": "BLiMP: The benchmark of linguistic minimal pairs for english", "year": "2020" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b20", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Yian Zhang; Alex Warstadt; Xiaocheng Li; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "When do you need billions of words of pretraining data", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 323.47, 599.04, 201.67, 33.58 ], "formula_id": "formula_0", "formula_text": "PLL orig (S) := n t=1 log P MLM (s t | S \\t )(1)" }, { "formula_coordinates": [ 3, 78.87, 687.24, 210.99, 44.9 ], "formula_id": "formula_1", "formula_text": "PLL ww (S) := |S| w=1 |w| t=1 log P MLM (s wt | S \\sw ) (3)" }, { "formula_coordinates": [ 8, 321.98, 232.75, 203.16, 33.58 ], "formula_id": "formula_2", "formula_text": "PLL sent (S) := n t=1 log P MLM (s t | S <t ) (4)" }, { "formula_coordinates": [ 9, 85.93, 72.42, 436.38, 53.83 ], "formula_id": "formula_3", "formula_text": "A N A . A G R A R G S T R . B I N D I N G C T R L . R A I S . D -N A G R E L L I P S I S F I L L E R G A P I R R E G U L A R I S L A N D N P I Q U A N T I F I E R S S -V A G R BERT (" } ]
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b6", "b8" ], "table_ref": [], "text": "A cleft lip/palate is a medical condition where the lip/palate of a patient does not join completely before birth, which usually occurs in the early stages of pregnancy. In the UK, cleft lips are the most common facial birth defect, with one out of every 700 children suffering from cleft lip and palate every year [1]. This explains the importance of cleft lip and palate surgeries, which are usually performed on orofacial cleft patients at an average age of three months [2]. Although the surgical treatment for cleft lip and palate varies, their common objective is to achieve symmetry and enhance a nasolabial look [3].\nAs cleft lips are pre-born defects, many parents of the patients would find it hard to imagine what the non-cleft faces of their children would be like. To facilitate the understanding, awareness and discussion of cleft lip surgeries, we have worked with the UK's Royal Victoria Infirmary (RVI) to collect a dataset of cleft lips patients, and designed a system that allows the prediction of non-cleft faces from the cleft lip counterparts.\nA core challenge of our software design is to protect the privacy of cleft lip patients. Research has shown that due to the high memory capacity of deep learning models, it is possible to reconstruct original training samples from the network parameters, a scenario known as model leakage [4]. While it may be straightforward to formulate the non-cleft facial-prediction task using a style transfer [5] framework, training such systems requires both cleft and non-cleft facial images, resulting in a risk of model leakage. We present a novel software engineering design by tackling non-cleft facial-prediction with an image in-painting framework. This allows us to train the system using open facial datasets [6] with a tailor-made algorithm to mask out the mouth area, and test the system with cleft lip images. As a result, the model parameters do not store any cleft lip information.\nIn particular, we develop a PyTorch-based software that utilizes a state-of-the-art image inpainting network [7] as the backbone, and develop a multi-task system that predicts both images and the facial landmarks, thereby generating non-cleft faces. The facial landmark task provides geometric information that facilitates the image generation task. Compared to existing work that utilizes a multi-stage framework to first predict landmarks and then predict images [8,7], ours is superior as both tasks are performed at the same time, avoiding any error propagation from the first stage to the second stage.\nThe quality of images produced by our software has been evaluated by NHS surgeons, showcasing its superior performance to alternative designs. It has a fast inference speed and works with color images captured by consumer-level cameras, allowing an effective deployment process. It is open-source, facilitating research and development in this area.\nThe source code presented in this paper has been originally developed to implement the theory proposed in [9], which is accepted in a biomedicine-focused conference. In this paper, we explain the implementation details of this software and its impact in the real world. In particular, we focus on the design concepts of the software architecture and the details of the engineering considerations. This is further supported by a validated version of the source code in the CodeOcean environment." }, { "figure_ref": [], "heading": "System description", "publication_ref": [ "b4", "b4", "b9", "b6", "b6", "b7", "b10", "b7", "b6", "b11", "b5", "b12", "b3" ], "table_ref": [], "text": "To protect the privacy of patients' data, we decide to implement the non-cleft facial image prediction system as an image inpainting framework. One key software engineering decision in this research is the framework we use to implement the solution. Existing style transfer-based frameworks [5] allow effective facial image generation with different features. However, they require training data from both the source (i.e. cleft lip images in our case) and target (i.e. non-cleft lip images) domains, which may lead to model leakage where the trained model memorizes the training images. Conditional image translation frameworks using GAN [5] or VAEs [10] may resolve the issue, but those methods mainly focus on the synthesis of new color patterns instead of geometric structures. Our investigation led us to the image inpainting framework [7] as a suitable solution, as it does not necessitate using cleft facial data for training. Additionally, the binary mask effectively defines the lip area for synthesis with the rest of the face, serving as conditions, making it well-suited to our requirements.\nIn particular, to implement an image inpainting framework, we utilize the image generation network in [7] as the backbone, which is ameliorated from [8], given its good performance in image inpainting. We also re-implemented the gated convolution algorithm proposed in [11] to dynamically select features for each channel and location, resulting in better inpainting quality.\nOn top of the backbone, we implement a multi-task system that predicts both the non-cleft facial image and facial landmarks. Facial landmark has shown to be effective in assisting facial image inpainting [8,7], and is used extensively for cleft lip analysis [12]. Our work differs from existing approaches in that we employ a multi-task model, where two tasks share a part of a common network and facilitate each other.\nTo prepare the training data, we employ an open facial dataset and a tailor-made masking algorithm. In particular, we use the CelebA dataset [6], which consists of 202,599 face images of over 10,000 celebrities. To prepare the data for training our inpainting network, we apply an irregular mask algorithm following [13], such that our network can learn to inpaint any masked regions of the face.\nTo test the system, we work with the NHS to collect a dataset of cleft lip images. Due to the sensitive nature of the data, ethical approvals are obtained from the Research Ethics Committee (REC), the Health Research Authority (HRA), and Health and Care Research Wales (HCRW), under Approval Nos. 19/LO/1690 and under IRAS Project ID: 240451. Given a cleft lip image, we manually draw a mask that covers the mouth area. The masked image is fed into our multi-task network to create the non-cleft facial counterpart, with the facial landmark as a side-product. Since cleft lip images are only used in testing, we mitigate any risk of model leakage [4]." }, { "figure_ref": [ "fig_0" ], "heading": "Network Design and Implementation", "publication_ref": [ "b10", "b13", "b6", "b14" ], "table_ref": [], "text": "Here, we provide details for the design and implementation of our deep neural network, as shown in Figure 1.\nThe encoder is used to encode an image into a feature representation. We develop a gated convolution block that includes a gated convolution layer [11], a normalisation layer and an activation layer (ReLU). A masked image is fed into three gated convolution blocks with decreasing feature sizes from 256 × 256 to 64 × 64. Subsequently, the encoded feature is passed into multiple dilated residual blocks to extend the receptive field of the encoder. At the end of the encoder, we follow [14] to implement an attention mechanism to match the masked and unmasked regions. After a skip connection, the encoder outputs the shared feature map f share , which is practically a concise representation of the image. The feature map is passed to both the image generator and predictor.\nThe image generator is used to predict the non-cleft facial image. Given the shared feature f share , we employ a gated convolution block to implement upsampling. This is followed by two 1 × 1 convolution layers, F 1 and F 2 , for feature fusion. The first one is utilised to fuse the encoder feature with the skip connection, while the second one is responsible for parameter sharing to fuse the landmark indicator. This is followed by another The landmark predictor is used to predict the landmark of the non-cleft facial image. Following [7], the shared feature f share is passed into different 1 × 1 convolution layers followed by global average pooling to extract features of different numbers of channels. The feature with the largest number of channels (i.e. V ) is further passed into a PReLU [15] activation layer. These features are concatenated and fused with the image features to predict the landmark.\nWe develop a parameter-sharing mechanism to share information between the image generator and the landmark predictor. We implement an adaptive feature fusion algorithm, in which the image feature f 1 from the layer F 1 is passed from the image generator to the landmark predictor. This is followed by a fully connected layer to generate the 68 landmark points, L:\nL = F C(γ * f 1 ⊕ f lmk ), (1\n)\nwhere γ is a trainable parameter with zero initialization, ⊕ is element-wise addition, f 1 is obtained by passing f 1 through a global average pooling layer and f lmk is the extracted landmark feature map. The predicted landmark points L are mapped into a 128 × 128 image corresponding to the landmark position. The image is stacked channel-wise to increase its influence (68 times in our setup), and passed from the landmark predictor back to F 2 in the image generator." }, { "figure_ref": [], "heading": "Implementation details", "publication_ref": [], "table_ref": [], "text": "This multi-tasking image inpainting system is programmed in PyTorch. The main packages include numpy 1.15. Sample 4 images and corresponding landmark from dataloader." }, { "figure_ref": [], "heading": "5:", "publication_ref": [], "table_ref": [], "text": "Sample 4 irregular mask from dataloader." }, { "figure_ref": [], "heading": "6:", "publication_ref": [ "b15", "b16", "b5", "b17", "b8", "b18", "b7", "b6" ], "table_ref": [], "text": "x, k, L pixel , L landmark , L tv , L style , L perceptual , L g , L d ← M T θ (X, M, L).\n7:\nL G ← L pixel + L landmark + L tv + L style + L perceptual + L g 8: L D ← L d 9:\nFreeze the θ G in multi-task model, update discriminator with adversarial loss L D .\n10:\nFreeze the θ D in discriminator, update multi-task model with adversarial loss L G .\n11:\nt ← t + 1 12: Save the trained Multi-task model M T θ .\nimage, Face Align Network (FAN) generates 136 values to denote the x-and y-positions of the 68 landmark points.\nTo train the network, a GAN-based [16] training flow is employed as shown in Algorithm 1. We first apply FAN [17], a Face Align Network, to generate the ground truth landmark points from CelebA [6], following the default training and testing split of the dataset. We then apply Optuna [18] for hyperparameter tuning. Specifically, we fix all hyperparameters except the weight of landmark loss, and train our model with one epoch using Optuna to sample such weights. With the same method, we also tune other hyperparameters, such as the learning rate, the decay weight of the learning rate and batch size.\nThe collected dataset is used for inference. From both quantitative and qualitative results, our system generates semantically plausible non-cleft facial images [9]. The results are further evaluated by cleft lip surgeons, showcasing that our proposed network generates better images than state-of-the-arts [19,8,7].\nThe run-time cost of the proposed system is very low. Using our real-world cleft lip data, the inference step is implemented using an NVIDIA GeForce GTX 970 on a laptop, with an inference time of 200ms for a single image. This means that our trained system does not require a particularly powerful computing system to perform the inference, and a standard workstation or laptop computer can use our system. For training the network, one NVIDIA TITAN Xp is used for four days, which is typical in deep learning applications of a similar scale." }, { "figure_ref": [], "heading": "How to use", "publication_ref": [ "b5", "b12", "b16" ], "table_ref": [], "text": "To retrieve the training dataset for this image inpainting application, users are required to download the CelebA Dataset [6] and the irregular mask dataset [13] from the respective official websites. The CelebA dataset should then be divided into a standard training set and a validation set, according to the official instruction. Additionally, the corresponding landmark points should be generated with FAN [17]. Furthermore, the irregular mask dataset should be divided into three groups according to the mask ratios (0-20%, 20-40%, 40-60%). 3,300 masks are randomly selected from each group, resulting in a total of 9,900 mask images for training. Another 200 masks are selected from each group, resulting in a total of 600 mask images for verification. For the inference step, all cleft facial images and their corresponding masks serve as the image test set and the mask test set, respectively. The user should then run the provided \"./scripts/filst.py\" script to generate training, test and validation set file lists, and update the information in the \"config.yml\" file accordingly to set the model configuration. Once the python environment has been set up using the released \"requirements.txt\" file, the user may proceed to run the \"train.py\" script for training and the \"test.py\" script for testing. For the inference process, although we recommend using our system with GPUs for better speed, the system is fully runnable with only CPUs. Due to the sensitivity of patient privacy, we are not allowed to upload the cleft lip data for an online demonstration. Therefore, we show the reproducibility of our system with the images from CelebA and the irregular masks." }, { "figure_ref": [], "heading": "Impact overview", "publication_ref": [ "b19", "b20", "b21", "b22", "b3", "b23", "b24" ], "table_ref": [], "text": "While our method primarily focuses on cleft lips, the uses of the implemented source code can be extended to other applications. The key idea of this software is to mask out a particular region of a face, and to employ inpainting techniques for predicting the masked area. The versatility of our system allows for the implementation of extended facial applications, such as makeup and plastic surgery prediction. To utilize these capabilities, a customized dataset is required for training, such as the Facial Beauty Database [20] or a plastic surgery facial dataset [21]. The users then need to retrain our model according to section 2.3. The resulting model can then be tested using a corresponding mask that covers specific facial components, such as nose or eyebrows, to generate the image of the subject after makeup or plastic surgery. Therefore, it can also be used for supporting plastic surgeries and makeup prediction [22] on specific facial components. This would facilitate the understanding and discussion of those operations and applications among stakeholders.\nWe put a particular effort in selecting a software framework that is robust against model leakage and attack [23,4]. In particular, we propose the idea of excluding patient data in training deep learning models if possible, mitigating any privacy concerns and risk of data loss. The high-level concept of training with open data and testing with sensitive data can be employed in other machine learning applications to protect data privacy, particularly those in the healthcare domain or involving people of vulnerable groups.\nIn theory, our system is also capable of synthesising cleft facial images from non-clelf lip ones. In practice, due to the wide variety of cleft lip conditions, training such a system would require a large dataset of cleft images, which is currently not available. Should there be enough data (and we only need the lip area to protect patients' privacy), this system can be used to generate synthetic cleft lip facial images, which enable the training of machine learning algorithms. As the data is artificially created, there is no privacy or model leakage concern, and an unlimited amount of samples can be created. This aligns with the recent trend of using computer graphics techniques to mock up real-world data [24], facilitating the training of machine learning systems for patient-related applications [25]. Since the beginning of this research, there is raising awareness from both UK universities and hospitals in collecting cleft lip data for research purposes. We believe our vision will be made possible in the future." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work implements a multi-task image inpainting model to predict non-cleft lip facial images from cleft lip ones. We make an important software engineering decision to implement the system under an inpainting framework, which does not require patient data for training and mitigates model leakage risks. We design and develop a multi-task neural network that co-predicts a facial image and the corresponding facial landmarks, and we find that the two tasks support each other. Apart from detailing the design and implementation details of our software, we also discuss its impact within and beyond cleft lip applications. The source code is now publicly released on CodeOcean and Github." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Please fill in this column C1 Current code version v1 C2 Permanent link to code/repository used for this code version https://github.com/ChrisChen1023/INCLG C3 Permanent link to Reproducible Capsule https://codeocean.com/capsule/4388343/tree/v1" }, { "figure_ref": [], "heading": "Declaration of competing interest", "publication_ref": [], "table_ref": [], "text": "The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper." } ]
We present a software that predicts non-cleft facial images for patients with cleft lip, thereby facilitating the understanding, awareness and discussion of cleft lip surgeries. To protect patients' privacy, we design a software framework using image inpainting, which does not require cleft lip images for training, thereby mitigating the risk of model leakage. We implement a novel multi-task architecture that predicts both the non-cleft facial image and facial landmarks, resulting in better performance as evaluated by surgeons. The software is implemented with PyTorch and is usable with consumer-level color images with a fast prediction speed, enabling effective deployment.
INCLG: Inpainting for Non-Cleft Lip Generation with a Multi-Task Image Processing Network
[ { "figure_caption": "Figure 1 :1Figure 1: The overview of proposed multi-task architecture.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" } ]
Shuang Chen; Durham; Edmond S L Ho; Hubert P H Shum
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Cleft lip and palate", "year": "2019" }, { "authors": "W Wellens; V Poorten", "journal": "B ENT", "ref_id": "b1", "title": "Keys to a successful cleft lip and palate team", "year": "2006" }, { "authors": "D G Mosmuller; L M Mennes; C Prahl; G J Kramer; M A Disse; G M Van Couwelaar; B N Frank; J Don Griot", "journal": "The Cleft Palate-Craniofacial Journal", "ref_id": "b2", "title": "The development of the cleft aesthetic rating scale: a new rating scale for the assessment of nasolabial appearance in complete unilateral cleft lip and palate patients", "year": "2017" }, { "authors": "L Zhu; Z Liu; S Han", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b3", "title": "Deep leakage from gradients", "year": "2019" }, { "authors": "J.-Y Zhu; T Park; P Isola; A A Efros", "journal": "", "ref_id": "b4", "title": "Unpaired image-to-image translation using cycle-consistent adversarial networks", "year": "2017" }, { "authors": "Z Liu; P Luo; X Wang; X Tang", "journal": "", "ref_id": "b5", "title": "Deep learning face attributes in the wild", "year": "2015" }, { "authors": "Y Yang; X Guo; J Ma; L Ma; H Ling", "journal": "", "ref_id": "b6", "title": "Lafin: Generative landmark guided face inpainting", "year": "2019" }, { "authors": "K Nazeri; E Ng; T Joseph; F Qureshi; M Ebrahimi", "journal": "", "ref_id": "b7", "title": "Edgeconnect: Structure guided image inpainting using edge prediction", "year": "2019-10" }, { "authors": "S Chen; A Atapour-Abarghouei; J Kerby; E S L Ho; D C G Sainsbury; S Butterworth; H P H Shum", "journal": "IEEE", "ref_id": "b8", "title": "A feasibility study on image inpainting for non-cleft lip generation from patients with cleft lip", "year": "2022" }, { "authors": "N Nozawa; H P H Shum; Q Feng; E S L Ho; S Morishima", "journal": "Visual Computer", "ref_id": "b9", "title": "3d car shape reconstruction from a contour sketch using gan and lazy learning", "year": "2022" }, { "authors": "J Yu; Z Lin; J Yang; X Shen; X Lu; T S Huang", "journal": "", "ref_id": "b10", "title": "Free-form image inpainting with gated convolution", "year": "2019" }, { "authors": "Y Li; J Cheng; H Mei; H Ma; Z Chen; Y Li", "journal": "IEEE", "ref_id": "b11", "title": "Clpnet: cleft lip and palate surgery support with deep learning", "year": "2019" }, { "authors": "G Liu; F A Reda; K J Shih; T.-C Wang; A Tao; B Catanzaro", "journal": "", "ref_id": "b12", "title": "Image inpainting for irregular holes using partial convolutions", "year": "2018" }, { "authors": "C Zheng; T.-J Cham; J Cai", "journal": "", "ref_id": "b13", "title": "Pluralistic image completion", "year": "2019" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b14", "title": "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification", "year": "2015" }, { "authors": "J Goodfellow Ian; M Pouget-Abadie; B Mirza; D Xu; S Warde-Farley; A Ozair; Y Courville; Bengio ", "journal": "", "ref_id": "b15", "title": "Model inversion attacks that exploit confidence information and basic countermeasures", "year": "2014" }, { "authors": "A Bulat; G Tzimiropoulos", "journal": "", "ref_id": "b16", "title": "How far are we from solving the 2d & 3d face alignment problem? (and a dataset of 230,000 3d facial landmarks)", "year": "2017" }, { "authors": "T Akiba; S Sano; T Yanase; T Ohta; M Koyama", "journal": "", "ref_id": "b17", "title": "Optuna: A next-generation hyperparameter optimization framework", "year": "2019" }, { "authors": "X Guo; H Yang; D Huang", "journal": "", "ref_id": "b18", "title": "Image inpainting via conditional texture and structure dual generation", "year": "2021" }, { "authors": "L Zhang; H P Shum; L Liu; G Guo; L Shao", "journal": "Neurocomputing", "ref_id": "b19", "title": "Multiview discriminative marginal metric learning for makeup face verification", "year": "2019" }, { "authors": "C Rathgeb; D Dogan; F Stockhardt; M De Marsico; C Busch", "journal": "", "ref_id": "b20", "title": "Plastic surgery: An obstacle for deep face recognition?", "year": "2020" }, { "authors": "D Organisciak; E S L Ho; H P H Shum", "journal": "", "ref_id": "b21", "title": "Makeup style transfer on low-quality images with weighted multi-scale attention", "year": "2020-01" }, { "authors": "F Tramèr; F Zhang; A Juels; M K Reiter; T Ristenpart", "journal": "", "ref_id": "b22", "title": "Stealing machine learning models via prediction {APIs}", "year": "2016" }, { "authors": "N Hesse; S Pujades; J Romero; M J Black; C Bodensteiner; M Arens; U G Hofmann; U Tacke; M Hadders-Algra; R Weinberger; W Muller-Felber; A S Schroeder", "journal": "", "ref_id": "b23", "title": "Learning an infant body model from RGB-D data for accurate full body motion analysis", "year": "2018-09" }, { "authors": "H Zhang; H P H Shum; E S L Ho", "journal": "IEEE", "ref_id": "b24", "title": "Cerebral palsy prediction with frequency attention informed graph convolutional networks", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 251.72, 573.38, 313.11, 13.6 ], "formula_id": "formula_0", "formula_text": "L = F C(γ * f 1 ⊕ f lmk ), (1" }, { "formula_coordinates": [ 4, 564.83, 576.13, 4.65, 9.63 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 48.44, 183.51, 294.48, 36.36 ], "formula_id": "formula_2", "formula_text": "L G ← L pixel + L landmark + L tv + L style + L perceptual + L g 8: L D ← L d 9:" }, { "formula_coordinates": [ 5, 43.84, 237.71, 210.46, 25.29 ], "formula_id": "formula_3", "formula_text": "t ← t + 1 12: Save the trained Multi-task model M T θ ." } ]
[ { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7" ], "table_ref": [], "text": "Face detection is an important task of computer vision and has been widely studied in the past decades. Nowadays, many emerging applications, such as security surveillance and identity authentication, hinge on face detection. As a special kind of object detection, the progress in face detection benefits from the developments in general object detection. The idea of object detection is to build a model with some fixed set of classes we are interested in. When an object belonging to a class appears in the input image, the bounding box is drawn around that object along with predicting its class label. The traditional stage was around 2000. Most of the methods proposed during this period were based on sliding windows and artificial feature extraction, which had the defects of high computational complexity and poor robustness in complex scenarios. Representative achievements include Viola-Jones detector [1] and HOG pedestrian detector [2]. The second stage is from 2014 to the present, starting with the R-CNN [3] proposed in 2014. These algorithms use Convolutional Neural Network (CNN) [4] to automatically extract hidden features in input images and classify and predict samples with higher accuracy. After R-CNN, there are many object detection methods based on CNN such as Fast R-CNN [5], Faster R-CNN [6], SSD [7], and YOLO series [8] In the test phase, CNN-based detection models output a large number of candidate bounding-boxes which contain a lot of redundancy. The CNN model also gives each box a score indicating the confidence that it surrounds an object correctly. Non-maximum suppression (NMS) is a commonly used post-processing method for discarding redundant predicted bounding-boxes. NMS is an iterative method to preserve local maximum and remove local non-maximum. In NMS, the candidate boxes are arranged in a list by sorting their scores in descending order. Then the box with the highest score is picked for calculating the Intersection over Union (IoU) values between it and all the other boxes. If an IoU value is larger than the pre-set threshold, the corresponding box with lower scores is deleted from the list. The picked box is also removed from the list and saved as a final box. The above process is repeated for the remaining list until the list is empty. As shown in Fig. 1, the Green box will definitely be preserved because it has the highest score. According to the above process of NMS, the Yellow box will also be preserved because the IoU between the Green box and Yellow box is less than the threshold and the Red box has been deleted before calculating the IoU between it and the Yellow box.\nThe disadvantage of NMS is obvious as shown in Fig 1 and this situation is common in practical applications. Therefore, in this paper, we propose Inverted NMS to eliminate such shortcomings. Instead of arranging the candidate boxes by sorting their scores in descending order, we arrange a candidate box list in ascending order. Then we pick the box with the lowest score and calculate the IoU values between it and all the other boxes. If one of the IoU values is larger than the threshold, we delete the picked box and then repeat the progress above. Finally, the rest boxes in the list are the results of our Inverted NMS. As shown in Fig 1, according to our Inverted NMS, the Yellow box is deleted first because the IoU value between it and the Red box is larger than the threshold. Then the Red box is deleted because the IoU value between it and the Green box is larger than the threshold. It is obvious that our method can achieve neater results and the experiment section demonstrates that our method can improve the performance of detection on hard and tiny face samples." }, { "figure_ref": [], "heading": "II. RELATED WORK", "publication_ref": [ "b10", "b11", "b5", "b12", "b13", "b14", "b15" ], "table_ref": [], "text": "NMS has very important applications in the field of computer vision. In edge detection [11], after calculating the gradient value and gradient direction, the amplitude value is suppressed along the gradient direction by non-maximum value, and the irrelevant points that do not constitute an edge are removed, so that the possibility of it being an edge is excluded. In face detection, Cascade CNN [12] uses Non-Maximum Suppression (NMS) to merge highly overlapping detection windows, and the remaining candidate detection windows will be normalized to 24×24 as the input of 24net, which will further eliminate the remaining nearly 90% detection windows. In object detection, Faster R-CNN [6] uses NMS in the proposal stage, the purpose is to remove the proposals that predict the same area with more serious overlap, keeping only the proposals with higher confidence. In the test phase of R-CNN, NMS is used for removing the low scored boxes that are overlapped with high score boxes.\nNMS has a potential disadvantage of manually set threshold. Several alternatives have been considered. Some improved NMS methods are based on learning method. For instance, ConvNMS [13] is used to solve the difficult problem of NMS setting in the threshold. If the IoU threshold is set too high, the suppression may not be sufficient, some unnecessary predicted bounding-boxes may still be kept. If the IoU threshold is set too low, multiple true positives may be merged together. Con-vNMS designs a convolutional network to combine the NMS results with different overlap thresholds and obtains the best output through the learning method. However, retraining and parameter tuning should be required in order to be effective in different scenarios. For the special application scenario of pedestrian detection in crowd, adaptive-NMS [14] applies a dynamic suppression strategy, the suppression threshold in the instance is dynamically changed according to the target density, so that in densely crowded places, the NMS threshold is larger to obtain higher Recall, and where the crowd is sparse, NMS chooses a small threshold to eliminate more redundant boxes.\nSome improved approaches for NMS include non-training procedures to progressively remove redundant boundingboxes. Soft-NMS [15] is a generalization of Traditional NMS, which is mainly aimed at alleviating the violent elimination of Traditional NMS. Soft-NMS introduces a re-scoring function, If the IoU is larger, the impact on score Si will be greater and Si will be smaller. In this way, the value of Si of each Box is updated, and the remaining Si, which is greater than a confidence threshold value, is retained to filter out candidate boxes. The Soft-NMS algorithm has improved on the standard datasets PASCAL VOC2007 (1.7% higher than R-FCN and Faster-RCNN) and MS-COCO (1.3% higher than R-FCN, 1.1% higher than Faster-RCNN). This iterative procedure is friendly to two-stage methods, but it may fail in some singlestage methods.\nIn Weighted NMS [16], the authors propose that the maximum score box selected by traditional NMS in each iteration may not be precisely positioned, and redundant boxes may also be well positioned. Weighted NMS is different from the direct culling mechanism, as its name implies, it is a weighted average of coordinates, and the objects of weighted average include instance in box set itself and adjacent boxes with IoU greater than NMS threshold. Weighted NMS usually achieves higher Precision and Recall, although the computational efficiency is lower than traditional NMS." }, { "figure_ref": [ "fig_1" ], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "Input: B = {b 1 , b 2 , ..., b n }, S = {s 1 , s 2 , ..., s n }, N t B is a set of predicted bounding boxes S is the corresponding predicted scores of B s 1 ≤ s 2 ≤ ... ≤ s n N t is the NMS threshold Output: B D ← {} for i = 1; i ≤ n -1; i + + do for j = i + 1; j ≤ n; j + + do if IoU (b i , b j ) ≥ N t then D ← b i ; Break; end end end B ← B -D Algorithm 1: Inverted NMS\nFor one image, a CNN-based detection method usually outputs a large number of candidate bounding-boxes and each bounding-box has a score indicating the confidence that it contains a face correctly. In common, as shown in Fig. 1, a face may correspond to many bounding-boxes. Among them, some bounding-boxes are good while some bounding-boxes are bad. To remove the bad ones, we first arrange the candidate bounding-boxes by sorting their scores in ascending order. Then from top to bottom, we select boxes one by one and calculate the IoU values between the selected box and the boxes below it. If the IoU between the selected box and one of the boxes below is larger than a threshold, we delete the selected box. The detailed process is described in Algorithm 1.\nOur method relies heavily on the calculation of IoU. We describe the detailed calculation process below. Set the coordinates of two bounding boxes as b 1 (x 1 , y 1 , x 2 , y 2 ) and b 2 (x 1 , y 1 , x 2 , y 2 ), where (x 1 , y 1 ) and (x 1 , y 1 ) are the upperleft corners and (x 2 , y 2 ) and (x 2 , y 2 ) are the lower-right corners. The area a 1 of b 1 and the area a 2 of b 2 can be obtained by\na 1 = (x 2 -x 1 ) × (y 2 -y 1 ), a 2 = (x 2 -x 1 ) × (y 2 -y 1 ).\n(\n)1\nThe intersecting area of the two boxes can be obtained by\na inter = max{0, [min(x 2 , x 2 ) -max(x 1 , x 1 )]} × max{0, [min(y 2 , y 2 ) -max(y 1 , y 1 )]}(2)\nThe IoU value is\nIoU (b 1 , b 2 ) = a inter a 1 + a 2 -a inter .(3)" }, { "figure_ref": [], "heading": "IV. EXPERIMENTS A. Setup", "publication_ref": [ "b8", "b9", "b16", "b17", "b18", "b19", "b20", "b2", "b15", "b14" ], "table_ref": [], "text": "We select five state-of-the-art object/face detection methods, YOLOv3 [9], YOLOv5 [10], DSFD [17], PyramidBox [18] and EXTD [19], as our face detectors. All the detectors are trained on the WIDER FACE [20] dataset by PyTorch [21]. WIDER FACE contains a large number of faces with a high degree of variability in scale, pose, and occlusion. The validation set of WIDER FACE are split into three subsets, easy, medium and hard, which contains 7,211, 13,319 and 31,958 faces, respectively. We compare our Inverted NMS with the original NMS which is described in R-CNN [3], Weighted NMS [16] and Soft NMS [15] to demonstrate the effectiveness of our method.\nIn NMS, the threshold used to determine whether a box should be removed typically varies between 0.3 and 0.7 in order to obtain the best results. In our experiments, we try each threshold for each NMS method to obtain the best performance. As a result, the threshold for soft NMS should be 0.3 and the threshold for the other methods should be 0.6." }, { "figure_ref": [ "fig_2" ], "heading": "B. Results", "publication_ref": [], "table_ref": [ "tab_0", "tab_1" ], "text": "Table I shows the detection results of different detectors that combine different NMS methods. Our Inverted NMS can improve the performance of detectors on the hard subset of we can find that our method is effective for detecting tiny faces. As shown in Table II, the detection performance of YOLOv5 with our Inverted NMS on tiny faces with side lengths less than 16 is significantly improved. Fig. 2 visualizes detection results of three face images. Compared with other NMS methods, our method has a good filtering performance on multiple boxes at some face clusters." }, { "figure_ref": [], "heading": "C. Complexity Analysis", "publication_ref": [], "table_ref": [], "text": "In the original NMS, after completing a traversal, it is possible that multiple boxes will be deleted, which reduces the number of comparisons for the next traversal and the number of traversals. In our method, we delete at most one box per traversal, which means that our method will consume more time than the original method. However, the time consumption of our method is still milliseconds, which is negligible compared to the time consumption of the object detection network." }, { "figure_ref": [], "heading": "V. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose an Inverted NMS to eliminate the redundant predicted bounding-boxes surrounding hard face samples. Our method deletes the bad bounding-boxes by comparing the IoU start from the box with the lowest score while the other NMS methods start from the box with the highest score. The experiments demonstrate that our method is more effective than the others for detecting hard and tiny face samples." } ]
CNN-based face detection methods have achieved significant progress in recent years. In addition to the strong representation ability of CNN, post-processing methods are also very important for the performance of face detection. In general, the face detection method predicts several candidate boundingboxes for one face. NMS is used to filter out inaccurate candidate boxes to get the most accurate box. The principle of NMS is to select the box with a higher score as the basic box and then delete the box which has a large overlapping area with the basic box but has a lower score. However, the current NMS method and its improved versions do not perform well when face image quality is poor or faces are in a cluster. In these situations, even after NMS filtering, there is often a face corresponding to multiple predicted boxes. To reduce this kind of negative result, in this paper, we propose a new NMS method that operates in the reverse order of other NMS methods. Our method performs well on low-quality and tiny face samples. Experiments demonstrate that our method is effective as a post-processor for different face detection methods. The source code has been released on https://github.com/.
Inverted Non-maximum Suppression for more Accurate and Neater Face Detection
[ { "figure_caption": "[9] [10]. Compared with the traditional object detection methods, the object detection methods based on CNN have the characteristics of high speed, strong accuracy, and high robustness.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 1 .1Fig. 1. Three bounding-boxes, Green(G), Red(R) and Yellow(Y), are produced by a face detection method. The scores for G, R and Y are 0.9, 0.8 and 0.7, respectively. Post-processing by our Inverted NMS can get a better and neater result.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Visualized results of different NMS methods. The reults of our method is neater.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "RESULTS ON WIDER FACE VAL SET.", "figure_data": "DetectorNMS MethodAverage PrecisionEasyMediumHardNMS0.9620.9610.907Weighted NMS0.9620.9610.906YOLOv5Soft NMS-L0.9620.9610.905Soft NMS-G0.9610.9590.901Inverted NMS0.9620.9610.924NMS0.9640.9560.894Weighted NMS0.9640.9560.893YOLOv3Soft NMS-L0.9640.9550.892Soft NMS-G0.9630.9540.888Inverted NMS0.9640.9560.911NMS0.9490.9350.847Weighted NMS0.9490.9350.847DSFDSoft NMS-L0.9490.9350.849Soft NMS-G0.9500.9360.844Inverted NMS0.9500.9370.856NMS0.9180.9050.828Weighted NMS0.9170.9040.825EXTDSoft NMS-L0.9200.9050.784Soft NMS-G0.9200.9040.782Inverted NMS0.9180.9050.832NMS0.9460.9340.853Weighted NMS0.9480.9360.851PyramidBoxSoft NMS-L0.9480.9370.854Soft NMS-G0.9470.9360.846Inverted NMS0.9480.9360.859", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "OF YOLOV5 ON FACES OF DIFFERENT SIZES (WIDER FACE VAL SET).", "figure_data": "Average PrecisionLonger Side of GT≤16(16, 64](64, 256]>256Number of GT16844177934482586NMS0.6100.9260.9610.672Weighted NMS0.6090.9250.9610.670Soft NMS-L0.6100.9240.9610.672Soft NMS-G0.6070.9220.9610.674Inverted NMS0.6860.9270.9610.672WIDER FACE validation set. Especially for YOLOv3 andYOLOv5, our method largely improves the performance of", "figure_id": "tab_1", "figure_label": "II", "figure_type": "table" } ]
Lian Liu; Liguo Zhou
[ { "authors": "P Viola; M Jones", "journal": "International journal of computer vision", "ref_id": "b0", "title": "Robust real-time object detection", "year": "2001" }, { "authors": "N Dalal; B Triggs", "journal": "Ieee", "ref_id": "b1", "title": "Histograms of oriented gradients for human detection", "year": "2005" }, { "authors": "R Girshick; J Donahue; T Darrell; J Malik", "journal": "", "ref_id": "b2", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2014" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b3", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "R Girshick", "journal": "", "ref_id": "b4", "title": "Fast r-cnn", "year": "2015" }, { "authors": "S Ren; K He; R Girshick; J Sun", "journal": "Advances in neural information processing systems", "ref_id": "b5", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "W Liu; D Anguelov; D Erhan; C Szegedy; S Reed; C.-Y Fu; A C Berg", "journal": "Springer", "ref_id": "b6", "title": "Ssd: Single shot multibox detector", "year": "2016" }, { "authors": "J Redmon; S Divvala; R Girshick; A Farhadi", "journal": "", "ref_id": "b7", "title": "You only look once: Unified, real-time object detection", "year": "2016" }, { "authors": "J Redmon; A Farhadi", "journal": "", "ref_id": "b8", "title": "Yolov3: An incremental improvement", "year": "2018" }, { "authors": "G Jocher", "journal": "", "ref_id": "b9", "title": "ultralytics/yolov5", "year": "2020-10" }, { "authors": "J Canny", "journal": "IEEE Transactions on pattern analysis and machine intelligence", "ref_id": "b10", "title": "A computational approach to edge detection", "year": "1986" }, { "authors": "H Li; Z Lin; X Shen; J Brandt; G Hua", "journal": "", "ref_id": "b11", "title": "A convolutional neural network cascade for face detection", "year": "2015" }, { "authors": "J Hosang; R Benenson; B Schiele", "journal": "Springer", "ref_id": "b12", "title": "A convnet for non-maximum suppression", "year": "2016" }, { "authors": "S Liu; D Huang; Y Wang", "journal": "", "ref_id": "b13", "title": "Adaptive nms: Refining pedestrian detection in a crowd", "year": "2019" }, { "authors": "N Bodla; B Singh; R Chellappa; L S Davis", "journal": "", "ref_id": "b14", "title": "Soft-nms-improving object detection with one line of code", "year": "2017" }, { "authors": "C Ning; H Zhou; Y Song; J Tang", "journal": "IEEE", "ref_id": "b15", "title": "Inception single shot multibox detector for object detection", "year": "2017" }, { "authors": "J Li; Y Wang; C Wang; Y Tai; J Qian; J Yang; C Wang; J Li; F Huang", "journal": "", "ref_id": "b16", "title": "Dsfd: dual shot face detector", "year": "2019" }, { "authors": "X Tang; D K Du; Z He; J Liu", "journal": "", "ref_id": "b17", "title": "Pyramidbox: A context-assisted single shot face detector", "year": "2018" }, { "authors": "Y Yoo; D Han; S Yun", "journal": "", "ref_id": "b18", "title": "Extd: Extremely tiny face detector via iterative filter reuse", "year": "2019" }, { "authors": "S Yang; P Luo; C.-C Loy; X Tang", "journal": "", "ref_id": "b19", "title": "Wider face: A face detection benchmark", "year": "2016" }, { "authors": "A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga", "journal": "", "ref_id": "b20", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" } ]
[ { "formula_coordinates": [ 2, 321.94, 521.16, 217.04, 192.42 ], "formula_id": "formula_0", "formula_text": "Input: B = {b 1 , b 2 , ..., b n }, S = {s 1 , s 2 , ..., s n }, N t B is a set of predicted bounding boxes S is the corresponding predicted scores of B s 1 ≤ s 2 ≤ ... ≤ s n N t is the NMS threshold Output: B D ← {} for i = 1; i ≤ n -1; i + + do for j = i + 1; j ≤ n; j + + do if IoU (b i , b j ) ≥ N t then D ← b i ; Break; end end end B ← B -D Algorithm 1: Inverted NMS" }, { "formula_coordinates": [ 3, 110.48, 296.12, 128.03, 24.6 ], "formula_id": "formula_1", "formula_text": "a 1 = (x 2 -x 1 ) × (y 2 -y 1 ), a 2 = (x 2 -x 1 ) × (y 2 -y 1 )." }, { "formula_coordinates": [ 3, 292.28, 304.01, 7.74, 8.64 ], "formula_id": "formula_2", "formula_text": ")1" }, { "formula_coordinates": [ 3, 75.91, 346.32, 224.11, 27.08 ], "formula_id": "formula_3", "formula_text": "a inter = max{0, [min(x 2 , x 2 ) -max(x 1 , x 1 )]} × max{0, [min(y 2 , y 2 ) -max(y 1 , y 1 )]}(2)" }, { "formula_coordinates": [ 3, 107.48, 391.2, 192.54, 23.22 ], "formula_id": "formula_4", "formula_text": "IoU (b 1 , b 2 ) = a inter a 1 + a 2 -a inter .(3)" } ]
10.48550/ARXIV.1810.04805
2023-05-17
[ { "figure_ref": [ "fig_2" ], "heading": "Introduction", "publication_ref": [ "b6", "b19", "b15", "b10", "b21", "b24", "b4", "b16", "b11", "b18", "b22", "b24", "b25", "b0", "b2", "b1", "b20", "b26" ], "table_ref": [], "text": "Cosine similarity is arguably the most popular word similarity measure used in numerous natural language processing (NLP) tasks, such as question answering (QA), information retrieval (IR) and machine translation (MT) (Echizen-ya et al., 2019;Oniani and Wang, 2020;Kim et al., 2022;Hanifi et al., 2022). First, a word is represented by a vector (aka embedding) and then the similarity between two words is computed as the cosine of the angle between the corresponding vectors (Rahutomo et al., 2012). Despite the good performance of cosine similarity as a similarity measure in various downstream tasks, Zhou et al. (2022) showed that it systematically underestimates the true similarity between highly frequent words, when computed using contextualised word embeddings obtained from MLMs such as BERT (Devlin et al., 2018).\nCompared to the problem of estimating similarity between highly frequent words, the opposite problem of estimating the similarity between (or involving) rare (low frequency) words has received greater attention, especially in the scope of static word embeddings (Levy and Goldberg, 2014;Hellrich and Hahn, 2016;Mimno and Thompson, 2017;Wendlandt et al., 2018). If a word is rare in a corpus, we might not have a sufficiently large number of contexts containing that word to learn an accurate embedding for it. This often leads to unreliable similarity estimations between words and has undesirable implications in downstream tasks such as the detection of analogies and social biases (Ethayarajh et al., 2019a,b).\nOn the other hand, Zhou et al. (2022) studied the impact of frequency on contextualised word embeddings and showed that the cosine similarity between highly frequent words are systematically underestimated. Unlike in the previously discussed low frequency word scenario, we do have adequate contexts to learn an accurate semantic representation for highly frequent words. Therefore, it might appear surprising at first that cosine similarity cannot be correctly estimated even for the highly frequent words. Zhou et al. (2021) show that the diversity (measured by the volume of the bounding hypersphere) of the contextualised embeddings of a target word, computed from multiple contexts containing the word, increases with the frequency of that word. They provide an explanation that holds true only for 2-dimensional embeddings, which relates diversity to the underestimation of cosine similarity. Unfortunately, this explanation does not extend to the high dimensional embeddings used in practice by the NLP community (e.g. BERT token embeddings are typically more than 768 di-Figure 1: Cosine similarity between two instances of the same word w in two contexts in the WiC train dataset. When the log-frequency of w in the corpus increases, cosine similarities computed for both contexts that express the same meaning of w as well as its different meanings decreases. mensional). More importantly, to the best of our knowledge, no solution has been proposed in the literature to address the cosine similarity underestimation problem associated with the highly frequent words.\nIn prior work, the 2 norm of a static word embedding has been shown to linearly correlate with the log-frequency of that word (Arora et al., 2016;Bollegala et al., 2018). On the other hand, we empirically study the 2 norm of the contextualised embedding of a word w averaged over all of its contexts, and find that it too approximately linearly correlates with the log-frequency of w in the corpus used to pretrain the MLM. Recall that the cosine similarity is defined as the inner-product between two embeddings, divided by the 2 norm of those embeddings. Therefore, we suspect that the underestimation of cosine similarity between highly frequent words is due to the larger 2 norms associated with those words.\nTo correct for this bias associated with the 2 norms of highly frequent words, we propose a linearly parameterised discounting scheme in the logfrequency space. Specifically, we use Monte-Carlo Bayesian Optimisation (Balandat et al., 2019) to find the optimal discounting parameters. Our proposed discounting method is shown to accurately correct the underestimation of cosine similarities between highly frequent words on the Word-in-Context (WiC) (Pilehvar and Camacho-Collados, 2019) dataset where human similarity ratings are available for the same word in two different con-texts. Source code for reproducing the experiments reported in this is paper is publicly available. We approximate the word frequencies in BERT pretraining corpus using the BookCorpus (Zhu et al., 2015). Let ψ w be the frequency of w in this corpus.\nWe use the WiC dataset, which contains 5428 pairs of words appearing in various contexts with annotated human similarity judgements. WiC dataset is split into official training and development sets, while a separate hidden test set is used by the leaderboard for ranking Word Sense Disambiguation systems.\n3 WiC dataset contains pairs of contexts labelled as having the same meaning (e.g. \"to drive sheep out of a field\" vs. \"to drive the cows into the barn\") and different meaning (e.g. \"the play lasted two hours\" vs. \"they made a futile play for power\").\nWe compute the cosine similarity between the two contextualised embeddings of a target word in two of its contexts to predict a similarity score. Figure 1 shows the predicted similarity scores for both contexts in which a target word has been used in the same or different meanings for all words in the WiC dataset against log(ψ w ). As seen from Figure 3, ψ w has a power-law distribution. Therefore, we plot its log instead of raw frequency counts in Figure 1.\nFrom Figure 1, we see that for both same as well as different meaning contexts, the predicted cosine similarities drop with the word frequencies. Moreover, the gradient of the drop for same meaning pairs (Pearson's r = -0.3001) is larger than that for the different meaning pairs (r = -0.2125), indicating that the underestimation of cosine similarity is more sever for the similar contexts of highly frequent words." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "2 norm Discounting", "publication_ref": [], "table_ref": [], "text": "To understand the possible reasons behind the cosine similarity underestimation for highly frequent words discussed in § 2, for each word w we compute its mean sibling embedding, ŵ, given by (1).\nŵ = 1 |S(w)| c∈S(w) f (w, c)(1)\nWe plot || ŵ|| against log(ψ(w)) in Figure 2 separately for a predefined set of stop words and all other words (i.e. non-stop words). For this purpose, we use the default 1466 stop words from NLTK and randomly selected 997,425 non-stop words from the BookCorpus. Pearson r values of stop words and non-stop words are respectively 0.1697 and 0.3754, while the lines of best fits for each class of words are superimposed. From Figure 2, we see that overall, || ŵ|| increases with log(ψ w ) for both stop and non-stop words, while the linear correlation is stronger in the latter class. Considering that stop words cover function words such as determiners and conjunctions that co-occur with a large number of words in diverse contexts, we believe that the 2 norm of stop words mostly remains independent of their frequency. Recall that the cosine similarity between two words is defined as the fraction of the inner-product of the corresponding embeddings, divided by the product of the 2 norms of the embeddings. Therefore, even if the inner-product between two words remain relatively stable, it will be divided by increasingly larger 2 norms in the case of highly frequent words. Moreover, this bias is further amplified when both words are high frequent due to the product of 2 norms in the denominator.\nTo address this problem, we propose to discount the 2 norm of a word w by a discounting term, α(ψ w ), and propose a discounted version of the cosine similarity given by (2).\ncos α (x, y) = x ⊤ y ||x|| α(ψ x ) ||y|| α(ψ y ) (2)\nFollowing Figure 2, we linearly parameterise α(ψ w ) separately for stop vs. non-stop words as in (3).\nα(ψ w ) = 1 + m s (b s -log(ψ w )) w is a stop word 1 + m n (b n -log(ψ w )) w is a non-stop word (3)\nThe scalar parameters m s , m n , b s and b n are estimated as follows. First, we randomly initialise all parameters uniformly in [0, 1] and use (2) to predict cosine similarity between two contexts in which a target word w occurs in the WiC train instances. We then make a binary similarity judgement (i.e. same or different meaning) for the pair of contexts in an instance depending on whether the predicted cosine similarity is greater than a threshold θ. Next, we compute the overall binary classification accuracy for the similarity predictions made on the entire WiC training dataset, Figure 4: Cosine similarity between two instances of the same word w in two contexts in the WiC train dataset, computed using the original (non-discounted) cosine similarity (shown in blue and green respectively for the same and different meaning pairs) and using the proposed 2 norm discounted (( 2)) (shown in orange and red respectively for the same and different meaning pairs). We see that the gradients of the drops have decreased for both same and different meaning pairs after applying the discounting. and use Bayesian Optimisation to find the optimal values: θ = 0.545, m s = 0.00422, b s = 0.643, m n = 0.00427 and b n = 4.821. Specifically we used the Adaptive Experimentation Platform4 for learning those optimal values. We found this is more efficient than conducting a linear search over the parameter space. We repeat the estimation five times and use the averaged parameter values in the remainder of the experiments. Note that m n > m s above, which indicates that non-stop words must be discounted slightly more heavily than the stop words. This makes sense since the impact of word frequency of non-stop words on their 2 -norm is stronger than that for the stop words as indicated by the slopes of the lines of best fit in Figure 2." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Results", "publication_ref": [ "b25" ], "table_ref": [], "text": "To evaluate the effect of the proposed 2 norm discounting when computing cosine similarity, we repeat the analysis presented in Figure 1 using (2) to predict the similarity between contextualised word embeddings. Comparing the lines of best fit for the original (blue, r = -0.3006) vs. discounted (orange, r = -0.1366) for the same meaning contexts, we see that the gradient of the drop has decreased by 51.65%. Likewise, comparing the lines of best fit for the original (green, r = -0.2125) vs. dis- counted (red, r = -0.0843) for the different meaning contexts, we see the gradient of the drop has decreased by 57.04%. This result clearly shows that the proposed 2 norm discounting method is able to reduce the underestimation of cosine similarities for the highly frequent words.\nGiven that the discounting parameters in (3) are learned from the WiC train data, it remains an open question as to how well the proposed discounting method generalises when predicting similarity between contextualised embeddings of unseen words. To evaluate this generalisability of the proposed method, we use (3) with its learned parameters from WiC train data, to predict the similarity between contextualised word embeddings in WiC dev data.\n5 Specifically, we predict binary (same vs. different meaning) similarity labels according to the similarity threshold θ learnt in § 3 and compare against the human judgements using binary classification accuracy.\nThe maximum accuracy on WiC dev split obtained using the original (non-discounted) cosine similarities is 0.6667, which indicates that the cosine similarity is somewhat predictive of the human binary judgements. The overall F1 is improved by 2.4% (0.68 with original cosine vs. 0.71 with the proposed discounting method) and recall is improved by 12% (0.75 with original cosine vs. 0.84 with the proposed). On the other hand, the drop in precision is 4.7% (from 0.64 to 0.61). Therefore, the proposed method solves the cosine similarity underestimation problem associated with high-frequent words, without significantly affecting the similarity scores for low-frequent ones Figure 5 shows the average proportion of instances predicted to be the same meaning as a function of frequency, grouped into ten bins, each with the same number of examples. From Figure 5, we see that in high frequency bins (i.e. bins 8, 9 and 10), the percentage of predicted instances as having the same meaning is consistently lower than that compared to the human judgements. This shows an underestimation of the true (human judged) similarity between contextualised word embeddings.\nOn the other hand, when we use the proposed 2 norm discounted cosine similarity (defined in (2)), in the highest frequent bin (i.e. 10) we see that the gap between human judgements vs. predicted similarities has reduced. Moreover, in the low frequency bins (i.e. 1-4), we see that the proposed discounting method does not affect the predictions made using cosine similarities. We see an overestimation of the cosine similarities in the low frequency bins as reported by Zhou et al. (2021). As discussed already in § 1, the word embeddings learnt for low frequency words tend to be unreliable due to data sparseness. Therefore, we believe it is important to focus on the problem of learning accurate word embeddings rather than to adjust cosine similarities between low-frequency words in a post-processing step.\nWe see that in bins 5, 6 and 7 the similarity scores are slightly increased by the proposed discounting method, which is a drawback that needs to be addressed in future work. More importantly however, the overall percentage recall across all bins for retrieving same meaning instances improves significantly from 74.7% to 83.7% compared to using respectively the original cosine similarity vs. the discounted cosine similarity. Overall, this result confirms the validity of the proposed discounting method for addressing the underestimation of cosine similarity involving highly frequent words." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed a method to solve the cosine similarity underestimation problem in highly frequent words. Specifically, we observed that the 2 norm of a contextualised word embedding increases with its frequency in the pretrain corpus and proposed a discounting scheme. Experimental results on WiC dataset confirmed the validity of the proposed method." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b24", "b12" ], "table_ref": [], "text": "We proposed a solution to the cosine similarity underestimation problem associated with contextualised word embeddings of highly frequent words. Our evaluations used only a single contextualised embedding model (i.e. BERT) with a single dimensionality (i.e. 768). Therefore, we believe that our proposed method must be evaluated with other (more recent) MLMs to test for its generalisability. Moreover, our evaluations were conducted only on the English language, which is known to be morphologically limited. Although in our preliminary experiments we considered discounting schemes based on the part-of-speech of words (instead of considering stop words vs. non-stop words), we did not find any significant improvements despite the extra complexity. However, these outcomes might be different for more morphologically richer languages. In order to evaluate similarity predictions in other languages, we must also have datasets similar to WiC annotated in those languages, which are difficult to construct. Although having stated that using a single MLM and single language as limitations of this work, we would like to point out that these are the same conditions under which Zhou et al. (2022) studied the cosine similarity underestimation problem.\nWe used only a single dataset (i.e. WiC) in our experiments in this short paper due to space constraints. Other contextual similarity datasets (e.g. Stanford Contextualised Word Similarity (SCWS) (Huang et al., 2012)) could be easily used to further validate the proposed discounting method in an extended version." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [ "b3", "b23", "b5", "b17" ], "table_ref": [], "text": "In this paper, we do not annotate novel datasets nor release any fine-tuned MLMs. Therefore, we do not see any direct ethical issues arising from our work. However, we are proposing a method to address the underestimation of cosine similarity scores computed using contextualised word embeddings obtained from (possibly socially biased) pretrained MLMs. We would therefore discuss the ethical implication of this aspect of our work in this section.\nCosine similarity has been used in various social bias evaluation measures such as the WEAT (Caliskan et al., 2017), SemBias (Zhao et al., 2018), WAT (Du et al., 2019), etc. These methods measure the cosine similarity between a gender and a set of pleasant or unpleasant set of attributes to compute a social bias evaluation score. Although originally these methods were developed for evaluating the social biases in static word embeddings, they have been later extended to contextualised word embeddings (Kaneko and Bollegala, 2022;Kaneko et al., 2022) and sentence embeddings (May et al., 2019), where cosine similarity still remains the main underlying metric. However, Ethayarajh et al. (2019c) showed that innerproducts to be superior over cosine similarity for social bias evaluation purposes. It remains unclear as to how the underestimation in cosine similarities discussed in our work would influence the social bias evaluations. In particular, the effect of the proposed 2 norm discounting scheme on social bias evaluation must be carefully studied in the future work." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Danushka Bollegala holds concurrent appointments as a Professor at University of Liverpool and as an Amazon Scholar. This paper describes work performed at the University of Liverpool and is not associated with Amazon." } ]
Cosine similarity between two words, computed using their contextualised token embeddings obtained from masked language models (MLMs) such as BERT has shown to underestimate the actual similarity between those words (Zhou et al., 2022). This similarity underestimation problem is particularly severe for highly frequent words. Although this problem has been noted in prior work, no solution has been proposed thus far. We observe that the 2 norm of contextualised embeddings of a word correlates with its log-frequency in the pretraining corpus. Consequently, the larger 2 norms associated with the highly frequent words reduce the cosine similarity values measured between them, thus underestimating the similarity scores. To solve this issue, we propose a method to discount the 2 norm of a contextualised word embedding by the frequency of that word in a corpus when measuring the cosine similarities between words. We show that the so called stop words behave differently from the rest of the words, which require special consideration during their discounting process. Experimental results on a contextualised word similarity dataset show that our proposed discounting method accurately solves the similarity underestimation problem.
Solving Cosine Similarity Underestimation between High Frequency Words by 2 Norm Discounting
[ { "figure_caption": "12Underestimation of Cosine Similarity Let us denote the d-dimensional contextualised word embedding produced by an MLM f for a target word w appearing in a context c by f (w, c)(∈ R d ). Moreover, let the set of contexts where w occurs in a given corpus be S(w). We refer to {f (w, c)|w ∈ S(w)} as the set of sibling embeddings of w. To study the relationship between the cosine similarity scores and the frequency of words, we use the 768-dimensional bert-base-uncased 2 as the contextualised embedding model. We use the token embedding of w from the final hidden layer of BERT as f (w, c).", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: 2 norm of the averaged contextualised word embedding of a word against its log-frequency in the pretrain corpus. Stop words and non-stop words are shown respectively in orange and blue dots. Lines of best fits for each category are superimposed.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Histogram of word frequencies in the BERT pretrain corpus. We see a Zipfian (power-law) distribution, which turns out to be approximately liner in the log-frequency space.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Percentage of examples labelled as having the \"same meaning\". In high frequency words, we see that the cosine similarity-based predictions (orange/middle) are systematically underestimate the human similarity judgements (blue/left). However, after the proposed discounting method has been applied (green/right) the underestimation has reduced.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" } ]
Saeth Wannasuphoprasit; Yi Zhou; Danushka Bollegala
[ { "authors": "Sanjeev Arora; Yuanzhi Li; Yingyu Liang; Tengyu Ma; Andrej Risteski", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b0", "title": "A latent variable model approach to PMI-based word embeddings", "year": "2016" }, { "authors": "Maximilian Balandat; Brian Karrer; Daniel R Jiang; Samuel Daulton; Benjamin Letham; Andrew Gordon Wilson; Eytan Bakshy", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "BoTorch: A Framework for Efficient Monte-Carlo Bayesian Optimization", "year": "2019" }, { "authors": "Danushka Bollegala; Yuichi Yoshida; Ken-Ichi Kawarabayashi", "journal": "", "ref_id": "b2", "title": "Using k-way Co-occurrences for Learning Word Embeddings", "year": "2018" }, { "authors": "Aylin Caliskan; Joanna J Bryson; Arvind Narayanan", "journal": "Science", "ref_id": "b3", "title": "Semantics derived automatically from language corpora contain human-like biases", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b4", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Yupei Du; Yuanbin Wu; Man Lan", "journal": "", "ref_id": "b5", "title": "Exploring human gender stereotypes with word association test", "year": "2019" }, { "authors": "Kenji Hiroshi Echizen-Ya; Eduard Araki; Hovy", "journal": "", "ref_id": "b6", "title": "Word embedding-based automatic mt evaluation metric using word position information", "year": "2019" }, { "authors": "Kawin Ethayarajh; David Duvenaud; Graeme Hirst", "journal": "", "ref_id": "b7", "title": "Towards understanding linear word analogies", "year": "2019" }, { "authors": "Kawin Ethayarajh; David Duvenaud; Graeme Hirst", "journal": "", "ref_id": "b8", "title": "Understanding undesirable word embedding associations", "year": "2019" }, { "authors": "Kawin Ethayarajh; David Duvenaud; Graeme Hirst", "journal": "", "ref_id": "b9", "title": "Understanding undesirable word embedding associations", "year": "2019" }, { "authors": "Masih Hanifi; Hicham Chibane; Remy Houssin; Denis Cavallucci", "journal": "Engineering Applications of Artificial Intelligence", "ref_id": "b10", "title": "Problem formulation in inventive design using doc2vec and cosine similarity as artificial intelligence methods and scientific papers", "year": "2022" }, { "authors": "Johannes Hellrich; Udo Hahn", "journal": "", "ref_id": "b11", "title": "Bad Company-Neighborhoods in neural embedding spaces considered harmful", "year": "2016" }, { "authors": "Eric H Huang; Richard Socher; Christopher D Manning; Andrew Y Ng", "journal": "", "ref_id": "b12", "title": "Improving word representations via global context and multiple word prototypes", "year": "2012" }, { "authors": "Masahiro Kaneko; Danushka Bollegala", "journal": "", "ref_id": "b13", "title": "Unmasking the mask -evaluating social biases in masked language models", "year": "2022" }, { "authors": "Masahiro Kaneko; Aizhan Imankulova; Danushka Bollegala; Naoaki Okazaki", "journal": "", "ref_id": "b14", "title": "Gender bias in masked language models for multiple languages", "year": "2022" }, { "authors": "Suyoun Kim; Duc Le; Weiyi Zheng; Tarun Singh; Abhinav Arora; Xiaoyu Zhai; Christian Fuegen; Ozlem Kalinli; Michael L Seltzer", "journal": "", "ref_id": "b15", "title": "Evaluating User Perception of Speech Recognition System Quality with Semantic Distance Metric", "year": "2022" }, { "authors": "Omer Levy; Yoav Goldberg", "journal": "Advances in neural information processing systems", "ref_id": "b16", "title": "Neural word embedding as implicit matrix factorization", "year": "2014" }, { "authors": "Chandler May; Alex Wang; Shikha Bordia; R Samuel; Rachel Bowman; Rudinger", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "On measuring social biases in sentence encoders", "year": "2019" }, { "authors": "David Mimno; Laure Thompson", "journal": "", "ref_id": "b18", "title": "The strange geometry of skip-gram with negative sampling", "year": "2017" }, { "authors": "David Oniani; Yanshan Wang", "journal": "", "ref_id": "b19", "title": "A qualitative evaluation of language models on automatic question-answering for covid-19", "year": "2020" }, { "authors": "Mohammad Taher; Pilehvar ; Jose Camacho-Collados", "journal": "", "ref_id": "b20", "title": "WiC: the word-in-context dataset for evaluating context-sensitive meaning representations", "year": "2019" }, { "authors": "Faisal Rahutomo; Teruaki Kitasuka; Masayoshi Aritsugi", "journal": "", "ref_id": "b21", "title": "Semantic cosine similarity", "year": "2012" }, { "authors": "Laura Wendlandt; Jonathan K Kummerfeld; Rada Mihalcea", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Factors influencing the surprising instability of word embeddings", "year": "2018" }, { "authors": "Jieyu Zhao; Tianlu Wang; Mark Yatskar; Vicente Ordonez; Kai-Wei Chang", "journal": "", "ref_id": "b23", "title": "Gender bias in coreference resolution: Evaluation and debiasing methods", "year": "2018" }, { "authors": "Kaitlyn Zhou; Kawin Ethayarajh; Dallas Card; Dan Jurafsky", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Problems with cosine as a measure of embedding similarity for high frequency words", "year": "2022" }, { "authors": "Kaitlyn Zhou; Kawin Ethayarajh; Dan Jurafsky", "journal": "", "ref_id": "b25", "title": "Frequency-based distortions in contextualized word embeddings", "year": "2021" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b26", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" } ]
[ { "formula_coordinates": [ 3, 119.81, 706.38, 169.32, 29.55 ], "formula_id": "formula_0", "formula_text": "ŵ = 1 |S(w)| c∈S(w) f (w, c)(1)" }, { "formula_coordinates": [ 3, 322.14, 475.35, 202.27, 27.88 ], "formula_id": "formula_1", "formula_text": "cos α (x, y) = x ⊤ y ||x|| α(ψ x ) ||y|| α(ψ y ) (2)" }, { "formula_coordinates": [ 3, 307.94, 570.04, 216.47, 33.49 ], "formula_id": "formula_2", "formula_text": "α(ψ w ) = 1 + m s (b s -log(ψ w )) w is a stop word 1 + m n (b n -log(ψ w )) w is a non-stop word (3)" } ]
10.1145/2528521.1508273
2023-05-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b57", "b66", "b40", "b7", "b37", "b64", "b67", "b20", "b42", "b69", "b22", "b12", "b69" ], "table_ref": [], "text": "Deep Learning (DL) has come to play an increasing role in a wide range of applications in the recent years. As their applications have become more and more complex, DL models themselves have increased in size and complexity. For inference serving as well as for training, these models place extreme demands on DL systems and hardware today.\nAn important source of complexity in DL models is the use of dynamic control flow as part of model execution. Unlike a static feed-forward model, the execution of a model with dynamic control flow, or a dynamic model can differ across different inputs to the model. This property has been used effectively to (1) model structured data such as parse trees (Socher et al., 2013a;2012) and images (Shuai et al., 2015), (2) perform better quality machine translations and text parsing by employing beam search (Wiseman & Rush, 2016;Koehn, 2004;Buckman et al., 2016), and (3) exit early out of convolutional (Kaya & Dumitras, 2018;Teerapittayanon et al., 2017) and transformer (Xin et al., 2020;Elbayad et al., 2019) models for reduced inference latency. The adaptability afforded by dynamic control flow is thus useful in a variety of situations.\nBatching is an important optimization that improves the throughput and hardware utilization during training and inference of a DL model. While straightforward for static DL computations, the presence of control flow divergence in dynamic computations makes manual batching difficult and error-prone. Thus, there has been significant past effort on performing automatic batching, or auto-batching, for dynamic DL computations. In order to handle the lack of execution knowledge of a dynamic computation during compilation, past works usually either (1) heavily rely on dynamic analyses, enabling them to handle general dynamic control flow (Neubig et al., 2017b;Looks et al., 2017), or (2) are specialized for specific control flow patterns or models, thus relying more on static analyses (Xu et al., 2018;Fegade et al., 2021). The former frameworks often incur high execution overheads caused by dynamic analysis, while the latter ones lack the generality to support the wide range of existing and future control flow patterns in DL computations.\nFurther, past work often heavily relies on vendor libraries such as cuDNN (Chetlur et al., 2014) and oneAPI (Intel, 2022). However, as implementing vendor libraries is an intensive process, they usually only implement commonly used, standard tensor operators. Further, as these kernels are optimized in isolation, without any contextual about the larger application they are used in, important optimizations such as kernel fusion can no longer be performed.\nIn order to overcome these limitations of past work, we propose ACROBAT 1 , an auto-batching framework for dynamic DL computations which relies on novel hybrid static+dynamic optimizations and end-to-end tensor kernel compilation. Our main insight in designing ACRO-BAT is that despite the lack of perfect execution knowledge during compilation for dynamic models, the compiler can often perform static analysis and optimizations to aid the dynamic analysis. This reduces execution overheads while effectively exploiting parallelism in the input computation. ACROBAT relies on traditional compiler techniques such Table 1. Comparison between ACROBAT and other solutions for auto-batching dynamic DL computations. Purely static or dynamic approaches can be overly conservative, or have high overheads respectively, unlike ACROBAT's hybrid analysis. as well as on minimal user annotations to enable such static analysis. Further, ACROBAT's end-to-end tensor kernel generation enables it to automatically generate kernels optimized and specialized to the larger computation again using static analysis to identify and exploit data reuse opportunities (as we see in §5). ACROBAT's generality allows one to express a wide variety of control flow patterns, ranging from simple conditional statements to complex recursive computations using a simple high-level language. Table 1 provides a qualitative comparison of ACROBAT with related work.\nIn short, this paper makes the following contributions: 1. We survey and characterize the dynamic control flow found in different DL computations.\n2. Our design employs novel hybrid static+dynamic optimizations and automated end-to-end kernel code generation to reduce execution overheads and to generate efficient tensor kernels that effectively exploit data reuse opportunities. In developing these optimizations, we heavily rely on traditional compilation techniques.\n3. We prototype ACROBAT, evaluate it against state-of-theart deep learning frameworks (Xu et al., 2018;Neubig et al., 2017a;Paszke et al., 2019) and report significant performance gains on Nvidia GPUs." }, { "figure_ref": [], "heading": "Background", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Dynamic Control Flow in DL computations", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "ACROBAT's primary objective is to exploit parallelism across tensor operators in the batched execution of dynamic DL computation. Such a computation may exhibit (1) Batch Parallelism that exists across different input instances in the mini-batch, and/or (2) Instance Parallelism which refers to the control flow parallelism as defined in Table 2. Beyond parallelism, table 2 summarizes other properties of control flow dynamism found in common DL models (more details can be found in §A of the appendix)." }, { "figure_ref": [], "heading": "Dynamic Batching", "publication_ref": [ "b42", "b53", "b30", "b13", "b70", "b18", "b10", "b57", "b60", "b19", "b66", "b55", "b43", "b21", "b37", "b64", "b20", "b31", "b28", "b27" ], "table_ref": [], "text": "ACROBAT builds upon dynamic batching (Looks et al., 2017;Neubig et al., 2017b), a past technique to perform auto-batching in the presence of dynamic control flow. (Rumelhart et al., 1986)/LSTM (Hochreiter & Schmidhuber, 1997)/GRU (Cho et al., 2014), GraphRNN (You et al., 2018) DIORA (Drozdov et al., 2019), Chinese Segmentation (Chen et al., 2015) DAG-RNN (Shuai et al., 2015), TreeL-STM (Socher et al., 2013a), MV-RNN (Socher et al., 2012) StackLSTM (Dyer et al., 2015) Beam search (Wiseman & Rush, 2016) with LSTM Mixture-of-experts (Shazeer et al., 2017;Ma et al., 2018;Fedus et al., 2021) Early exit models (Kaya & Dumitras, 2018;Teerapittayanon et al., 2017;Elbayad et al., 2019) No U-Turn Sampler (Hoffman & Gelman, 2011) Tree-to-tree NN (Chen et al., 2018b), Doubly Recurrent NN (Alvarez-Melis & Jaakkola, 2017) R-CNN (Girshick et al., 2013), Fast R-CNN (Girshick, 2015) Given a mini-batch of input instances, dynamic batching involves lazily executing the model computation for each input instance while building dataflow graphs (DFGs) of tensor operators for each instance in the background. The execution of these DFGs is triggered when the value of a particular tensor is requested (when the model contains tensor-dependent control flow, for example). During this execution, the runtime can identify batching opportunities within the DFGs and launch batched kernels appropriately." }, { "figure_ref": [], "heading": "ACROBAT: Overview and API", "publication_ref": [], "table_ref": [], "text": "Control flow dynamism necessitates reliance on potentially expensive runtime analysis for auto-batching. In ACROBAT, we observe that aggressive static analysis often provides sufficient information to reduce the overheads of such analyses. Such analyses further allow us to generate specialized and more efficient tensor kernels in an end-to-end manner.\nWe will now look at ACROBAT's compilation and execution workflows (illustrated in Fig. 1) that make use of the above insights. ACROBAT has been designed to take an unbatched DL computation expressed in a simple Turingcomplete functional language as an input. This enables ACROBAT users to easily express models with dynamic control flow, such as the ones discussed in §2.1. For example, Listing 1 illustrates a simple RNN model which ACROBAT can take as an input. " }, { "figure_ref": [], "heading": "Given an input computation", "publication_ref": [], "table_ref": [], "text": "= blockIdx.x // [0,BS] i = threadIdx.x // [0,256] O_ptr[b][i] = bias[i] + input_ptr[b][i] + state_ptr[b][i]\n10\nControl flow decisions depend on tensor values for the case of tensor dependent control flow." }, { "figure_ref": [], "heading": "Scheduling", "publication_ref": [ "b52" ], "table_ref": [], "text": "Inline Depth Computation ( §4.1) Figure 1. Overview of ACROBAT's workflow. Fig. 6 in the appendix shows a corresponding overview of DyNet, a past fully dynamic approach. Note how ACROBAT performs significant novel analysis and code generation at compile-time to reduce runtime overheads. Listing 1. A simple RNN model expressed in a functional language (here, Relay (Roesch et al., 2019) is used for illustration) as an input to ACROBAT. reuse opportunities and accordingly generates batched kernels 3 implementing the tensor operators used in the input program. Further, gather operator fusion ( §5.2) enables us to generate specialized kernels that minimize data movement. These unoptimized kernels are then optimized by an auto-scheduler 4 . Once optimized, target code 10 such as CUDA C++ can be generated for the batched kernels. Concurrently, the input program is further optimized and compiled 5 in an ahead-of-time (AOT) fashion to generate C++ code 7 . As part of this compilation, ACROBAT generates code to (1) enable low overhead scheduling via our inline depth computation approach, and (2) automatically enable concurrent execution in the presence of tensor dependent control flow ( §4.2).\nAt runtime, ACROBAT lazily executes the AOT compiled input program 7 on a mini-batch of inputs 6 , and constructs DFGs 8 . The ACROBAT runtime library will then schedule these DFGs (using inline depth computation as mentioned above) 9 , while looking for batching opportunities. Then, it will invoke the optimized batched kernels 10 for each identified batch of DFG nodes. If the input program exhibits tensor dependent control flow, the execution cycles back to the AOT compiled program which will execute further and create more DFGs.\nWe will now take a look at ACROBAT's hybrid optimizations in §4 and its tensor kernel generation in §5." }, { "figure_ref": [], "heading": "Hybrid Static+Dynamic Optimizations", "publication_ref": [], "table_ref": [], "text": "Dynamic control flow often precludes static program transformations. Therefore, ACROBAT takes a hybrid approach whereby it exploits static program knowledge by either (1) providing hints to the dynamic analysis ( §4.1), or (2) generating code that affords the dynamic analysis greater freedom in exploiting parallelism ( §4.2). Further, static analysis also allows us to perform optimizations such as kernel fusion, which is important for high performance ( §7.3). Below, we provide more details regarding our hybrid analysis." }, { "figure_ref": [], "heading": "Inline Depth Computation", "publication_ref": [ "b22", "b42", "b32", "b3", "b56" ], "table_ref": [ "tab_8" ], "text": "As past work (Fegade et al., 2021) has noted, prior fully dynamic approaches incur significant scheduling overheads. For instance, as we will show in Table 5, DyNet's scheduling overheads dominate the time spent in tensor computations for the TreeLSTM model. Instead, as described below, AC-ROBAT devises a scheme to perform scheduling as it constructs the DFGs, thereby lowering scheduling overheads greatly ( §7).\nA DFG scheduling algorithm has two goals: G.1 Correctness: Scheduling tasks such that dependences between the tasks are respected.\nG.2 Performance: Identifying and exploiting parallelism. Given a DFG(s), we can satisfy both these goals by executing DFG nodes (each of which represents one tensor operator) in the increasing order of their topological depth2 , such that nodes at the same depth are executed concurrently (Neubig et al., 2017a;Looks et al., 2017). We make the following two observations in order to compute these depths during DFG construction: Based on these observations, we set the depth of an operator to be equal to its position in the dependency ordering induced by the execution of the unbatched program, thus meeting goal G.1. Then, we rely on observation O.2 above in order to discover and exploit opportunities for parallelism by using the following techniques:\nfunA() { concurrent { funA(); funA(); } funC(); } Figure 2. Concurrent call annotation.\nInstance Parallelism: We note that instance parallelism often stems from recursion or the use of the functional @map function on a list of independent items (observation O.2). We ensure that such concurrent operators are assigned the same depth during the execution of the unbatched program. We rely on simple user annotations to obtain information about recursive parallelism3 . Fig. 2 shows an example where the two recursive calls to funA are annotated as concurrent. Note also that past work auto-parallelization (Hogen et al., 1992;Aleen & Clark, 2009) could potentially be used in lieu of such annotations. Listing 2 shows the AOT compiled code generated for the RNN model in Listing 1. We see, on line 23, how all invocations of the relu bias dense kernel inside the @map function are assigned the same depth.\nCombating Eagerness of Depth Scheduling: As noted in past work (Neubig et al., 2017b), a depth-based scheduling scheme, like the one ACROBAT uses, can often be too eager in executing tensor operators, leading to a sub-optimal amount of exploited parallelism. Past work has relied on agenda-based scheduling (Neubig et al., 2017b), a more expensive scheduling scheme, as an alternative to the depthbased scheme to alleviate this problem. ACROBAT instead relies on compile-time analysis. In the presence of conditional if statements, eager batching leads to sub-optimal batching as illustrated in the upper panes of Fig. 4. In such situations, ACROBAT can statically insert ghost operations to essentially delay the scheduling and execution of certain operators, as shown in the lower panes of the figure. On the other hand, when repetitive (recursive or iterative) control flow is present, we rely on program phases (Sherwood et al., 2003) to combat the aforementioned sub-optimality of the scheduling. Given knowledge of such program phases, AC-ROBAT waits to schedule and execute operators in a phase until operators in all previous phases have been scheduled and executed. We find that considering individual semantic stages of the input DL computation as individual phases is a good heuristic for dividing the computation into phases. ACROBAT also provides a way for users to override this heuristic by manually annotate program phases, though in our evaluation, we did not need such annotations. We provide more details and explanations about program phases and ghost operations in §B.3 of the appendix.\nFurther, ACROBAT is also able to statically hoist operators, which we describe in more detail in §B.1 of the appendix. As an example, in Listing 2, the invocation of the kernel bias dense on line 5 is assigned a statically computed depth of 0, which during runtime, effectively hoists the kernel invocation out of the recursion." }, { "figure_ref": [], "heading": "Tensor Dependent Control Flow", "publication_ref": [ "b44" ], "table_ref": [], "text": "ACROBAT executes the unbatched program lazily to create DFGs for each input instance in the batch. In the absence of tensor dependent control flow, we can first execute the unbatched program for each instance sequentially and trigger the batching and execution of all the DFGs at once. In the presence of tensor dependent control flow, however, such sequential execution would not allow us to exploit any batch parallelism as we would be required to trigger the execution at control flow decisions that depend on the value of intermediate tensors. While prior work places the burden of restructuring input computations to alleviate this issue on the user, ACROBAT automatically generates code to execute the unbatched program for each input instance concurrently by using fibers4 . This way, the unbatched programs can be executed for each instance to a point where none can progress without triggering the evaluation of the DFG. At this point, the evaluation can be performed, and the concurrent executions resumed after as illustrated in Fig. 3. Correspondingly, in order to exploit instance parallelism in the presence of tensor dependent control flow, ACROBAT launches concurrent fibers, similar to the fork-join model of parallelism (McCool et al., 2012). ACROBAT thus combines the static knowledge of parallelism with dynamic concurrent execution as part of its hybrid analysis to effectively exploit parallelism in the presence of tensor dependent control flow." }, { "figure_ref": [], "heading": "End-to-end Tensor Kernel Generation", "publication_ref": [], "table_ref": [], "text": "As we alluded to above, ACROBAT enables end-to-end, uniform and automatic tensor kernel code generation by avoiding the use of vendor libraries. This allows ACRO-BAT to support a larger set of operators without additional compiler development effort. More details about ACRO-BAT's tensor kernel generation are provided below." }, { "figure_ref": [], "heading": "Exploiting Parameter Reuse", "publication_ref": [], "table_ref": [], "text": "Given the input unbatched computation, ACROBAT needs to generate batched kernels implementing the tensor operators used in the computation. Generating these kernels is not straightforward because some input tensors (often model parameters) might be shared across calls to the operator. For example, across multiple calls to the element-wise addition operator add3 used in the input computation 1 in Fig. 1, the bias argument will be shared (as it is a model parameter) and hence should be reused across all values of the arguments input and state. This can be seen in the corresponding batched kernel ( 3 and 10 ) in Fig. 1.\nA completely dynamic approach to auto-batching, such as the one used in DyNet, is unable to accurately identify such parameter reuse, and instead relies on heuristics, which can be brittle, leading to sub-optimal performance ( §7.2). On the other hand, ACROBAT uses a 1-context sensitive 5 taint anal-5 Context sensitivity is a static analysis technique that allows the compiler to reason about a function in the different contexts it ysis to identify such shared arguments to tensor operators. The use of static analysis here allows ACROBAT to obtain accurate knowledge about the parameter reuse patterns.\nBeyond the analysis described above, ACROBAT further explores opportunities for data reuse by employing code duplication and horizontal fusion as described in §C.1." }, { "figure_ref": [], "heading": "Fusing Memory Gather Operations", "publication_ref": [], "table_ref": [], "text": "As ACROBAT identifies batching opportunities across the DFGs dynamically, the input tensors to all DFG nodes in a batch may not be laid out contiguously in the accelerator's memory. In this scenario, prior work performs a memory gather before operating on the tensors (by invoking vendor library kernels), leading to significant data movement ( §7.3). Instead, ACROBAT generates specialized batched kernels to directly operate on tensors scattered in memory, in effect fusing the expensive gather operation with the batched kernel. The generated batched kernel 10 in Fig. 1 illustrates this. This fusion can lead to a significant performance improvement as seen in §7." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b71", "b69", "b22", "b25", "b58" ], "table_ref": [], "text": "Out prototype of ACROBAT is built upon TVM (Chen et al., 2018a) v0.9.dev0, a DL framework and a tensor compiler. It thus accepts as input computations expressed in Relay. Our prototype, ACROBAT also performs the grain size coarsening optimization (Zha et al., 2019;Xu et al., 2018;Fegade et al., 2021;Gao et al., 2018;Silfa et al., 2020), which is discussed more in §B.2 of the appendix.\nAs demonstrated in §E.2 of the appendix, we find that using an interpreted virtual machine (VM) for executing the unbatched programs can incur significant VM overheads in the presence of control flow dynamism. Therefore, AC-ROBAT compiles the input computation to C++ in an AOT fashion (as discussed in the appendix in §D). Further, as TVM does not support training, we evaluate ACROBAT for may be called under leading to increased analysis precision. For the DL computations we worked with, we found that a 1-context sensitive analysis was sufficient. Deeper contexts might be useful, however, for more complex computations.\n(batched) inference of DL computations. Other implementation details, including those on ACROBAT's use of TVM's auto-scheduler, can be found in the appendix in §D." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b22" ], "table_ref": [], "text": "We now evaluate ACROBAT against Cortex and DyNet on an Nvidia GPU. Cortex and DyNet are both state-of-the-art auto-batching frameworks for DL computations exhibiting recursive and general unrestricted control flow respectively. They have been shown to be faster than generic frameworks like PyTorch and TensorFlow (Neubig et al., 2017a;b;Fegade et al., 2021). We also compare ACROBAT's performance with that of PyTorch, though due to space limitations, we include those results in §E.3 in the appendix6 ." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b17", "b17" ], "table_ref": [ "tab_5" ], "text": "Models: We use the models listed in Table 3 for the evaluation. For each model, we look at two model sizes-small and large. For the MV-RNN model, we use hidden sizes 64 and 128 for the small and large model sizes, while for the Berxit model, the small model uses the same hyperparameters as the BERT BASE model (Devlin et al., 2018), while the large model uses the same hyper-parameters as the BERT LARGE model (Devlin et al., 2018), except that we use 18 layers instead of 24 in this case. For the remaining models, the small and the large model sizes use hidden sizes of 256 and 512 respectively.\nExperimental Environment: We run our experiments on a Linux workstation with an AMD Ryzen Threadripper 3970X CPU (64 logical cores with 2-way hyperthreading) and an Nvidia RTX 3070 GPU. The machine runs Ubuntu 20.04, CUDA 11.1 and cuDNN 8.0.5. We compare against DyNet's commit 3e1b48c7 (March 2022) which uses the Eigen library (v3.3.90)." }, { "figure_ref": [], "heading": "Overall Performance", "publication_ref": [], "table_ref": [], "text": "In this section, we compare ACROBAT's performance with that of DyNet and Cortex. Note that, as detailed further in §E.2 of the appendix, we find that AOT compilation significantly reduces execution overheads, leading to up to 13.45× faster execution as compared to the default Relay VM. Therefore, for the rest of this section, we evaluate ACROBAT's performance with AOT compilation turned on." }, { "figure_ref": [], "heading": "PERFORMANCE COMPARISON WITH DYNET", "publication_ref": [], "table_ref": [ "tab_7", "tab_8" ], "text": "We now compare ACROBAT's performance with that of DyNet. As mentioned in §6, TVM does not support the training of DL models. Therefore, due to lack of access to\nThe execution latencies for DyNet and ACROBAT are shown in Table 4 7 . ACROBAT performs better than DyNet in most cases due to a number of reasons. While, overall, ACROBAT performs 2.3× better than DyNet across all model configurations, DyNet performs slightly better than ACROBAT for some configurations of the BiRNN and NestedRNN models. For the former, Table 5 shows that while ACROBAT incurs lower runtime overheads for DFG construction, scheduling and memory transfer, it spends a higher amount of time in kernel execution compared to DyNet. We believe that better tensor kernel optimizations can help reduce this performance gap.\nBeyond the above reasons, ACROBAT performs better on specific benchmarks for the reasons discussed below:\nAccurate parameter reuse inference and automated batched kernel generation: As mentioned in §5.1, AC-ROBAT's use of static analysis for inferring parameter reuse allows it to have accurate knowledge to statically generate the appropriate batched kernels. On the other hand, DyNet's heuristic-based approach is unable to batch instances of certain operators, forcing sequential unbatched execution which leads to low performance. Further, as described in §5, ACROBAT's end-to-end kernel generation leads to a broader coverage over tensor operators for which batching is supported as compared to approaches such as DyNet which rely on vendor libraries. As a result, DyNet does not support batching for certain operators, again leading to sequential execution and low performance. Both these cases are discussed further in §E.4 of the appendix.\nAutomated code generation for handling tensor dependent control flow: The DRNN model constructs a tree from an input vector representation in a top-down recursive manner. It exhibits both tensor-dependent control flow as well as instance parallelism (multiple sub-trees can be generated concurrently). We saw how ACROBAT can automatically exploit instance parallelism in the presence of tensor-dependent control flow with the use of fibers in §4.2. On the other hand, DyNet is unable to exploit this parallelism and therefore ACROBAT's performance on this model is significantly better than that of DyNet." }, { "figure_ref": [], "heading": "PERFORMANCE COMPARISON WITH CORTEX", "publication_ref": [], "table_ref": [ "tab_9", "tab_5" ], "text": "Table 6 compares the performance of ACROBAT with that of Cortex for the TreeLSTM, MV-RNN and the BiRNN models. Note that this is not an apples-to-apples comparison because, Cortex, being specialized for recursive computations, does not support general control flow (as is present in the other models in Table 3) unlike ACROBAT as mentioned in Table 1. Further, Cortex places a high development burden on users who are required to manually optimize and tune their models for specific hardware, unlike ACROBAT's automatic kernel generation8 . Similarly, while ACROBAT can automatically hoist the input linear transformations out of the recursive computation in the TreeLSTM and BiRNN models (as described in §B.1), they need to be manually hoisted and offloaded to cuBLAS in the case of Cortex.\nBeing highly specialized for recursive computations, Cortex is able to exploit aggressive kernel fusion, model persistence and incur low kernel call overheads, thus performing up to 1.87× better than ACROBAT for the TreeLSTM and BiRNN models. However, note that Cortex performs much worse than ACROBAT on the MV-RNN model. This is because Cortex's restrictive API necessitates additional copies of the embedding vectors for the leaves of the input parse trees, which ACROBAT can avoid due to its more flexible interface. Overall, ACROBAT delivers performance comparable to that of Cortex, while supporting a much wider range of DL computations with much lesser developer effort." }, { "figure_ref": [ "fig_1" ], "heading": "Benefits of Optimizations", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We now evaluate the relative benefits of the different optimizations ACROBAT performs. Fig. 5 shows the execution times for the models in Table 3 (for the large model size at a batch size of 64) as we progressively perform optimizations. Standard kernel fusion (i.e. kernel fusion not including gather operator fusion as discussed in §5.2) provides significant benefits for all models9 . Grain size coarsening and inline depth computation, both of which reduce scheduling overheads, are most beneficial for models with a relatively high amount of control flow such as TreeLSTM and MV-RNN. Further, in the case of the DRNN model, inline depth computation also enables ACROBAT to exploit the instance parallelism inherent in the computation ( §4.2) leading to lower execution time. The BiRNN model involves per-token output linear operators as in token classification. Here, program phases allow ACROBAT to batch all these operators together as described in §4.1. The StackRNN model executes different tensor operators depending on the current parser action, which involves a conditional statement. Ghost operators therefore enable more optimal exploitation of parallelism leading to better performance.\nGather operator fusion is advantageous for some benchmarks and but not others. Such fusion leads to indirect memory accesses which can cause a slowdown in the kernel execution. While ACROBAT does hoist such loads out of loops when appropriate, this is not always possible de- pending on the schedule generated by the auto-scheduler. Further, gather operator fusion leads to a slowdown mostly in models with iterative execution and little instance parallelism. As in DyNet, when gather operator fusion is turned off, ACROBAT perform the explicit memory gather only when the input tensors are not already contiguous in memory. This is more likely to the case in such iterative models, thus blunting the advantages of gather operator fusion. Also, in models such as Berxit, the relatively high tensor computation cost of a coarsened static block further reduces any benefits gather operator fusion might provide.\nOverall, models with a relatively lower amount of control flow or a higher amount of tensor computations such as Berxit or NestedRNN or models with the large size benefit less from optimizations that reduce scheduling overheads." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b6", "b0", "b1", "b24", "b51", "b51", "b25", "b58", "b50", "b36", "b63", "b49", "b41", "b2", "b65", "b34", "b9", "b29", "b33", "b5", "b59", "b48", "b23", "b26" ], "table_ref": [], "text": "Auto-Batching for Dynamic Control Flow: There has been significant work on auto-batching techniques for dynamic computations. Beyond dynamic batching (which is used in various forms in DyNet, TensorFlow Fold, Cavs and Cortex), static program transformations (Bradbury & Fu, 2018;Agarwal, 2019;Agarwal & Ganichev, 2019;Frostig et al., 2018;Radul et al., 2020) have also been explored for auto-batching. Such techniques are often unable to fully exploit all the available parallelism in the program as noted in (Radul et al., 2020). ACROBAT builds on these past techniques and effectively uses both static as well as dynamic analysis thus achieving lower runtime overheads while exploiting all the available parallelism. Online batching approaches for low latency RNN inferene such as Batch-Maker (Gao et al., 2018) and E-BATCH (Silfa et al., 2020) are complementary to ACROBAT. (Qiao & Taura, 2019) proposes improvements to the dynamic batching technique for back propagation. Further, while grain size coarsening has been explored in past work, we use it statically in the context of general purpose auto-batching framework.\nOptimizing Dynamic DL Computations: Beyond autobatching, there is a large body of work on optimizing the execution of dynamic DL computations. Past work (Jeong et al., 2019;Kim et al., 2021;Suhan et al., 2021) has explored the lazy creation of DFGs that can be optimized to accelerate dynamic models. These techniques, which do not perform batching, are complementary to ACROBAT's techniques. While ACROBAT builds upon TVM, our techniques can be implemented in other commonly used compiler frameworks with expressive representations (PyTorch, 2020;Lattner et al., 2020) in a straightforward manner.\nThe gather operator fusion optimization is similar to the gather and scatter fusion (CUTLASS, 2022) performed for sparse GEMM in the CUTLASS library though we perform this optimization automatically as part of compilation. As mentioned in §D.1, ACROBAT borrows some techniques from DietCode for efficient code generation. DietCode's techniques are complementary to ours and it can be fully integrated into ACROBAT for better kernel performance.\nTraditional Compiler Techniques: ACROBAT uses compilation techniques for programs written in general-purpose languages. These include context-sensitivity (Aho et al., 2007), taint analysis which is extensively used for security purposes (Tripp et al., 2009;Huang et al., 2015), profileguided optimization (Chen et al., 2006;Gupta et al., 2002) (as discussed in §D.1 of the appendix) and program phases, which have been used to adaptively optimize systems for different parts of a program for optimal performance (Huang et al., 2001;Barnes et al., 2002). ACROBAT's inline depth computation and DFG scheduling more generally are similar to work on static and dynamic instruction scheduling for pipelined and superscalar processors (Smith, 1989;Ponomarev et al., 2001;Fisher, 1981;Gibbons & Muchnick, 1986). However, ACROBAT applies these techniques in the context of a DL framework." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper presents ACROBAT, a compiler and runtime framework that performs auto-batching of dynamic DL computations. ACROBAT employs hybrid static+dynamic analysis to enable effective batching with low runtime overheads, and end-to-end code generation to generate highly optimized tensor kernels for efficient execution. While we evaluated these techniques only for the case of batched inference, we believe that they also apply to DL training. In the context of the rising importance of dynamism in DL computations, we believe that ACROBAT is an important step towards more collaborative relationships between various components of a DL framework such as the tensor compiler, the high-level language compiler as well as the runtime." }, { "figure_ref": [], "heading": "A. Dynamic Control Flow in DL computations", "publication_ref": [ "b19", "b37", "b64", "b67", "b20", "b55", "b43", "b21", "b31" ], "table_ref": [ "tab_0" ], "text": "In recent years, highly expressive DL models have often involved dynamism. Below, we take a look at the different kinds of control flow dynamism present in various DL computations in the context of the auto-batching problem.\nNote that given a computation involving control flow, there are often multiple ways to implement it. We consider the most natural way to implement a given computation. For example, a top-down tree traversal can be implemented as a breadth-first traversal (BFS) or a depth-first traversal (DFS). While a BFS traversal maybe more efficient, the DFS-based traversal is more natural to implement. The discussion below is also summarized in Table 2.\nControl Flow Surrounding Static Sub-Graphs: We observe that for most ML computations exhibiting control flow dynamism, the dynamic control flow surrounds tensor computations. Consider the simple sequential RNN model implemented by the @rnn function shown in Listing 1.\nHere, we see that the sequential control flow surrounds an RNN cell on lines 5 and 6, which is a static sub-graph of tensor computations with no intervening control flow.\nTensor-Dependent Control Flow: Control flow decisions often depend on the values of intermediate tensors in ML computations. Examples of such models and computations include beam search in machine translation, StackLSTMs (Dyer et al., 2015), Tree-to-Tree neural networks (T2TNN) (Chen et al., 2018b), models with early exits (Kaya & Dumitras, 2018;Teerapittayanon et al., 2017;Xin et al., 2020;Elbayad et al., 2019), Mixture-of-Experts (Shazeer et al., 2017;Ma et al., 2018;Fedus et al., 2021) and other ML computations such as the No U-Turn Sampler (NUTS) (Hoffman & Gelman, 2011). Meanwhile, in models such as TreeLSTM (Socher et al., 2013a), DAG-RNN, sequential RNNs and their variants, control flow only depends on the inputs and not on intermediate tensors." }, { "figure_ref": [], "heading": "Repetitive Control Flow:", "publication_ref": [ "b13", "b57", "b44" ], "table_ref": [], "text": "We say that a model exhibits repetitive control flow if it can be expressed as an iterative or recursive computation. This includes iterative models such as RNNs and their variants (LSTM and GRU (Cho et al., 2014) for example) and StackLSTMs, and recursive models such as TreeLSTM, Tree-to-Tree neural networks and DAG-RNNs (Shuai et al., 2015). On the other hand, Mixtureof-Experts and early exit models do not exhibit repetitive control flow. Such models contain conditional execution in an otherwise static feed-forward network. Repetitive control flow can often also be nested. The GraphRNN model, for example, executes two RNNs, one nested inside the other.\nSimilarly, the DRNN model, which is used for top-down recursive tree generation, involves iterative generation of children for a given tree node.\nThe presence of recursive, as opposed to iterative control flow, can often complicate static analysis as parallelism is more easily exploited with the latter. We see in §4.2 how exploiting parallelism across recursive calls at runtime, for example, can require the multiple concurrent execution contexts, similar to the fork-join parallelism paradigm (McCool et al., 2012).\nControl-Flow in Training and Inference: We see, in Table 2, that the computation for a lot of the models involve dynamic control flow during both training as well as inference. This is however, not the case for models with early exists, where during training, we often wish to train all the exit branches rather than evaluating one, as is the case during inference. Further, search procedures such as beam search are often used only during inference and hence the underlying model may not exhibit dynamism during training (unless the model computation itself involves dynamism, as in the case of RNN models, for example).\nControl Flow Parallelism: Dynamic control flow can lead to parallelism in DL computations. The amount of such parallelism differs widely across computations. Recursive models, often (though not always) have significant parallelism across different recursive calls. Correspondingly, iterative computations may contain loops that can be executed concurrently. An example is the call to the @map function call in the RNN implementation in Listing 1." }, { "figure_ref": [], "heading": "B. More Details on Hybrid Static+Dynamic Optimizations", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1. Operator Hoisting", "publication_ref": [], "table_ref": [], "text": "Given a recursive computation, such as the @rnn function in Listing 1, often certain tensor operators are not part of the sequential dependency induced by the recursion. For example, the linear transformation of the input on line 5 in Listing 1 can be hoisted out of the recursion. Instead of relying on a runtime scheduling algoritm to identify this as is done in past work, ACROBAT statically discovers such operators that can be hoisted. We achieve this by relying on a 1-context sensitive taint analysis to statically compute depths of such operators. We see, in Listing 2, how the invocation of the kernel bias dense on line 5 is assigned a statically computed depth of 0. During runtime, such operators are thus effectively hoisted out of the recursion. For the RNN example, this allows us to batch the linear transformations for all input word embeddings together rather than execute them one at a time." }, { "figure_ref": [ "fig_2" ], "heading": "B.2. Grain Size Coarsening", "publication_ref": [ "b71", "b69", "b22", "b25", "b58" ], "table_ref": [], "text": "Generally, scheduling is performed at the granularity of individual tensor operators i.e. each node in the DFG cor- responds to one tensor kernel call. We saw in §A, how DL computations frequently contain larger static sub-graphs embedded in the dynamic control flow. Therefore, AC-ROBAT performs scheduling at the coarser granularity of static sub-graphs, thus reducing scheduling overheads. As these blocks do not contain any control flow, coarsening the granularity this way does not lead to a loss of exploited parallelism. This optimization has also been explored in past work (Zha et al., 2019;Xu et al., 2018;Fegade et al., 2021;Gao et al., 2018;Silfa et al., 2020) and is illustrated in Fig. 7." }, { "figure_ref": [ "fig_3" ], "heading": "B.3. Combating Eagerness of Depth Scheduling", "publication_ref": [ "b54" ], "table_ref": [], "text": "We saw in §4.1 how ACROBAT relies on ghost operations and program phases to combating eagerness of depth scheduling. Below, we provide more detailed explanation of the same.\nGhost Operators: In upper panes of Fig. 4, we see that eager batching leads to a sub-optimal batching schedule in the presence of a conditional statement. Specifically, the instances of operator B for inputs Inp1 and Inp2 are batched eagerly and, more importantly, separately from the instances of operator B for inputs Inp3 and Inp4. In the lower panes, we insert a call to a ghost operator leading to an optimal schedule. ACROBAT statically identifies such cases and insert ghost operators as needed. Note that ghost operators merely affect scheduling and are ignored during kernel execution.\nProgram Phases: For our RNN example in Listing 1, in order to exploit the most parallelism for the output operator on line 19, one should wait until all the operators invoked in the @rnn functionhave been executed for all the input instances. This way, all output operators corresponding to all words in all input instances can be executed as one batched kernel invocation. This would require that all these output operators be assigned the same depth. However, this may not be the case as the length of each input sentence may vary. Semantically, we can divide the RNN computation into two semantic stages-the initial recursive computations, and the following output transformations. Given such program phases, ACROBAT schedules and executes operators in one phase before moving on to the next. This way, ACRO-BAT ensures that all the RNN functions are executed for all input instances before moving on to the output operators.\nC. More Details on ACROBAT's Tensor Kernel Generation (Schuster & Paliwal, 1997) computation. Here, we invoke the same @rnn function with different model parameters to implement the forward and backward RNNs. In this case, the tensor operators invoked by the @rnn function will not be statically determined to have any arguments constant across multiple calls, thereby precluding data reuse for the model parameters. In order to remedy this, before generating the batched kernels, ACROBAT recognizes such cases of data reuse (again using a context-sensitive taint analysis) and transitively duplicates the necessary functions to enable data reuse later when generating the batched kernels10 . In the case of the BiRNN example, for instance, ACROBAT will transitively duplicate the @rnn function (including the tensor operators it invokes) and use a different copy of the @rnn function for each of the two forward and backward calls in the listing below. ( * Type annotations are omitted in the listing for simplicity. * ) def @main(f_rnn_bias, f_rnn_i_wt, f_rnn_h_wt, f_rnn_init, b_rnn_bias, b_rnn_i_wt, b_rnn_h_wt, b_rnn_init, inps_list) { let rinps_list = @reverse_list(inps_list); let forward_res = @rnn(inps_list, f_rnn_init, f_rnn_bias, f_rnn_i_wt, f_rnn_h_wt); let backward_res = @rnn(rinps_list, b_rnn_init, b_rnn_bias, b_rnn_i_wt, b_rnn_h_wt); } Reuse Within Static Blocks: Given a tensor operator, the analysis discussed above takes into account parameters shared across calls made by different input instances in the mini-batch. This usually applies to model parameters as they are shared across multiple input instances. It is often the case, however, that multiple calls to the same tensor operator within the same static block share a parameter. For example, this is the case in the commonly used LSTM cell, where the computation of the four gates all involve concurrent linear transformations of the same input vector. In such cases, ACROBAT horizontally fuses such calls in order to exploit the parameter data reuse. This is illustrated in Fig. 8." }, { "figure_ref": [], "heading": "D. More Implementation Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D.1. Tensor Kernel Optimization", "publication_ref": [ "b73", "b72" ], "table_ref": [], "text": "Below, we discuss how ACROBAT relies on TVM's autoscheduler (Zheng et al., 2020) to automatically generate optimized implementations of batched versions of (potentially fused) tensor operators used in the input program.\nAuto-scheduler Operator Priorities: Given a DL computation consisting of a number of tensor operators, the auto-scheduler prioritizes the optimization of tensor operators based on their relative estimated execution cost. Among other factors, this estimated cost is proportional to the number of times the operator is invoked during the execution of the input program. In order to accurately estimate this execution frequency for a given operator in the presence of control flow (such as repetitive or conditional control flow), AC-ROBAT relies on profile-guided optimization (PGO). When PGO is not possible, ACROBAT also provides a simple static analysis to heuristically perform this estimation based on how deeply nested an operator call is in the recursion.\nHandling Variable Loop Extents: Due to the dynamic nature of ACROBAT's scheduling, the loop corresponding to the batch dimension in the generated unoptimized batched kernels has a variable extent (kernel 3 in Fig. 1, for example). In order to optimize these kernels, ACRO-BAT auto-schedules a corresponding kernel with a static loop extent for the batch dimension and automatically applies the generated schedule to the original kernel with the variable extent. Further, when generating code for loops with variable extents, we often have to insert conditional checks in order to avoid out of bounds accesses. We rely on the local padding and local partitioning techniques proposed in DietCode (Zheng et al., 2022) to eliminate these conditional checks when appropriate as they can be severely detrimental to performance" }, { "figure_ref": [], "heading": "D.2. Ahead-of-time Compilation", "publication_ref": [], "table_ref": [], "text": "We saw in §6 that ACROBAT compiles the input Relay computation to C++ in an ahead-of-time fashion. As part of this compilation, ACROBAT lowers all dynamic control flow as well as irregular data structures to native C++ control flow and classes. Relay handles scalars by modeling them as zero dimensional tensors. ACROBAT's AOT compiler lowers such zero-dimensional tensors and common arithmetic operators on them to native C++ scalars as well. We see, in §E.2, that this AOT compilation significantly reduces the execution overheads of dynamic control flow." }, { "figure_ref": [], "heading": "D.3. Other Details", "publication_ref": [], "table_ref": [], "text": "As discussed in §6 of the main text, we prototype AC-ROBAT by extending TVM. We find that TVM's operator fusion pass is limited and is often unable to fuse memory copy operators such as tensor reshape, concatenation and transpositions. Therefore, in our implementations of the DL computations, we manually provide fusion hints to the compiler to force the fusion of such operators with their consumers. Further, our current prototype only supports the functional subset of Relay. Specifically, side-effects via mutable references are currently not supported. ACROBAT's runtime system has been heavily optimized to reduce runtime overheads. We use arena allocation (both on the CPU as well as on the GPU) and asynchronous execution on the GPU. We also batch memory transfer operations between the CPU and GPU when possible to reduce the CUDA API overheads." }, { "figure_ref": [], "heading": "E. Supplementary Evaluation and Additional Details", "publication_ref": [], "table_ref": [], "text": "E.1. Simulating Tensor Dependent Control Flow using Pseudo-randomness\nAs mentioned in §6, we simulate tensor dependent control flow in the NestedRNN, DRNN, Berxit and StackRNN models using pseudo-randomness. We ensure that the pseudorandomness is uniform across the ACROBAT and DyNet implementations by using pre-determined random seeds for a fair comparison. An exception is the DRNN model when inline depth computation is performed. In this case, ACRO-BAT exploits DRNN's recursive instance parallelism using fibers ( §4.2) leading to a change in the random control flow decisions taken. We account for this by presenting the mean execution time across 50 different random seeds." }, { "figure_ref": [], "heading": "E.2. Benefits of AOT Compilation", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "We first look at the benefits of AOT compilation ( §6). The performance of the TreeLSTM, MV-RNN and BiRNN models11 when executed using the Relay VM and ACROBAT's AOT compiler (with the grain size coarsening, gather operator fusion and program phase optimizations turned on) is shown in Table 7. We see that overheads significantly slow down the execution (by up to 13.45×) as compared to the AOT compiled native code for these models. Therefore, for the rest of this section, we evaluate ACROBAT's performance with AOT compilation turned on." }, { "figure_ref": [], "heading": "E.3. Performance Comparison with PyTorch", "publication_ref": [], "table_ref": [], "text": "Fig. 9 compares ACROBAT's performance with that of Py-Torch (v1.9.0a0+gitf096245) for the TreeLSTM, MV-RNN and BiRNN models12 . PyTorch does not perform autobatching and is therefore unable to exploit any available instance or batch parallelism in the evaluated computations. Further, ACROBAT's kernel fusion and other static opti-mizations also increase its performance relative to PyTorch. The speedups are higher for the small model size as compared to the larger model sizes because the relative importance of exploiting instance and batch parallelism is lower for the large model size due to the increased parallelism in individual tensor operators. ACROBAT's relatively worse performance on the BiRNN model as compared to the other two can be attributed to the absence of instance parallelism in BiRNN leading to a lower amount of parallelism that AC-ROBAT can exploit. Similarly, due to TreeLSTM exhibiting a higher amount of static and tensor parallelism as compared to MV-RNN, the relative importance of exploiting instance and batch parallelism is lower, leading to performance lower than that of MV-RNN." }, { "figure_ref": [], "heading": "E.4. More Details about Limitations of DyNet", "publication_ref": [], "table_ref": [ "tab_13", "tab_13" ], "text": "Accurate parameter reuse inference and automated batched kernel generation: As we discussed in §7.2.1, we find that DyNet is unable to batch the execution of tensor operators in certain cases. We provide more details and specific examples below.\nBrittle heuristics: We mentioned in §7.2.1, how DyNet employs brittle heuristics to infer parameter reuse thus leading to missed opportunities. For instance, DyNet heuristically batches multiple instances of the matrix multiplication operator only when the first argument of all the instances is the same tensor. This usually works as the first argument is often a model parameter, usually as part of a linear transformation.\nOur DyNet implementation of the MV-RNN model, however, multiplies two intermediate tensor activations together, as a result of which DyNet is unable to batch instances of this operator, forcing sequential unbatched execution. When we modify DyNet's heuristic for matrix multiplication, its performance improves significantly as shown in Table 8.\nHigh framework development effort: Similarly, we saw that because DyNet heavily relies on vendor libraries, it does not support batching for all tensor operators. For example, DyNet does not support batched execution for the argmax operator, which the StackRNN model uses in order to determine the next parser action in every iteration based on the result of the embedded RNN cell. Similarly, the elementwise multiplication operator, used in the DRNN model, is executed in an unbatched manner when broadcasting needs to be performed. On the other hand, ACROBAT automatically generates optimized batched implementations of these tensor operators.\nWe also find that DyNet is unable to batch calls to the operator that constructs constant valued tensors. We use this operator to initialize the hidden states of tree leaves in the TreeLSTM model. ACROBAT, on the other hand, statically recognizes that a constant valued tensor can be reused and thereby only creates the tensor once. The performance of the TreeLSTM model improves when we exploit this reuse manually in DyNet, as Table 8 shows.\nTable 8 also shows the performance improvement obtained in DyNet for the DRNN model when the instance parallelsism exhibited by the model computation is manually exploited as detailed in §7.2.1 of the main paper." }, { "figure_ref": [], "heading": "E.5. Benefit of PGO in Tensor Kernel Auto-Scheduling", "publication_ref": [], "table_ref": [ "tab_14" ], "text": "We mentioned in §D.1 that ACROBAT uses invocation frequencies (obtained via PGO) to prioritize tensor operator optimization during auto-scheduling. In order to evaluate the benefit of this optimization, we look at the performance of NestedRNN with and without the optimization. This benchmark computation executes 30 iterations of the inner RNN loop per iteration of the outer GRU loop on an average. Therefore, the operators invoked in the RNN loop affect the performance of the benchmark much more than those invoked in the GRU loop. Table 9 shows the execution times of the benchmark with and without PGO for different iterations of the auto-scheduler13 which shows how AC-ROBAT can better prioritize auto-scheduling for the RNN operators with PGO turned on." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by grants from the National Science Foundation, Oracle and IBM, and by the Parallel Data Lab (PDL) Consortium (Amazon, Facebook, Google, Hewlett-Packard Enterprise, Hitachi, IBM, Intel, Microsoft, NetApp, Oracle, Pure Storage, Salesforce, Samsung, Seagate, TwoSigma and Western Digital). We would like to thank Saman Amarasinghe, Dominic Chen, Stephen Chou, Chris Fallin, Graham Neubig, Olatunji Ruwase and the Catalyst Research Group at Carnegie Mellon University for their valuable suggestions and feedback on our work." } ]
Dynamic control flow is an important technique often used to design expressive and efficient deep learning computations for applications such as text parsing, machine translation, exiting early out of deep models and so on. However, the resulting control flow divergence makes batching, an important performance optimization, difficult to perform manually. In this paper, we present ACRO-BAT, a framework that enables efficient automatic batching for dynamic deep learning computations by performing hybrid static+dynamic compiler optimizations and end-to-end tensor code generation. ACROBAT performs up to 8.5× better than DyNet, a state-of-the-art framework for automatic batching, on an Nvidia GeForce RTX 3070 GPU.
ACROBAT: Optimizing Auto-batching of Dynamic Deep Learning at Compile Time
[ { "figure_caption": "Figure 3 .Figure 4 .34Figure 3. Concurrent execution of the unbatched program in the presence of tensor-dependent control flow. Tensor op. Ghost op. Batch", "figure_data": "", "figure_id": "fig_0", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Benefits of different optimizations. The unfused executions of Berxit were killed due to out-of-memory errors.", "figure_data": "", "figure_id": "fig_1", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Grain size coarsening for the @rnn function in Listing 1.", "figure_data": "", "figure_id": "fig_2", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Horizontal fusion promotes parameter reuse.", "figure_data": "", "figure_id": "fig_3", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Control flow properties found in DL computations. Legend: ITE: iterative control flow, REC: recursive control flow, TDC: model exhibits tensor-dependent control flow (where control flow decisions are predicated on values on intermediate tensors), CFP: computation exhibits high control flow parallelism (i.e., interoperator parallelism that arises due to dynamic control flow dependences, such as recursive parallelism), ICF: model inference exhibits control flow, TCF: model training exhibits control flow.", "figure_data": "Deep Learning ComputationsITE REC TDC IFP ICF TCFRNN", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "O.1 The order in which the unbatched program invokes the tensor operators, i.e. the order in which nodes are added to the DFGs, is a valid dependency order. O.2 Information about instance parallelism (for example, recursive parallelism in the TreeLSTM model as seen in Table2) is often available during compilation.Listing 2. AOT compiled output for the RNN model in Listing 1, with inline depth computation code highlighted.", "figure_data": "List<Tensor> rnn(List<Tensor> inps, Tensor state, Tensor bias,Tensor i_wt, Tensor h_wt, int& depth) {if (inps == ListNil()) return ListNil();auto inp_linear = AcrobatRT.InvokeKernel(\"bias_dense\",0, {bias, i_wt, inps.head});auto new_state = AcrobatRT.InvokeKernel(\"sigmoid_add_dense\",depth++ , {inp_linear, h_wt, state});return ListCons(new_state, rnn(inps.tail, state, bias, i_wt,h_wt, depth )); }vector<Tensor> main(Tensor rnn_bias, Tensor rnn_i_wt,Tensor rnn_h_wt, Tensor rnn_init, Tensor c_wt,Tensor cbias, vector<List<Tensor>> inps_vec) {vector<Tensor> res;for (auto inps: inps_vec) {int depth = 0;/ * Recursive computation stage (program phase 1) * /auto rnn_res = rnn(inps, rnn_init, rnn_bias, rnn_i_wt,rnn_h_wt, depth );/ * Output transformations stage (program phase 2) * /depth++;res.push_back(map([&](Tensor p) { AcrobatRT.InvokeKernel(\"relu_bias_dense\", depth , {cbias, c_wt, p}); }, rnn_res)); }return res; }", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Models and datasets used in the evaluation.", "figure_data": "ModelDescriptionDatasetTreeLSTMTreeLSTMStanford sentiment treebank (Socher et al., 2013b)MV-RNNMV-RNNStanford sentiment treebankBiRNNBidirectional RNNsXNLI (Conneau et al., 2018)NestedRNNAn RNN loop nested inside a GRU loopGRU/RNN loops iterate for a random number of iterations in [20, 40].DRNNDoubly recurrent neural networks for top-down tree generationRandomly generated tensors.BerxitEarly exit for BERT inference (Xin et al., 2021). All layers share weights.Sequence length 128.StackRNNStackLSTM parser with LSTM cells replaced by RNN cells.XNLI", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Table5lists the time spent by the frameworks for different runtime activities for the TreeLSTM model. We see that ACROBAT's optimizations such as static kernel fusion and grain size coarsening reduce the number of tensor kernels invoked, thereby significantly reducing DFG construction and scheduling overheads. Further, inline depth computation allows ACRO-BAT to exploit available parallelism with lower overheads. Optimizations such as static kernel fusion and gather operator fusion enable ACROBAT to launch fewer GPU kernels, further reducing the time spent in the CUDA API. We look at the benefits of each of ACROBAT's optimizations in more detail in §7.3.", "figure_data": "", "figure_id": "tab_6", "figure_label": "", "figure_type": "table" }, { "figure_caption": "DyNet vs. ACROBAT: Inference latencies (DyNet/ACROBAT) in ms and speedups. The DyNet implementation of the Berxit model was killed due to out-of-memory errors for a batch size of 64.", "figure_data": "HiddenBatchTreeLSTMMV-RNNBiRNNNestedRNNDRNNBerxitStackRNNSizeSizeTimeSpeedupTimeSpeedupTimeSpeedupTimeSpeedupTimeSpeedupTimeSpeedupTimeSpeedupsmall84.31/1.482.932.11/0.543.963.13/2.161.4529.38/31.010.956.7/1.743.8763.54/38.491.6647.78/22.692.11small6426.18/5.814.5112.45/1.488.4712.04/4.862.4984.55/65.731.2925.3/5.244.84-/204.54-213.98/39.065.48large84.58/2.41.922.27/1.042.193.95/4.430.946.03/35.611.38.44/2.453.45113.18/64.491.7664.67/43.751.48large6426.53/11.442.3313.89/4.463.1312.11/13.110.9394.97/100.170.9526.5/9.992.66-/335.3-230.74/86.822.66", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Time spent (ms) in various activities 1 for DyNet and ACROBAT for batch size 64. The timings reported correspond to multiple runs, and were obtained using manual instrumentation and profiling using Nvidia Nsight Systems. Due to profiling overheads, the execution times may not match the ones in Table4.", "figure_data": "ActivityTreeLSTM, smallBiRNN, largeDyNetACROBATDyNetACROBATDFG construction8.81.54.51.0Scheduling9.70.43.30.4Mem. copy time3.10.12.30.2GPU kernel time 26.14.06.611.2#Kernel calls1653183580380CUDA API time 316.53.912.011.11", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Cortex vs. ACROBAT: Inference latencies in ms. Note that unlike ACROBAT, Cortex is limited to recursive computations, and does not support the other models in Table3. Further, Cortex places a high development burden on its users by relying on manual kernel optimization.", "figure_data": "HiddenBatchTreeLSTMMV-RNNBiRNNSizeSizeCortex ACROBAT Cortex ACROBAT Cortex ACROBATsmall80.791.481.140.541.282.16small643.625.816.921.483.484.86large81.842.45.31.042.474.43large6410.2311.4441.154.4610.7413.11", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Relay VM vs. ACROBAT's AOT compilation: Inference latencies in ms.", "figure_data": "HiddenBatchTreeLSTMMV-RNNBiRNNSizeSizeVMAOTVMAOTVMAOTsmall830.682.664.00.5529.882.23small6428.949.473.911.6328.885.47large831.643.854.341.0632.044.82large6429.4915.94.364.630.4313.72ACRoBat's Speedup0 25 50 75MV-RNN TreeLSTM Small Model Size BiRNN10 20 30Large Model Size1 2 4 8 16 32 64 1281 2 4 8 16 32 64 128Batch sizeBatch sizeFigure 9. Speedups obtained over PyTorch for the TreeLSTM, MV-RNN and BiRNN models.", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Model execution times in ms after the improvements described in §7.2 were made for the TreeLSTM, MV-RNN and DRNN models. DN, DN++ and AB stand for DyNet, DyNet with improvements and ACROBAT respectively.", "figure_data": "ModelBatchTreeLSTMMV-RNNDRNNSizeSizeDN DN++ ABDN DN++ AB DN DN++ ABsmall84.313.81.48 2.111.05 0.54 6.73.29 1.74small6426.18 22.69 5.81 12.45 3.15 1.48 25.3 18.51 5.24large84.584.142.42.271.83 1.04 8.44 3.82 2.45large6426.53 24.09 11.44 13.89 10.47 4.46 26.5 18.86 9.99", "figure_id": "tab_13", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "NestedRNN (small, batch size 8) execution times (without/with PGO), illustrating the benefits of using PGO invocation frequencies during auto-scheduling.Execution times (ms) 41.08/42.49 34.58/30.88 31.61/24.4 27.33/23.72 25.63/24.34 ", "figure_data": "Auto-scheduler iters.1002505007501000", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" } ]
Pratik Fegade; Tianqi Chen; Phillip B Gibbons; Todd C Mowry
[ { "authors": "A Agarwal", "journal": "PMLR", "ref_id": "b0", "title": "Static automatic batching in TensorFlow", "year": "2019-06-15" }, { "authors": "A Agarwal; I Ganichev", "journal": "", "ref_id": "b1", "title": "Auto-vectorizing tensorflow graphs: Jacobians, auto-batching and beyond", "year": "2019" }, { "authors": "A V Aho; M S Lam; R Sethi; J D Ullman", "journal": "Pearson Education India", "ref_id": "b2", "title": "Compilers: principles, techniques, & tools", "year": "2007" }, { "authors": "Aleen ; F Clark; N ", "journal": "SIGARCH Comput. Archit. News", "ref_id": "b3", "title": "Commutativity analysis for software parallelization: Letting program transformations see the big picture", "year": "2009-03" }, { "authors": "D Alvarez-Melis; T Jaakkola", "journal": "", "ref_id": "b4", "title": "Tree-structured decoding with doubly-recurrent neural networks", "year": "2017" }, { "authors": "R Barnes; E Nystrom; M Merten; W Hwu", "journal": "", "ref_id": "b5", "title": "Vacuum packing: extracting hardware-detected program phases for post-link optimization", "year": "2002" }, { "authors": "J Bradbury; C Fu", "journal": "", "ref_id": "b6", "title": "Automatic batching as a compiler pass in pytorch", "year": "2018" }, { "authors": "J Buckman; M Ballesteros; C Dyer", "journal": "", "ref_id": "b7", "title": "Transitionbased dependency parsing with heuristic backtracking", "year": "2016-11" }, { "authors": "T Chen; T Moreau; Z Jiang; L Zheng; E Yan; H Shen; M Cowan; L Wang; Y Hu; L Ceze; C Guestrin; A Krishnamurthy; Tvm", "journal": "USENIX Association", "ref_id": "b8", "title": "An automated end-to-end optimizing compiler for deep learning", "year": "2018-10" }, { "authors": "W Chen; -K; S Bhansali; T Chilimbi; X Gao; W Chuang", "journal": "SIGPLAN Not", "ref_id": "b9", "title": "Profile-guided proactive garbage collection for locality optimization", "year": "2006-06" }, { "authors": "X Chen; X Qiu; C Zhu; X Huang", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Gated recursive neural network for Chinese word segmentation", "year": "2015-07" }, { "authors": "X Chen; C Liu; D Song", "journal": "", "ref_id": "b11", "title": "Tree-to-tree neural networks for program translation", "year": "2018" }, { "authors": "S Chetlur; C Woolley; P Vandermersch; J Cohen; J Tran; B Catanzaro; E Shelhamer; Cudnn", "journal": "", "ref_id": "b12", "title": "Efficient primitives for deep learning", "year": "2014" }, { "authors": "K Cho; B Van Merrienboer; C ¸ Gülc ¸ehre; F Bougares; H Schwenk; Y Bengio", "journal": "", "ref_id": "b13", "title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation", "year": "2014" }, { "authors": "P Community", "journal": "", "ref_id": "b14", "title": "Github issue number 42487: Support recursive data type in TorchScript", "year": "2020-07-25" }, { "authors": "A Conneau; R Rinott; G Lample; A Williams; S R Bowman; H Schwenk; V Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "XNLI: Evaluating cross-lingual sentence representations", "year": "2018" }, { "authors": " Cutlass", "journal": "", "ref_id": "b16", "title": "Gather and Scatter Fusion", "year": "2022-07-25" }, { "authors": "J Devlin; M Chang; K Lee; K Toutanova; Bert", "journal": "", "ref_id": "b17", "title": "pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "A Drozdov; P Verga; M Yadav; M Iyyer; A Mccallum", "journal": "", "ref_id": "b18", "title": "Unsupervised latent tree induction with deep inside-outside recursive autoencoders", "year": "2019" }, { "authors": "C Dyer; M Ballesteros; W Ling; A Matthews; N A Smith", "journal": "", "ref_id": "b19", "title": "Transition-based dependency parsing with stack long short-term memory", "year": "2015" }, { "authors": "M Elbayad; J Gu; E Grave; M Auli", "journal": "", "ref_id": "b20", "title": "Depth-adaptive transformer", "year": "2019" }, { "authors": "W Fedus; B Zoph; N Shazeer", "journal": "", "ref_id": "b21", "title": "Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity", "year": "2021" }, { "authors": "P Fegade; T Chen; P Gibbons; T Mowry", "journal": "", "ref_id": "b22", "title": "Cortex: A compiler for recursive deep learning models", "year": "2021" }, { "authors": " Pdf; Fisher", "journal": "IEEE Transactions on Computers, C", "ref_id": "b23", "title": "Trace scheduling: A technique for global microcode compaction", "year": "1981" }, { "authors": "R Frostig; M Johnson; C Leary", "journal": "", "ref_id": "b24", "title": "Compiling machine learning programs via high-level tracing", "year": "2018" }, { "authors": "P Gao; L Yu; Y Wu; J Li", "journal": "Association for Computing Machinery", "ref_id": "b25", "title": "Low latency rnn inference with cellular batching", "year": "2018" }, { "authors": "P B Gibbons; S S Muchnick", "journal": "", "ref_id": "b26", "title": "Efficient instruction scheduling for a pipelined architecture", "year": "1986" }, { "authors": "R B Girshick; R-Cnn Fast", "journal": "", "ref_id": "b27", "title": "", "year": "2015" }, { "authors": "R B Girshick; J Donahue; T Darrell; J Malik", "journal": "", "ref_id": "b28", "title": "Rich feature hierarchies for accurate object detection and semantic segmentation", "year": "2013" }, { "authors": "R Gupta; E Mehofer; Y Zhang", "journal": "", "ref_id": "b29", "title": "Profile guided compiler optimizations", "year": "2002" }, { "authors": "S Hochreiter; J Schmidhuber", "journal": "Neural Comput", "ref_id": "b30", "title": "Long short-term memory", "year": "1997-11" }, { "authors": "M D Hoffman; A Gelman", "journal": "", "ref_id": "b31", "title": "The no-u-turn sampler: Adaptively setting path lengths in hamiltonian monte carlo", "year": "2011" }, { "authors": "G Hogen; A Kindler; R Loogen", "journal": "Springer-Verlag", "ref_id": "b32", "title": "Automatic parallelization of lazy functional programs", "year": "1992" }, { "authors": "M Huang; J Renau; J Torrellas", "journal": "", "ref_id": "b33", "title": "Profile-based energy reduction in high-performance processors", "year": "2001" }, { "authors": "W Huang; Y Dong; A Milanova; J Dolby", "journal": "", "ref_id": "b34", "title": "Scalable and precise taint analysis for Android", "year": "2015" }, { "authors": "", "journal": "", "ref_id": "b35", "title": "Intel. Intel oneAPI Deep Neural Network Library", "year": "2022-07-01" }, { "authors": "E Jeong; S Cho; G.-I Yu; J S Jeong; D.-J Shin; B.-G Chun; Janus", "journal": "USENIX Association", "ref_id": "b36", "title": "Fast and flexible deep learning via symbolic graph execution of imperative programs", "year": "2019-02" }, { "authors": "Y Kaya; T Dumitras", "journal": "", "ref_id": "b37", "title": "How to stop off-the-shelf deep neural networks from overthinking", "year": "2018" }, { "authors": "T Kim; E Jeong; G.-W Kim; Y Koo; S Kim; G Yu; B.-G Chun; Terra", "journal": "", "ref_id": "b38", "title": "Imperativesymbolic co-execution of imperative deep learning programs", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b39", "title": "", "year": "2021" }, { "authors": "P Koehn", "journal": "Springer", "ref_id": "b40", "title": "Pharaoh: a beam search decoder for phrase-based statistical machine translation models", "year": "2004" }, { "authors": "C Lattner; M Amini; U Bondhugula; A Cohen; A Davis; J Pienaar; R Riddle; T Shpeisman; N Vasilache; O Zinenko; Mlir", "journal": "", "ref_id": "b41", "title": "A compiler infrastructure for the end of Moore's law", "year": "2020" }, { "authors": "M Looks; M Herreshoff; D Hutchins; P Norvig", "journal": "", "ref_id": "b42", "title": "Deep learning with dynamic computation graphs", "year": "2017" }, { "authors": "J Ma; Z Zhao; X Yi; J Chen; L Hong; Chi ; E H ", "journal": "", "ref_id": "b43", "title": "Modeling task relationships in multi-task learning with multi-gate mixture-of-experts", "year": "2018" }, { "authors": "M Mccool; A D Robison; J Reinders", "journal": "Morgan Kaufmann", "ref_id": "b44", "title": "Chapter 8 -fork-join", "year": "2012" }, { "authors": "G Neubig; C Dyer; Y Goldberg; A Matthews; W Ammar; A Anastasopoulos; M Ballesteros; D Chiang; D Clothiaux; T Cohn; K Duh; M Faruqui; C Gan; D Garrette; Y Ji; L Kong; A Kuncoro; G Kumar; C Malaviya; P Michel; Y Oda; M Richardson; N Saphra; S Swayamdipta; P Yin; Dynet", "journal": "", "ref_id": "b45", "title": "The dynamic neural network toolkit", "year": "2017" }, { "authors": "G Neubig; Y Goldberg; C Dyer; A Paszke; S Gross; F Massa; A Lerer; J Bradbury; G Chanan; T Killeen; Z Lin; N Gimelshein; L Antiga; A Desmaison; A Kopf; E Yang; Z Devito; M Raison; A Tejani; S Chilamkurthy; B Steiner; L Fang; J Bai; S Chintala", "journal": "", "ref_id": "b46", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2017" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b47", "title": "", "year": "2019" }, { "authors": "D Ponomarev; G Kucuk; K Ghose", "journal": "", "ref_id": "b48", "title": "Reducing power requirements of instruction scheduling through dynamic allocation of multiple datapath resources", "year": "2001" }, { "authors": " Pytorch; Torchscript", "journal": "", "ref_id": "b49", "title": "", "year": "2020-09-09" }, { "authors": "Y Qiao; K Taura", "journal": "", "ref_id": "b50", "title": "An automatic operation batching strategy for the backward propagation of neural networks having dynamic computation graphs", "year": "2019" }, { "authors": "A Radul; B Patton; D Maclaurin; M Hoffman; A Saurous; R ", "journal": "", "ref_id": "b51", "title": "Automatically batching control-intensive programs for modern accelerators", "year": "2020" }, { "authors": "J Roesch; S Lyubomirsky; M Kirisame; L Weber; J Pollock; L Vega; Z Jiang; T Chen; T Moreau; Z Tatlock", "journal": "", "ref_id": "b52", "title": "Relay: A high-level compiler for deep learning", "year": "2019" }, { "authors": "D E Rumelhart; G E Hinton; R J Williams", "journal": "nature", "ref_id": "b53", "title": "Learning representations by back-propagating errors", "year": "1986" }, { "authors": "M Schuster; K K Paliwal", "journal": "IEEE transactions on Signal Processing", "ref_id": "b54", "title": "Bidirectional recurrent neural networks", "year": "1997" }, { "authors": "N Shazeer; A Mirhoseini; K Maziarz; A Davis; Q V Le; G E Hinton; J Dean", "journal": "", "ref_id": "b55", "title": "Outrageously large neural networks: The sparsely-gated mixture-of-experts layer", "year": "2017" }, { "authors": "T Sherwood; S Sair; B Calder", "journal": "ACM SIGARCH Computer Architecture News", "ref_id": "b56", "title": "Phase tracking and prediction", "year": "2003" }, { "authors": "B Shuai; Z Zuo; G Wang; B Wang", "journal": "", "ref_id": "b57", "title": "Dagrecurrent neural networks for scene labeling", "year": "2015" }, { "authors": "F Silfa; J Arnau; A González; E-Batch", "journal": "", "ref_id": "b58", "title": "energyefficient and high-throughput RNN batching", "year": "2020" }, { "authors": "J Smith", "journal": "Computer", "ref_id": "b59", "title": "Dynamic instruction scheduling and the astronautics zs-1", "year": "1989" }, { "authors": "R Socher; B Huval; C D Manning; A Y Ng", "journal": "", "ref_id": "b60", "title": "Semantic Compositionality Through Recursive Matrix-Vector Spaces", "year": "2012" }, { "authors": "R Socher; A Perelygin; J Wu; J Chuang; C D Manning; A Ng; C Potts", "journal": "", "ref_id": "b61", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013-10" }, { "authors": "R Socher; A Perelygin; J Wu; J Chuang; C D Manning; A Y Ng; C Potts", "journal": "", "ref_id": "b62", "title": "Recursive deep models for semantic compositionality over a sentiment treebank", "year": "2013" }, { "authors": "A Suhan; D Libenzi; A Zhang; P Schuh; B Saeta; J Y Sohn; D Shabalin", "journal": "", "ref_id": "b63", "title": "Lazytensor: combining eager execution with domain-specific compilers", "year": "2021" }, { "authors": "S Teerapittayanon; B Mcdanel; H T Kung; Branchynet", "journal": "", "ref_id": "b64", "title": "Fast inference via early exiting from deep neural networks", "year": "2017" }, { "authors": "O Tripp; M Pistoia; S J Fink; M Sridharan; O Weisman; Taj", "journal": "SIGPLAN Not", "ref_id": "b65", "title": "Effective taint analysis of web applications", "year": "2009-06" }, { "authors": "S Wiseman; A M Rush", "journal": "", "ref_id": "b66", "title": "Sequence-to-sequence learning as beam-search optimization", "year": "2016" }, { "authors": "J Xin; R Tang; J Lee; Y Yu; J Lin; Deebert", "journal": "", "ref_id": "b67", "title": "Dynamic early exiting for accelerating BERT inference", "year": "2020" }, { "authors": "J Xin; R Tang; Y Yu; J Lin; Berxit", "journal": "", "ref_id": "b68", "title": "Early exiting for bert with better fine-tuning and extension to regression", "year": "2021" }, { "authors": "S Xu; H Zhang; G Neubig; W Dai; J K Kim; Z Deng; Q Ho; G Yang; E P Xing; Cavs", "journal": "USENIX Association", "ref_id": "b69", "title": "An efficient runtime system for dynamic neural networks", "year": "2018-07" }, { "authors": "J You; R Ying; X Ren; W L Hamilton; J Leskovec; Graphrnn", "journal": "", "ref_id": "b70", "title": "A deep generative model for graphs", "year": "2018" }, { "authors": "S Zha; Z Jiang; H Lin; Z Zhang", "journal": "", "ref_id": "b71", "title": "Just-in-time dynamic-batching", "year": "2019" }, { "authors": "B Zheng; Z Jiang; C H Yu; H Shen; J Fromm; Y Liu; Y Wang; L Ceze; T Chen; G Pekhimenko", "journal": "", "ref_id": "b72", "title": "Dietcode: Automatic optimization for dynamic tensor programs", "year": "2022" }, { "authors": "L Zheng; C Jia; M Sun; Z Wu; C H Yu; A Haj-Ali; Y Wang; J Yang; D Zhuo; K Sen; J E Gonzalez; I Stoica; Ansor", "journal": "", "ref_id": "b73", "title": "Generating highperformance tensor programs for deep learning", "year": "2020-11" } ]
[ { "formula_coordinates": [ 3, 392.8, 161.76, 90.6, 17.92 ], "formula_id": "formula_0", "formula_text": "= blockIdx.x // [0,BS] i = threadIdx.x // [0,256] O_ptr[b][i] = bias[i] + input_ptr[b][i] + state_ptr[b][i]" }, { "formula_coordinates": [ 4, 223.42, 466.21, 74.15, 65.16 ], "formula_id": "formula_1", "formula_text": "funA() { concurrent { funA(); funA(); } funC(); } Figure 2. Concurrent call annotation." } ]
10.18653/v1/D18-1516
2023-10-20
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b20", "b29", "b38", "b22", "b3", "b24", "b33", "b18", "b23", "b16", "b36" ], "table_ref": [], "text": "Knowledge Graphs (KGs) are prevalent resources for representing real-world facts in a structured way. While traditionally, KGs have been utilized for representing static snapshots of \"current\" knowledge, recently, temporal KGs (TKGs) have gained popularity to preserve the complex temporal dynamics proaches such as employing graph neural networks to model interrelationships among entities and relations (Jin et al., 2020;Li et al., 2021;Han et al., 2021b,a), using reinforcement learning techniques (Sun et al., 2021), and utilizing logical rules (Zhu et al., 2021;Liu et al., 2022). However, these techniques have prominent limitations, including the need for large amounts of training data that include thorough historical information for the entities. Additionally, model selection is a computationally expensive challenge as the stateof-the-art approach differs for each dataset.\nIn this paper, we develop a TKG forecasting approach by casting the task as an in-context learning (ICL) problem using large language models (LLMs). ICL refers to the capability of LLMs to learn and perform an unseen task efficiently when provided with a few examples of input-label pairs in the prompt (Brown et al., 2020). Prior works on ICL usually leverage few-shot demonstrations, where a uniform number of examples are provided for each label to solve a classification task (Min et al., 2022;Wei et al., 2023). In contrast, our work investigates what the model learns from irregular patterns of historical facts in the context. We design a three-stage pipeline to control (1) the background knowledge selected for context, (2) the prompting strategy for forecasting, and (3) decoding the output into a prediction. The first stage uses the prediction query to retrieve a set of relevant past facts from the TKG that can be used as context (Section 3.1). The second stage transforms these contextual facts into a lexical prompt representing the prediction task (Section 3.3). The third stage decodes the output of the LLM into a probability distribution over the entities and generates a response to the prediction query (Section 3.4). Our experimental evaluation performs competitively across a diverse collection of TKG benchmarks without requiring the time-consuming supervised training, or custom-designed architectures.\nWe present extensive experimental results on common TKG benchmark datasets such as WIKI (Leblay and Chekol, 2018), YAGO (Mahdisoltani et al., 2014), andICEWS (García-Durán et al., 2018;Jin et al., 2020). Our findings are as follows: (1) LLMs demonstrate the ability to make predictions about future facts using ICL without requiring any additional training. Moreover, these models show comparable performance to supervised approaches, falling within the (-3.6%, +1.5%) Hits@1 margin, relative to the median approach for each dataset; (2) LLMs perform almost identically when we replace entities' and relations' lexical names with numerically mapped indices, suggesting that the prior semantic knowledge is not a critical factor for achieving such a high performance; and (3) LLMs outperform the best heuristic rule-based baseline on each dataset (i.e., the most frequent or the most recent, given the historical context) by (+10%, +28%) Hits@1 relative margin, indicating that they do not simply select the output using frequency or recency biases in ICL (Zhao et al., 2021)." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b3" ], "table_ref": [], "text": "In-Context Learning. ICL is an emergent capability of LLMs that aims to induce a state in the model to perform a task by utilizing contextual input-label examples, without requiring changes to its internal parameters (Brown et al., 2020). Formally, in ICL for classification, a prompt is constructed by linearizing a few input-output pair examples (x i , y i ) from the training data. Subsequently, when a new test input text x test is provided, ICL generates the output y test ∼ P LLM (y test | x 1 , y 1 , . . . , x k , y k , x test ) where ∼ refers to decoding strategy.\nTemporal Knowledge Graph Forecasting. Formally, a TKG, G = (V, R, E, T ), is comprised of a set of entities V, relations R, facts E, and timestamps T . Moreover, since time is sequential, G can be split into a sequence of time-stamped snapshots, G = {G 1 , G 2 , . . . , G t , . . .}, where each snapshot, G t = (V, R, E t ), contains the facts at a specific point in time t. Each fact f ∈ E t is a quadruple (s, p, o, t) where s, o ∈ V, p ∈ R, and t ∈ T . The TKG forecasting task involves predicting a temporally conditioned missing entity in the future given a query quadruple, (?, p, o, t) or (s, p, ?, t), and previous graph snapshots\nG 1:t-1 = {G 1 , G 2 , . . . , G t-1 }.\nHere, the prediction typically involves ranking each entity's assigned score." }, { "figure_ref": [], "heading": "In-context Learning for Temporal Knowledge Graph Forecasting", "publication_ref": [], "table_ref": [], "text": "In this work, we focus on 1) modeling appropriate history E q for a given query quadruple q, 2) converting {E q , q} into a prompt θ q , and 3) employing ICL to get prediction y q ∼ P LLM (y q | θ q ) in a zero-shot manner. Here, the history E q is modeled on the facts from the previous graph snapshots\nG 1:t-1 = {G 1 , G 2 , .\n. . , G t-1 }, and we employ token probabilities for y q to get ranked scores of candidate entities in a zero-shot manner. In the rest of this section, we study history modeling strategies (Sec 3.1), response generation approaches (Sec 3.2), prompt construction templates (Sec 3.3), and common prediction settings (Sec 3.4)." }, { "figure_ref": [], "heading": "History Modeling", "publication_ref": [], "table_ref": [], "text": "To model the history E q , we filter facts that the known entity or relation in the query q has been involved in. Specifically, given the query quadruple q = (s, p, ?, t) under the object entity prediction setting, we experiment with two different aspects of historical facts:\nEntity vs. Pair. Entity includes past facts that contain s, e.g., all historical facts related to Superbowl. In contrast, Pair includes past facts that contain both s and p, e.g., a list of (Superbowl, Champion, Year) as shown in Table 1.\nUnidirectional vs. Bidirectional. Unidirectional includes past facts F wherein s (Entity) or (s, p) (Pair) is in the same position as it is in q (e.g., Unidirectional & Pair -s and p served as subject and predicate in f ∈ F). Bidirectional includes past facts F wherein s (Entity) or (s, p) (Pair) appear in any valid position (e.g., Bidirectional & Entity -s served as subject or object in f ∈ F). As an example of the Bidirectional setting, given q = (Superbowl, Champion, ?, 2023), we include f = (Kupp, Played, Superbowl, 2022) because s (i.e., Superbowl) is present as the object in f . Moreover, in the Bidirectional setting, to preserve the semantics of the facts in the E q , we transform the facts where s appears as an object by 1) swapping the object and subject and 2) replacing the relation with its uniquely defined inverse relation (e.g.,\n(f s , f p , f o , f t ) → (f o , f -1 p , f s , f t ))." }, { "figure_ref": [], "heading": "Response Generation", "publication_ref": [ "b21" ], "table_ref": [], "text": "Given a prompt θ q , we pass it to an LLM to obtain the next token probabilities. Then, we use the obtained probabilities to get a ranked list of entities. However, obtaining scores for entities based on these probabilities is challenging as they may be composed of several tokens. To address this challenge, we utilize a mapped numerical label as an indirect logit to estimate their probabilities (Lin et al., 2022). " }, { "figure_ref": [], "heading": "Prompt Construction", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Given the history E q and query q, we construct a prompt using a pre-defined template θ. Specifically, given the query quadruple q = (s, p, ?, t) under the object entity prediction setting, we present two versions of the template θ with varying levels of information. Our assumption is that each entity or relation has an indexed I(•) (e.g., 0) and a lexical L(•) (e.g., Superbowl) form (See Table 2).\nIndex. Index displays every fact,\n(f s , f p , f o , f t ) ∈ E using the \"f t :[I(f s ), I(f p ), n fo . I(f o )]\" tem- plate where f s , f o ∈ V, f p ∈ R, f t ∈ T\n, n fo denotes an incrementally assigned numerical label (i.e., indirect logit), and I is a mapping from entities to unique indices. For example, in Table 1, we can use the following mappings are for the entities and relations, respectively: {Superbowl → 0, St Louis → 1, Baltimore → 2} and {Champion → 0}. The query q is then represented as \"t:[I(s), I(p),\", concatenated to the end of the prompt. For subject entity prediction, we follow the same procedure from the other side.\nLexical. Lexical follows the same process as Index but uses lexical form L(•) of entity and relation. Each fact in\n(f s , f p , f o , f t ) ∈ E is represented as \"f t :[L(f s ), L(f p ), n fo . L(f o )]\"\nand the query q is represented as \"t:[L(s), L(p),\", concatenated to the end of the prompt." }, { "figure_ref": [], "heading": "Prediction Setting", "publication_ref": [], "table_ref": [], "text": "All the historical facts in the dataset are split into three subsets, D train , D valid , and D test , based on the chronological order with train < valid < test. Given this split, during the evaluation phase, the TKG forecasting task requires models to predict over D test under the following two settings:" }, { "figure_ref": [], "heading": "Single", "publication_ref": [], "table_ref": [], "text": "Step. timestamp, the ground truth fact for that query is added to the history before moving to the test queries in the next timestamp." }, { "figure_ref": [], "heading": "Multi", "publication_ref": [], "table_ref": [], "text": "Step. In this setting, the model is not provided with ground truth facts from past timestamps in the test period and has to rely on its noisy predictions. Hence, after making predictions for a test query in a specific timestamp, instead of the ground truth fact for that query, we add the predicted response to the history before moving to the test queries in the next timestamp. This setting is considered more difficult as the model is forced to rely on its own noisy predictions, which can lead to greater uncertainty with each successive timestamp.\n4 Experimental Setup" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b18", "b23", "b6", "b16", "b8" ], "table_ref": [ "tab_1" ], "text": "For our experiments, we use the WIKI (Leblay and Chekol, 2018), YAGO (Mahdisoltani et al., 2014), ICEWS14 (García-Durán et al., 2018), and ICEWS18 (Jin et al., 2020) benchmark datasets with the unified splits introduced in previous studies (Gastinger et al., 2022). Additionally, we extract a new temporal forecasting dataset from the Armed Conflict Location & Event Data Project (ACLED) project 2 which provides factual data of crises in a particular region. We specifically focus on incidents of combat and violence against civilians in Cabo Delgado from January 1900 to March 2022, using data from October 2021 to March 2022 as our test set. This dataset aims to investigate whether LLMs leverage prior semantic knowledge to make predictions and how effective they are when deployed in real-world applications. Table 3 presents the statistics of these datasets.\n2 https://data.humdata.org/organization/acled Model Family Model Name # Params Instruction-tuned\nGPT2 gpt2 124M ✗ gpt2-medium 355M ✗ gpt2-large 774M ✗ gpt2-xl 1.5B ✗ GPT-J gpt-j-6b 6B ✗ GPT-NeoX gpt-neox-20b 20B ✗ InstructGPT gpt-3.5-turbo - ✓\nTable 4: Language Models used in the paper. Exact model size of gpt-3.5-turbo is unknown." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b8" ], "table_ref": [], "text": "We evaluate the models on well-known metrics for link prediction: Hits@k, with k = 1, 3, 10. Following (Gastinger et al., 2022), we report our results in two evaluation settings: 1) Raw retrieves the sorted scores of candidate entities for a given query quadruple and calculates the rank of the correct entity; and 2) Time-aware filter also retrieves the sorted scores but removes the entities that are valid predictions before calculating the rank, preventing them from being considered errors. To illustrate, if the test query is (NBA, Clinch Playoff, ?, 2023) and the true answer is Los Angeles Lakers, there may exist other valid predictions such as (NBA, Clinch Playoff, Milwaukee Bucks, 2023) or (NBA, Clinch Playoff, Boston Celtics, 2023). In such cases, the time-aware filter removes these valid predictions, allowing for accurate determination of the rank of the \"Los Angeles Lakers.\" In this paper, we present performance with the time-aware filter." }, { "figure_ref": [], "heading": "Models.", "publication_ref": [ "b26", "b31", "b2", "b26", "b16", "b20", "b29", "b38", "b22", "b8" ], "table_ref": [], "text": "As shown in Table 4, we perform experiments on four language model families. Among those, three are open-sourced: GPT2 (Radford et al., 2019), GPT-J (Wang, 2021), and GPT-NeoX (Black et al., 2022). All models employ the GPT-2 byte level BPE tokenizer (Radford et al., 2019) with nearly identical vocabulary size. In addition, we use the gpt-3.5-turbo model to analyze the performance of the instruction-tuned models. However, we do not directly compare this model to other models in terms of size since the actual model size is unknown. As for the TKG baselines, (i.e., RE-Net (Jin et al., 2020), RE-GCN (Li et al., 2021), TANGO (Han et al., 2021b), xERTE (Han et al., 2021a), TimeTraveler (Sun et al., 2021), CyGNet (Zhu et al., 2021), and TLogic (Liu et al., 2022)), we report the numbers presented in prior research (Gastinger et al., 2022). Appendix A.4 provides more details on baseline models." }, { "figure_ref": [], "heading": "Single-Step", "publication_ref": [ "b25", "b34" ], "table_ref": [], "text": "Train YAGO WIKI ICEWS14 ICEWS18 ACLED-CD22\nH@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 4.4 ICL Implementation Details.\nRE-GCN ✓ 0.\nWe implement our frameworks using Py-Torch (Paszke et al., 2019) and Huggingface (Wolf et al., 2020). We first collate the facts f ∈ D test based on the identical test query to eliminate any repeated inference. To illustrate, suppose there exist two facts in the test set denoted as (s, p, a, t) and (s, p, b, t) in the object prediction scenario.\nWe consolidate these facts into (s, p, [a, b], t) and forecast only one for (s, p, ?, t). Subsequently, we proceed to generate an output for each test query with history by utilizing the model, obtaining the probability for the first generated token in a greedy approach, and sorting the probability. The outputs are deterministic for every iteration.\nWe retain the numerical tokens corresponding to the numerical label n that was targeted, selected from the top 100 probability tokens for each test query. To facilitate multi-step prediction, we incorporate the top-k predictions of each test query as supplementary reference history. In this paper, we present results with k = 1. It is important to acknowledge that the prediction may contain minimal or no numerical tokens as a result of inadequate in-context learning. This can lead to problems when evaluating rank-based metrics. To mitigate this, we have established a protocol where the rank of the actual value within the predictions is assigned a value of 100, which is considered incorrect according to our evaluation metric.\nFor instruction-tuned model, we use the manual curated system instructions in Appendix A.3." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "In-context learning for TKG Forecasting", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In this section, we present a multifaceted performance analysis of ICL under Index & Unidirection prompt strategy for both Entity and Pair history.\nQ1 we run a comparative analysis between GPT-NeoX and heuristic-rules (i.e., frequency & recency) on the ICEWS14 dataset, with history length set to 100. frequency identifies the target that appears most frequently in the provided history while recency selects the target associated with the most recent fact in the provided history. The reason for our focus on ICEWS is that each quadruple represents a single distinct event in time. In contrast, the process of constructing YAGO and WIKI involves converting durations to two timestamps to display events across the timeline. This step has resulted in recency heuristics outperforming all of the existing models, showcasing the shortcoming of existing TKG benchmarks (See Appendix A.5). The experimental results presented in Table 6 demonstrate that ICL exhibits superior performance to rule-based baselines. This finding suggests that ICL does not solely rely on specific biases to make predictions, but rather it actually learns more sophisticated patterns from historical data.\nQ3: How does ICL use the sequential and temporal information of events? To assess the ability of LLMs to comprehend the temporal information of historical events, we compare the performance of prompts with and without timestamps. Specifically, we utilize the original prompt format, \"f t :[I(f s ), I(f r ), n fo . I(f o )]\", and the time-removed prompt format, \"[I(f s ), I(f r ), n fo . I(f o )]\", make the comparison (See Appendix A.2). Additionally, we shuffle the historical facts in the time-removed prompt format to see how the model is affected by the corruption of sequential information. Figure 1 shows that the absence of time reference can lead to a deterioration in performance, while the random arrangement of historical events may further exacerbate this decline in performance. This observation implies that the model has the capability to forecast the subsequent event by comprehending the sequential order of events." }, { "figure_ref": [ "fig_3", "fig_0", "fig_1" ], "heading": "Single-Step Prompt ICEWS14", "publication_ref": [], "table_ref": [], "text": "H@1 gpt-3.5-turbo index 0.1615 gpt-3.5-turbo lexical 0.1858\nTable 7: Performance (Hits@1) between index and lexical for gpt-3.5-turbo.\nQ4: How does instruction-tuning affect ICL's performance? To investigate the impact of instruction-tuning on ICL, we employ the gpt-3.5-turbo model with manually curated system instruction detailed in Appendix 4.4. Since the size of this model is not publicly disclosed, it is challenging to make direct comparisons with other models featured in this paper. Moreover, since this model does not provide output probabilities, we are only able to report the Hit@1 metric. Table 7 showcases that the performance of the lexical prompts exceeds that of the index prompts by 0.024, suggesting that instruction-tuned models can make better use of semantic priors. This behavior is different from the other foundation LLMs, where the performance gap between the two prompt types was insignificant (See Figure 4 (a)).\nQ5: How does history length affect ICL's performance? To evaluate the impact of the history length provided in the prompt, we conduct a set of experiments using varying history lengths. For this purpose, we use the best performing prompt format for each benchmark, i.e., Entity for WIKI, YAGO, ICEWS18, and Pair for ICEWS14. Our results, as shown in Figure 2, indicate a consistent improvement in performance as the history length increases. This suggests that the models learn better as additional historical facts are presented. This observation is connected to few-shot learning in other domains, where performance improves as the number of examples per label increases. However, in our case, the historical patterns presented in the prompt do not explicitly depict the input-label mapping but rather aid in inferring the next step.\nQ6: What is the relation between ICL's performance and model size? Here, we analyze the connection between model size and performance. Our results, as presented in Figure 3, conform to the expected trend of better performance with larger models. This finding aligns with prior works showing the scaling law of in-context learning performance. Our findings are still noteworthy since they show how scaling model size can facilitate more powerful pattern inference for forecasting tasks." }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Prompt Construction for TKG Forecasting", "publication_ref": [ "b31", "b33" ], "table_ref": [], "text": "To determine the most effective prompt variation, we run a set of experiments on all prompt variations, using GPT-J (Wang, 2021) and under the singlestep setting. Comprehensive results for prompt variations can be found in Appendix A.5.\nIndex vs. Lexical Our first analysis compares the performance of index and lexical prompts. This investigation aims to determine whether the model relies solely on input-label mappings or if it also incorporates semantic priors from pre-training to make predictions. Our results (Figure 4 (a)) show that the performance is almost similar (±4e -3 on average) across the datasets. This finding is aligned with previous studies indicating that foundation models depend more on input-label mappings and are minimally impacted by semantic priors (Wei et al., 2023).\nUnidirectional vs. Bidirectional We next analyze how the relation direction in the history modeling impacts the performance. This analysis aims to ascertain whether including historical facts, where the query entity or pair appears in any position, can improve performance by offering a diverse array of historical facts. Our results (Figure 4 (b)) show that there is a slight decrease in performance when Bidirectional history is employed, with a significant drop in performance observed particularly in the ICEWS benchmarks. These observations may be attributed to the considerably more significant number of entities placed in both subject and object positions in ICEWS benchmarks than YAGO and WIKI benchmarks (See Appendix A.5). This finding highlights the necessity of having robust constraints on the historical data for ICL to comprehend the existing pattern better. Entity vs. Pair Finally, we examine the impact of the history retrieval query on performance. Our hypothesis posits that when the query is limited to a single entity, we can incorporate more diverse historical facts. Conversely, when the query is a pair, we can acquire a more focused set of historical facts related to the query. Our results (Figure 4 (c)) indicate that the performance of the model is dependent on the type of data being processed. Specifically, the WIKI and ICEWS18 benchmarks perform better when the query is focused on the entity, as a broader range of historical facts is available. In contrast, the ICEWS14 benchmark performs better when the query is focused on pairs, as the historical facts present a more focused pattern." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b13", "b32", "b1", "b39", "b14", "b19", "b28", "b39", "b15", "b18", "b6", "b9", "b17", "b16", "b20", "b29", "b38", "b22", "b3", "b5", "b24", "b27", "b35", "b4", "b10", "b33", "b24", "b27", "b35", "b10", "b7", "b33" ], "table_ref": [], "text": "Event Forecasting. Forecasting is a complex task that plays a crucial role in decision-making and safety across various domains (Hendrycks et al., 2021). To tackle this challenging task, researchers have explored various approaches, including statistical and judgmental forecasting (Webby and O'Connor, 1996;Armstrong, 2001;Zou et al., 2022). Statistical forecasting involves leveraging probabilistic models (Hyndman and Khandakar, 2008) or neural networks (Li et al., 2018;Sen et al., 2019) to predict trends over time-series data. While this method works well when there are many past observations and minimal distribution shifts, it is limited to numerical data and may not capture the underlying causal factors and dependencies that affect the outcome. On the other hand, judgmental forecasting involves utilizing diverse sources of information, such as news articles and external knowledge bases, to reason and predict future events. Recent works have leveraged language models to enhance reasoning capabilities when analyzing unstructured text data to answer forecasting inquiries (Zou et al., 2022;Jin et al., 2021). (Leblay and Chekol, 2018;García-Durán et al., 2018;Goel et al., 2020;Lacroix et al., 2020);\n(2) Extrapolation aims to predict future facts beyond t n . Recent studies have treated TKGs as a sequence of snapshots, each containing facts corresponding to a timestamp t i , and proposed solutions by modeling multi-relational interactions among entities and relations over these snapshots using graph neural networks (Jin et al., 2020;Li et al., 2021;Han et al., 2021b,a), reinforcement learning (Sun et al., 2021) or logical rules (Zhu et al., 2021;Liu et al., 2022). In our work, we focus on the extrapolation setting.\nIn-context Learning. In-context learning (ICL) has enabled LLMs to accomplish diverse tasks in a few-shot manner without needing parameter adjustments (Brown et al., 2020;Chowdhery et al., 2022).\nIn order to effectively engage in ICL, models can leverage semantic prior knowledge to accurately predict labels following the structure of in-context exemplars (Min et al., 2022;Razeghi et al., 2022;Xie et al., 2022;Chan et al., 2022;Hahn and Goyal, 2023), and learn the input-label mappings from the in-context examples presented (Wei et al., 2023).\nTo understand the mechanism of ICL, recent studies have explored the ICL capabilities of LLMs with regards to the impact of semantic prior knowl-edge by examining their correlation with training examples (Min et al., 2022;Razeghi et al., 2022;Xie et al., 2022), data distribution (Chan et al., 2022), and language compositionality (Hahn and Goyal, 2023) in the pre-training corpus. Other recent works show that LLMs can actually learn input-label mappings from in-context examples by showing the transformer models trained on specific linear function class is actually predicting accurately on new unseen linear functions (Garg et al., 2022). More recently, there is a finding that largeenough models can still do ICL using input-label mappings when semantic prior knowledge is not available (Wei et al., 2023)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we examined the forecasting capabilities of in-context learning in large language models. To this end, we experimented with temporal knowledge graph forecasting benchmarks. We presented a framework that converts relevant historical facts into prompts and generates ranked link predictions through token probabilities. Our experimental results demonstrated that without any finetuning and only through ICL, LLMs exhibit comparable performance to current supervised TKG methods that incorporate explicit modules to capture structural and temporal information. We also discovered that using numerical indices instead of entity/relation names does not significantly affect the performance, suggesting that prior semantic knowledge is not critical for overall performance. Additionally, our analysis indicated that ICL helps the model learn irregular patterns from historical facts, beyond simply making predictions based on the most common or the most recent facts in the given context. Together, our results and analyses demonstrated that ICL can be a valuable tool for predicting future links using historical patterns, and also prompted further inquiry into the potential of ICL for additional capabilities." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b30" ], "table_ref": [], "text": "There are certain limitations to our experiments. First, computing resource constraints restrict our experiments to small-scale open-source models. Second, our methodologies have constraints regarding models where the tokenizer vocabulary comprises solely of single-digit numbers as tokens, such as LLAMA (Touvron et al., 2023). The performance of such models exhibits a similar trend in terms of scaling law concerning model size and history length, but these models demonstrate inferior performance compared to other models of the same model size. Third, our methodologies have certain limitations with respect to link prediction settings.\nWhile real-world forecasting can be performed in the transductive setting, where the answer can be an unseen history, our approach is constrained to the inductive setting, where the answer must be one of the histories observed. There are further directions that can be pursued. The first is to explore transductive extrapolation link prediction using LLMs.\nThe second is to analyze the effects of fine-tuning on the results. Lastly, there is the opportunity to investigate the new capabilities of ICL." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4" ], "heading": "A.1 Prompt Example", "publication_ref": [], "table_ref": [], "text": "Given the test query at timestamp 571, prompt examples for Index and Lexical are shown in Figure 5. Here, we assume the entity dictionary contains \"Islamist Militia (Mozambique)\" as index 0, \"Meluco\" as 10, \"Namatil\" as 36, \"Muatide\" as 53, \"Limala\" as 54, and \"Nacate\" as 55, while relation dictionary contains \"Battles\" as index 1 and \"Violence against civilians\" as 4. Also, the history setting is unidirectional entity setting where the history length is set to 5. " }, { "figure_ref": [ "fig_5" ], "heading": "A.2 Prompt Example for Analysis", "publication_ref": [ "b16", "b20", "b16", "b20", "b29", "b38", "b22" ], "table_ref": [], "text": "To assess the ability of LLMs to comprehend the sequential information of historical events, we compare the performance of prompts with and without timestamps (See Section 5.1 Q3). Figure 6 shows the prompt examples for time-removed and shuffled version of prompts. A.3 System Instruction for Instruction-tuned models.\nFor the instruction-model, we use the manual curated system instructions to provide task descriptions and constraint the output format as follow:\nYou must be able to correctly predict the next {object_label} from a given text consisting of multiple quadruplets in the form of \"{time}:[{subject}, {relation}, {object_label}. {object}]\" and the query in the form of \"{time}:[{subject}, {relation},\" in the end.\nYou must generate only the single number for {object_label} without any explanation.\nA.4 Baseline Models RE-Net (Jin et al., 2020) leverages an autoregressive architecture that employs a two-step process for learning temporal dependency from a sequence of graphs and local structural dependency from the vicinity. The model represents the likelihood of a fact occurring as a probability distribution that is conditioned on the sequential history of past snapshots.\nRE-GCN (Li et al., 2021) also employs autoregressive architecture while it utilizes multi-layer relation-aware GCN on each graph snapshot to capture the structural dependencies among concurrent facts. Furthermore, the static properties of entities such as entity types, are also incorporated via a static graph constraint component to obtain better entity representations.\nTANGO (Han et al., 2021b) employs autoregressive architecture as well but the use of continuous-time embedding in encoding temporal and structural information is a distinguishing feature of the proposed method, as opposed to RE-Net (Jin et al., 2020) (Li et al., 2021) and RE-GCN which operate on a discrete level with regards to time.\nxERTE (Han et al., 2021a) employs an attention mechanism that can effectively capture the relevance of important aspects by selectively focusing on them. It employs a sequential reasoning approach over local subgraphs. This process begins with the query and iteratively selects relevant edges of entities within the subgraph, subsequently propagating attention along these edges. After multiple rounds of expansion, the final subgraph represents the interpretable reasoning path towards the predicted outcomes.\nTimeTraveler (Sun et al., 2021) employs reinforcement learning for forecasting. The approach involves the use of an agent that navigates through historical knowledge graph snapshots, commencing from the query subject node. Thereafter, it sequentially moves to a new node by leveraging temporal facts that are linked to the current node, with the ultimate objective of halting at the answer node.\nTo accommodate the issue of unseen-timestamp, the approach incorporates a relative time encoding function that captures time-related information when making decisions.\nCyGNet (Zhu et al., 2021) leverages the statistical relevance of historical facts, acknowledging the recurrence of events in the temporal knowledge graph datasets. It incorporates two inference modes, namely Copy and Generation. The Copy mode determines the likelihood of the query being a repetition of relevant past facts. On the other hand, the Generation mode estimates the probability of each potential candidate being the correct prediction, using a linear classifier. The final forecast is obtained by aggregating the outputs of both modes.\nTLogic (Liu et al., 2022) mines cyclic temporal logical rules by extracting temporal random walks from a graph. This process involves the extraction of temporal walks from the graph, followed by a lift to a more abstract, semantic level, resulting in the derivation of temporal rules that can generalize to new data. Subsequently, the application of these rules generates answer candidates, with the body groundings in the graph serving as explicit and easily comprehensible explanations for the results obtained.\nA.5 Full Experimental Results" }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was funded in part by the Defense Advanced Research Projects Agency (DARPA) and Army Research Office (ARO) under Contract No. W911NF-21-C-0002 and Contract No. HR00112290106, and with support from the Keston Exploratory Research Award and Amazon.\nThe views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of DARPA, ARO or the U.S. Government." }, { "figure_ref": [], "heading": "Prompt", "publication_ref": [], "table_ref": [], "text": "History YAGO WIKI ICEWS14 ICEWS18 ACLED\nH@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 Index Unidirectional Entity 0.777 0. H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 frequency 0.766 0.859 0. H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 H@1 H@3 H@10 frequency 0. " } ]
Temporal knowledge graph (TKG) forecasting benchmarks challenge models to predict future facts using knowledge of past facts. In this paper, we develop an approach to use in-context learning (ICL) with large language models (LLMs) for TKG forecasting. Our extensive evaluation compares diverse baselines, including both simple heuristics and state-of-the-art (SOTA) supervised models, against pre-trained LLMs across several popular benchmarks and experimental settings. We observe that naive LLMs perform on par with SOTA models, which employ carefully designed architectures and supervised training for the forecasting task, falling within the (-3.6%, +1.5%) Hits@1 margin relative to the median performance. To better understand the strengths of LLMs for forecasting, we explore different approaches for selecting historical facts, constructing prompts, controlling information propagation, and parsing outputs into a probability distribution. A surprising finding from our experiments is that LLM performance endures (±0.4% Hit@1) even when semantic information is removed by mapping entities/relations to arbitrary numbers, suggesting that prior semantic knowledge is unnecessary; rather, LLMs can leverage the symbolic patterns in the context to achieve such a strong performance. Our analysis also reveals that ICL enables LLMs to learn irregular patterns from the historical context, going beyond frequency and recency biases 1 .
Temporal Knowledge Graph Forecasting Without Knowledge Using In-Context Learning
[ { "figure_caption": "Figure 2 :2Figure 2: Performance (Hit@1) adheres to the scaling law based on the history length.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (Hit@1) adheres to the scaling law based on the model size.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Entity vs. Pair", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Performance (Hit@1) Analysis on Prompt Variation. The comparable performance exhibited by both the Index and Lexical models indicates that these models rely heavily on learning patterns and are less influenced by semantic priors. Moreover, the Unidirectional model typically outperforms the Bidirectional model, suggesting that the robust constraints on historical data enable the model to comprehend observed patterns better. Finally, the performance of the Entity and Pair models varies depending on the dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Prompt examples for Index and Lexical settings.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Prompt examples for time-removed and shuffled version.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Prompt Example.", "figure_data": "Category Prompt2000: [Superbowl, Champion, 0. St Louis]Lexical2001: [Superbowl, Champion, 1. Baltimore]L(•). . .2023: [Superbowl, Champion,2000: [0, 0, 0. 0]Index2001: [0, 0, 1. 1]I(•). . .2023: [0, 0,", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Data statistics. Each dataset consists of entities, relations, and historical facts, with the facts within the same time interval identified by the same timestamp. The facts are divided into three subsets based on time, where train < valid < test.", "figure_data": "In this setting, for each test query,the model is provided with ground truth facts frompast timestamps in the test period. Hence, aftermaking predictions for a test query in a specific", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Performance (Hits@K) comparison between supervised models and ICL for single-step (top) and multistep (bottom) prediction. The first group in each table consists of supervised models, whereas the second group consists of ICL models, i.e., GPT-NeoX with a history length of 100. The best model for each dataset in the first group is shown in bold, and the second best is underlined.", "figure_data": "717 0.7760.8170.594 0.6480.6780.278 0.4210.5750.195 0.3260.4750.421 0.4640.502RE-Net✓0.534 0.6130.6620.472 0.5070.5300.278 0.4080.5490.184 0.3140.4610.238 0.4450.563CyGNet✓0.613 0.7420.8340.525 0.6240.6750.266 0.4020.5450.166 0.2950.4440.408 0.5000.588TLogic✓0.631 0.7060.7150.613 0.6630.6820.265 0.3950.5310.155 0.2720.4120.009 0.0450.094GPT-NeoX (Entity)✗0.686 0.7930.8400.543 0.6220.6550.247 0.3630.4710.136 0.2240.3210.319 0.4170.500GPT-NeoX (Pair)✗0.688 0.7930.8390.570 0.6250.6520.236 0.3240.3950.155 0.2450.3310.289 0.4100.464Single-StepICEWS14ICEWS18Multi-StepICEWS14ICEWS18H@1 H@3 H@10 H@1 H@3 H@10H@1 H@3 H@10 H@1 H@3 H@10frequency0.243 0.3870.5320.141 0.2650.409frequency0.222 0.3490.4600.121 0.2070.307recency0.228 0.3870.5360.120 0.2420.403recency0.151 0.2680.4230.074 0.1490.266GPT-NeoX (Entity) 0.324 0.4600.5650.192 0.3130.414GPT-NeoX (Entity) 0.247 0.3630.4710.136 0.2240.321GPT-NeoX (Pair)0.297 0.4080.4820.196 0.3070.402GPT-NeoX (Pair)0.236 0.3240.3950.155 0.2450.331(a) Single-step(b) Multi-step", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance (Hits@K) with rule-based predictions. The best model for each dataset is shown in bold.", "figure_data": "", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Temporal Knowledge Graph. Temporal knowledge graph (TKG) reasoning models are commonly employed in two distinct settings, namely interpolation and extrapolation, based on the facts available from t 0 to t n . (1) Interpolation aims to predict missing facts within this time range from t 0 to t n , and recent works have utilized embeddingbased algorithms to learn low-dimensional representations for entities and relations to score candidate facts", "figure_data": "", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Dong-Ho Lee; Kian Ahrabian; Woojeong Jin; Fred Morstatter; Jay Pujara
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Superbowl, Champion", "year": "2000" }, { "authors": "Jon Scott; Armstrong ", "journal": "Springer", "ref_id": "b1", "title": "Principles of forecasting: a handbook for researchers and practitioners", "year": "2001" }, { "authors": "Sid Black; Stella Biderman; Eric Hallahan; Quentin Anthony; Leo Gao; Laurence Golding; Horace He; Connor Leahy; Kyle Mcdonell; Jason Phang", "journal": "", "ref_id": "b2", "title": "Gpt-neox-20b: An open-source autoregressive language model", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b3", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "C Y Stephanie; Adam Chan; Andrew Santoro; Jane X Kyle Lampinen; Aaditya K Wang; Pierre Singh; James Harvey Richemond; Felix Mcclelland; Hill", "journal": "", "ref_id": "b4", "title": "Data distributional properties drive emergent in-context learning in transformers", "year": "2022" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b5", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Alberto García-Durán; Sebastijan Dumančić; Mathias Niepert", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Learning sequence encoders for temporal knowledge graph completion", "year": "2018" }, { "authors": "Shivam Garg; Dimitris Tsipras; Percy S Liang; Gregory Valiant", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b7", "title": "What can transformers learn in-context? a case study of simple function classes", "year": "2022" }, { "authors": "Julia Gastinger; Timo Sztyler; Lokesh Sharma; Anett Schuelke", "journal": "", "ref_id": "b8", "title": "On the evaluation of methods for temporal knowledge graph forecasting", "year": "2022" }, { "authors": "Rishab Goel; Seyed Mehran Kazemi; Marcus Brubaker; Pascal Poupart", "journal": "", "ref_id": "b9", "title": "Diachronic embedding for temporal knowledge graph completion", "year": "2020" }, { "authors": "Michael Hahn; Navin Goyal", "journal": "", "ref_id": "b10", "title": "A theory of emergent in-context learning as implicit structure induction", "year": "2023" }, { "authors": "Zhen Han; Peng Chen; Yunpu Ma; Volker Tresp; ; ", "journal": "", "ref_id": "b11", "title": "Explainable subgraph reasoning for forecasting on temporal knowledge graphs", "year": "2021" }, { "authors": "Zhen Han; Zifeng Ding; Yunpu Ma; Yujia Gu; Volker Tresp", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Learning neural ordinary equations for forecasting future links on temporal knowledge graphs", "year": "2021" }, { "authors": "Dan Hendrycks; Nicholas Carlini; John Schulman; Jacob Steinhardt", "journal": "", "ref_id": "b13", "title": "Unsolved problems in ml safety", "year": "2021" }, { "authors": "J Rob; Yeasmin Hyndman; Khandakar", "journal": "Journal of statistical software", "ref_id": "b14", "title": "Automatic time series forecasting: the forecast package for r", "year": "2008" }, { "authors": "Woojeong Jin; Rahul Khanna; Suji Kim; Dong-Ho Lee; Fred Morstatter; Aram Galstyan; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "ForecastQA: A question answering challenge for event forecasting with temporal text data", "year": "2021" }, { "authors": "Woojeong Jin; Meng Qu; Xisen Jin; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Recurrent event network: Autoregressive structure inferenceover temporal knowledge graphs", "year": "2020" }, { "authors": "Timothée Lacroix; Guillaume Obozinski; Nicolas Usunier", "journal": "", "ref_id": "b17", "title": "Tensor decompositions for temporal knowledge base completion", "year": "2020" }, { "authors": "Julien Leblay; Melisachew Wudage Chekol", "journal": "", "ref_id": "b18", "title": "Deriving validity time in knowledge graph", "year": "2018" }, { "authors": "Yaguang Li; Rose Yu; Cyrus Shahabi; Yan Liu", "journal": "", "ref_id": "b19", "title": "Diffusion convolutional recurrent neural network: Data-driven traffic forecasting", "year": "2018" }, { "authors": "Zixuan Li; Xiaolong Jin; Wei Li; Saiping Guan; Jiafeng Guo; Huawei Shen; Yuanzhuo Wang; Xueqi Cheng", "journal": "", "ref_id": "b20", "title": "Temporal knowledge graph reasoning based on evolutional representation learning", "year": "2021" }, { "authors": "Stephanie Lin; Jacob Hilton; Owain Evans", "journal": "Transactions on Machine Learning Research", "ref_id": "b21", "title": "Teaching models to express their uncertainty in words", "year": "2022" }, { "authors": "Yushan Liu; Yunpu Ma; Marcel Hildebrandt; Mitchell Joblin; Volker Tresp", "journal": "", "ref_id": "b22", "title": "Tlogic: Temporal logical rules for explainable link forecasting on temporal knowledge graphs", "year": "2022" }, { "authors": "Farzaneh Mahdisoltani; Joanna Biega; Fabian Suchanek", "journal": "", "ref_id": "b23", "title": "Yago3: A knowledge base from multilingual wikipedias", "year": "2014" }, { "authors": "Sewon Min; Xinxi Lyu; Ari Holtzman; Mikel Artetxe; Mike Lewis; Hannaneh Hajishirzi; Luke Zettlemoyer", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Rethinking the role of demonstrations: What makes in-context learning work?", "year": "2022" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Köpf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b25", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019-12-08" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b26", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Yasaman Razeghi; I V Robert L Logan; Matt Gardner; Sameer Singh", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Impact of pretraining term frequencies on few-shot numerical reasoning", "year": "2022" }, { "authors": "Rajat Sen; Hsiang-Fu Yu; Inderjit S Dhillon", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Think globally, act locally: A deep neural network approach to high-dimensional time series forecasting", "year": "2019" }, { "authors": "Haohai Sun; Jialun Zhong; Yunpu Ma; Zhen Han; Kun He", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "TimeTraveler: Reinforcement learning for temporal knowledge graph forecasting", "year": "2021" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b30", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Ben Wang", "journal": "", "ref_id": "b31", "title": "Mesh-Transformer-JAX: Model-Parallel Implementation of Transformer Language Model with JAX", "year": "2021" }, { "authors": "Richard Webby; O' Marcus; Connor", "journal": "International Journal of forecasting", "ref_id": "b32", "title": "Judgemental and statistical time series forecasting: a review of the literature", "year": "1996" }, { "authors": "Jerry Wei; Jason Wei; Yi Tay; Dustin Tran; Albert Webson; Yifeng Lu; Xinyun Chen; Hanxiao Liu; Da Huang; Denny Zhou", "journal": "", "ref_id": "b33", "title": "Larger language models do in-context learning differently", "year": "2023" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Remi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Sang Michael Xie; Aditi Raghunathan; Percy Liang; Tengyu Ma", "journal": "", "ref_id": "b35", "title": "An explanation of in-context learning as implicit bayesian inference", "year": "2022" }, { "authors": "Zihao Zhao; Eric Wallace; Shi Feng; Dan Klein; Sameer Singh", "journal": "", "ref_id": "b36", "title": "Calibrate before use: Improving few-shot performance of language models", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b37", "title": "", "year": "" }, { "authors": "Cunchao Zhu; Muhao Chen; Changjun Fan; Guangquan Cheng; Yan Zhang", "journal": "", "ref_id": "b38", "title": "Learning from history: Modeling temporal knowledge graphs with sequential copy-generation networks", "year": "2021" }, { "authors": "Andy Zou; Tristan Xiao; Ryan Jia; Joe Kwon; Mantas Mazeika; Richard Li; Dawn Song; Jacob Steinhardt; Owain Evans; Dan Hendrycks", "journal": "", "ref_id": "b39", "title": "Forecasting future world events with neural networks", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 306.14, 611.74, 218.27, 24.18 ], "formula_id": "formula_0", "formula_text": "G 1:t-1 = {G 1 , G 2 , . . . , G t-1 }." }, { "formula_coordinates": [ 3, 70.87, 74.37, 85.61, 10.63 ], "formula_id": "formula_1", "formula_text": "G 1:t-1 = {G 1 , G 2 , ." }, { "formula_coordinates": [ 3, 131.43, 605.52, 154.49, 13.65 ], "formula_id": "formula_2", "formula_text": "(f s , f p , f o , f t ) → (f o , f -1 p , f s , f t ))." }, { "formula_coordinates": [ 3, 306.14, 337.61, 220.08, 37.73 ], "formula_id": "formula_3", "formula_text": "(f s , f p , f o , f t ) ∈ E using the \"f t :[I(f s ), I(f p ), n fo . I(f o )]\" tem- plate where f s , f o ∈ V, f p ∈ R, f t ∈ T" }, { "formula_coordinates": [ 3, 306.14, 549.93, 218.27, 24.32 ], "formula_id": "formula_4", "formula_text": "(f s , f p , f o , f t ) ∈ E is represented as \"f t :[L(f s ), L(f p ), n fo . L(f o )]\"" }, { "formula_coordinates": [ 4, 328.83, 86.73, 152.54, 68.87 ], "formula_id": "formula_5", "formula_text": "GPT2 gpt2 124M ✗ gpt2-medium 355M ✗ gpt2-large 774M ✗ gpt2-xl 1.5B ✗ GPT-J gpt-j-6b 6B ✗ GPT-NeoX gpt-neox-20b 20B ✗ InstructGPT gpt-3.5-turbo - ✓" }, { "formula_coordinates": [ 5, 91.34, 181.89, 73.49, 6.33 ], "formula_id": "formula_6", "formula_text": "RE-GCN ✓ 0." } ]
10.18653/v1/D19-1224
[ { "figure_ref": [ "fig_4" ], "heading": "Introduction", "publication_ref": [ "b3", "b12", "b21", "b4", "b5", "b24", "b8", "b25", "b13" ], "table_ref": [ "tab_0", "tab_0" ], "text": "When training large deep neural networks on the same data and hyperparameters can lead to many distinct solutions with similar loss, we say the model is underspecified (D'Amour et al., 2022). One tangible manifestation of underspecification is that a model prediction on a single data point can change across different training runs, without any change in the training data or hyperparameter settings, due to stochasticity in the training procedure. This extreme sensitivity of model output, which has been termed as model variance/instability or model jitter/churn (Hidey et al., 2022;Milani Fard et al., 2016), is highly undesirable as it prohibits comparing models across different experiments (Dodge et al., 2019). We refer to this problem as local instability 1 , a term that highlights our focus on the non-uniformity of instability across data points. Local instability can lead to highly undesirable consequences for deployed industrial systems, as it can cause inconsistent model behavior across time, eroding trust on AI systems (Dodge et al., 2020;D'Amour et al., 2020a). The problem is further exacerbated by the fact that industry models are typically more complex and trained on diverse datasets with potentially higher proportion of noise. [0.002-0.97], 0.17 (high) lists:26, IOT:6, general:6, play:5, news:3, social:1, calendar:1 search for gluten free menus (cooking) [0.002-0.693], 0.06 (low) lists:28, takeaway:18, social:1, music:1, cooking:1, play:1 p is the prediction score on gold labels and σ m is the standard deviation over multiple model outputs p1 , . . . , p 50 . For example, start house cleanup with gold label IOT is predicted to label lists 26 out of the 50 model runs. Its prediction score on IOT ranges between 0.002 and 0.97. green: low variability, predictions match gold label, red: high predicted label switching Table 1 shows examples of local instability for a domain classification problem, where we used a pre-trained language model DistilBERT (Sanh et al., 2019) to train 50 independent classifiers 1 We use local instability to mean local model instability arXiv:2305.10625v2 [cs.LG] 19 May 2023 (with random initial conditions) on Massive dataset (FitzGerald et al., 2022). It shows that a validation set utterance start house cleanup with gold label IOT gets assigned seven different predicted labels over the 50 runs, with the predicted confidence on gold label p ranging between 0.002 and 0.97, with high σ m (the standard deviation of { pi } 50 i=1 ) of 0.17. In comparison, search for gluten free menus gets 6 different predicted labels over 50 runs, with a relatively low σ m of 0.06. The differences in stability across examples demonstrates that the phenomenon is localized to certain data points. See Figures 4 and5 in Appendix. Examples in table 1 also highlight that variability in confidence is not perfectly aligned with stability of predictions.\nMeasuring Local Model Instability While detecting and quantifying local instability across multiple runs is trivial for toy problems, it becomes infeasible with much larger industrial datasets. (Swayamdipta et al., 2020) suggested to use singlerun training dynamics to estimate the variance in prediction scores over multiple epochs. However, as shown in Table 1 low prediction variance does not always lead to less label switching, which is the defining feature of local instability. Instead, here we introduce label switching entropy as a new metric for characterizing local instability. Furthermore, we demonstrate that label switching entropy calculated over training epochs of a single run is a good proxy for label switching over multiple runs, so that data points with high prediction instability over time also exhibit high instability across training runs.\nMitigating Local Model Instability One straightforward strategy of mitigating local instability is to train an ensemble of n models and average their weights or their predictions. Unfortunately, ensembling neural networks such as large language models is often computationally infeasible in practice, as it requires multiplying both the training cost and the test time inference cost by a factor of n. Therefore, we propose and compare more economical options for mitigating local instability.\nHere we propose a more efficient smoothingbased approach where we train just two models. The first (teacher) model is trained using the onehot encoded gold labels as the target. Once the model has converged and is no longer in the transient learning regime (after N training or optimiza-tion steps), we compute the temporal average predicted probability vector over K classes after each optimization step, which is then adjusted by temperature T to obtain the smoothed predicted probability vector. A student model is then trained using these \"soft\" labels instead of the one-hot encoded gold labels. We call this Temporal Guided Temperature Scaled Smoothing (TGTSS). TGTSS allows local mitigation of local instability as each datapoint is trained to its unique label in the student model. In contrast to existing methods such stochastic weight averaging (Izmailov et al., 2018) or regularizing options such as adding L2-penalty, TGTSS significantly outperforms existing methods and reaches within 90% of the gold standard of ensemble averaging.\nWe summarize our contributions as follows:\n• We propose a new measure of local instability that is computationally efficient and descriptive of actual prediction changes.\n• We introduce a data-centric strategy to mitigate local instability by leveraging temporally guided label smoothing.\n• We conduct extensive experiments with two public datasets and demonstrate the effectiveness of the proposed mitigation strategy compared to existing baselines." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b23", "b21", "b2", "b21", "b16", "b1", "b12", "b22", "b0", "b25", "b13", "b19", "b9", "b28", "b14", "b10" ], "table_ref": [], "text": "Sophisticated, real-world applications of Deep Neural Networks (DNNs) introduce challenges that require going beyond a myopic focus on accuracy. Uncertainty estimation is increasingly important for deciding when a DNN's prediction should be trusted, by designing calibrated confidence measures that may even account for differences between training and test data (Nado et al., 2021). Progress on uncertainty estimation is largely orthogonal to another critical goal for many engineered systems: consistency and reliability. Will a system that works for a particular task today continue to work in the same way tomorrow? One reason for inconsistent performance in realworld systems is that even if a system is re-trained with the same data, predictions may significantly change, a phenomenon that has been called model churn (Milani Fard et al., 2016). The reason for this variability is that neural networks are underspecified (D'Amour et al., 2020b), in the sense that there are many different neural networks that have nearly equivalent average performance for the target task. While randomness could be trivially removed by fixing seeds, in practice tiny changes to data will still significantly alter stochasticity and results. We will explore the case of altering training data in future studies. Studying how stochasticity affects model churn addresses a key obstacle in re-training engineered systems while maintaining consistency with previous results.\nThe most common thread for reducing model churn focuses on adding constraints to a system so that predictions for re-trained system match some reference model. This can be accomplished by adding hard constraints (Cotter et al., 2019) or distillation (Milani Fard et al., 2016;Jiang et al., 2021;Bhojanapalli et al., 2021).\nWe adopt a subtly different goal which is to train at the outset in a way that reduces variability in predictions due to stochasticity in training. (Hidey et al., 2022) suggest a co-distillation procedure to achieve this. Label smoothing, which reduces over-confidence (Müller et al., 2019), has also been suggested to reduce variance, with a local smoothing approach to reduce model churn appearing in (Bahri and Jiang, 2021).\nA distinctive feature of our approach is a focus on how properties of the data lead to instability. Inspired by dataset cartography (Swayamdipta et al., 2020) which explored variance in predictions over time during training of a single model, we investigate how different data points vary in predictions across training runs. Non-trivial patterns emerge, and we use sample-specific instability to motivate a new approach to reducing model churn.\nOur work draws connections between model stability and recent tractable approximations for Bayesian learning (Izmailov et al., 2018;Maddox et al., 2019). Recent Bayesian learning work focuses on the benefits of Bayesian model ensembling for confidence calibration, but an optimal Bayesian ensemble would also be stable. Bayesian approximations exploit the fact that SGD training dynamics approximate MCMC sampling, and therefore samples of models over a single training run can approximate samples of models across training runs, although not perfectly (Fort et al., 2019;Wenzel et al., 2020;Izmailov et al., 2021). We study connections between prediction variability within a training run and across training runs, and use this connection to devise practical metrics and mitigation strategies. Similar to BANNs (Furlanello et al., 2018), our teacher and corresponding student models use the same model architecture with same no. of parameters rather than using a high-capacity teacher model, however, unlike BANNS, our work is geared towards addressing model instability. Architecturally, our methodology (TGTSS) uses a temperature scaled temporally smoothed vector that is obtained from the last N checkpoints from the teacher model instead of the finalized teacher model and not use the annotated labels for the utterances." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Model instability measurement", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The examples in Table 1 show that re-training a model with different random seeds can lead to wildly different predictions. The variance of predictions across models, σ 2 m , is intuitive, but is expensive to compute and does not necessarily align with user experience since changes in confidence may not change predictions. A changed prediction, on the other hand, may break functionality that users had come to rely on. Hence we want to include a metric which measures how often predictions change.\nTherefore, we propose to study the label switching entropy. Given a setup with training data {x i , y i } ∈ X where X are utterances, y ∈ {1, ..., K} are the corresponding gold labels, the multi-run Label Entropy (LE m ) over N independent runs for an utterance x i can be computed as,\nLE (i) m = K k=1 - n (i) k N log( n (i) k N ) (1)\nwhere, n k is the number of times utterance i was predicted to be in class k across N models trained with different random seeds. For example, if an utterance gets labeled to three classes A, B and C for 90%, 5% and 5% of the time respectively, then its multi-run label entropy (LE (i) m ) will be -(0.9 * log(0.9)+0.05 * log 0.05+0.05 log 0.05) = 0.39. Similarly, an utterance that is consistently predicted to belong to one class over N runs will have a LE (i) m of 0 (even if it is consistently put in the wrong class). We can compute the overall LE m by averaging LE (i) m for all the utterances. Empirically, we also observe a relatively strong linear relationship between LE m and σ m (Figure 1).\nSince computing LE m is computationally expensive due to training N independent models, we pro- pose using single-run Label Entropy (LE s ) that can be computed over a single model run. Mathematically, the formula for label entropy stays consistent for both multi-run and single-run, however, LE s is computed across different model checkpoints.\nIn our analyses, we computed LE s by accumulating the predicted class after each optimization step whereas LE m was computed by accumulating the final predicted class across N models on the validation set. Empirically, we found that there exists a strong linear relationship between LE s and LE m (Figure 2). This demonstrates that utterances that suffer from local instability across multiple independent runs exhibit similar instability across multiple optimization steps for a single model. This finding supports our hypothesis that LE s is a suitable proxy for LE m in real world production settings for NLU systems." }, { "figure_ref": [], "heading": "Model instability mitigation", "publication_ref": [], "table_ref": [], "text": "In our study, we have explored 3 baseline mitigation strategies to address model instability: ensembling, stochastic weight averaging (SWA) and uniform label smoothing. These methodologies have been used in numerous other works to improve generalization as well as predictive accuracy across a diverse range of applications. Performance of the ensembling strategy serves as our upper bound in reducing model instability. We propose a novel model instability mitigation strategy, temporal guided temperature scaled label smoothing, that is able to recover 90% of the reduction in model instability as ensembling at a fraction of model training time and computational cost. We describe all the mitigation strategies below." }, { "figure_ref": [], "heading": "Ensemble averaging and regularizing", "publication_ref": [], "table_ref": [], "text": "In this setting, we trained N independent models, initialized with different random seeds, using the standard cross-entropy loss, computed between the ground truth labels and the predicted probability vector. For every utterance in the test set, we recorded the mean predicted probability of the gold label, the predicted label and our proposed local instability metric, label entropy, across N models. We also trained another baseline by leveraging L2 regularization. No other mitigation strategies were used in the process since our aim was to emulate the current model training scenario in natural language understanding(NLU) production settings." }, { "figure_ref": [], "heading": "Stochastic Weight Averaging", "publication_ref": [ "b13" ], "table_ref": [], "text": "Stochastic weight averaging(SWA) (Izmailov et al., 2018) is a simple yet effective model training methodology that improves generalization performance in deep learning networks. SWA performs an uniform average of the weights traversed by the stochastic gradient descent based optimization algorithms with a modified learning rate. In our implementation, we equally averaged the weights at the end of the last two training epochs. We also explored equal averaging of weights from two randomly selected epochs out of the final 3 epochs but that strategy did not yield better results. We left the work of using a modified learning rate to a future study with a significantly larger training dataset." }, { "figure_ref": [], "heading": "Label smoothing", "publication_ref": [ "b26" ], "table_ref": [], "text": "Label smoothing (Szegedy et al., 2016) is a popular technique to improve performance, robustness and calibration in deep learning models. Instead of using \"hard\" one-hot labels when computing the cross-entropy loss with the model predictions, label smoothing introduces \"soft\" labels that are essentially a weighted mixture of one-hot labels with the uniform distribution. For utterances {x i , y i } where y ∈ {1, ..., K} for K classes, the new \"soft\" label is given by y LS = (1 -α) * y + α/K where α is the label smoothing parameter. The \"soft\" labels are then used in the softmax cross-entropy loss." }, { "figure_ref": [], "heading": "Ensemble baseline", "publication_ref": [], "table_ref": [], "text": "To obtain consistent predictions with low local instability, ensembling is often utilized as the default mitigation strategy. Given a problem setup with training data {x i , y i } ∈ X where X are utterances, y ∈ {1, ..., K} are the corresponding gold labels, then intuitively, ensembling over N independent models,where N is sufficiently large, will converge to the average predicted probability by the law of large numbers. Hence, using a sufficiently large ensemble of independently trained models would give stable predictions in general.\nIn our study, we used ensembling to aggregate (uniform average) predictions for each utterance across N independently trained models. Each model was trained using the softmax cross-entropy loss between the predicted logit z i over K classes and the one-hot encoded vector representing the gold label. For an utterance x i , the uniform average predicted probability vector pi across N models over all class K (softmax probability vector of size k = (1, K)) is adjusted by a temperature T , to obtain the smoothed predicted probability vector q i :\nq i = pi T K k=1 pk T(2)\nThe temperature T can be used to control the entropy of the distribution. The smoothed probability vector q is now used as the \"soft\" labels to train a model instead of the \"hard\" one hot encoded gold labels and the resultant model is robust to local instability. One challenge for ensembling is that it requires training, storing and running inference on a large number of models which is often infeasible for large scale NLU systems." }, { "figure_ref": [], "heading": "Temporal guided temperature scaled smoothing (TGTSS)", "publication_ref": [], "table_ref": [], "text": "Since ensembling is infeasible for large models in practice, we propose temporal guided label smoothing that does not require training large ensembles to compute the soft labels.\nIn this setup, we train a pair of models as opposed to training a large ensemble of models. The first (teacher) model is trained using the one-hot encoded gold labels as the target. Once the model has converged and is no longer in the transient training state (after N training or optimization steps), we compute the uniform average predicted probability vector ( pi ) after each optimization step of the model, which is then adjusted by temperature T to obtain the smoothed predicted probability vector q i using (2). A suitable N can be chosen by looking at the cross-entropy loss curve for the validation dataset. The second (student) model is now trained using q i as the \"soft\" label instead of the one-hot encoded gold labels.\nThe significant advantage of TGTSS over ensembling is that it does not require training, storing, or inferring over large ensembles. A key feature of TGTSS is that it uniformly averages predictions over numerous training steps instead of averaging predictions over numerous independent models. This saves the cost of training multiple models. Moreover, we never need to store multiple models for TGTSS since we can store a running average of the predictions over time. Finally, at inference time we only need to call a single model (the trained student model), as opposed to N models for the ensemble." }, { "figure_ref": [], "heading": "Experimental setup and results for mitigation 5.1 Base model architecture", "publication_ref": [ "b24", "b27" ], "table_ref": [], "text": "For all our experiments, we used DistilBERT (Sanh et al., 2019) as the pre-trained language model. We used the implementation of DistilBERT-baseuncased from the Huggingface library by leveraging AutoModelForSequenceClassification. The pretrained language model is then fine-tuned on the benchmark datasets by using the training set. Distil-BERT is a widely used pre-trained language model that is currently used in production in many large scale NLU systems. One key advantage of using DistilBERT is that it is able to recover more than 90% performance of the larger BERT-base-uncased model while using 40% less parameters on the GLUE language understanding benchmark (Wang et al., 2018). Using other BERT models as the pretrained language model was outside the scope of this study." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b8" ], "table_ref": [ "tab_1" ], "text": "To study local instability and compare different mitigation strategies, we used two open source benchmark datasets (Table 2): Massive and Clinc150.\n• Massive: Massive (FitzGerald et al., 2022) dataset is an open source multilingual NLU dataset from Amazon Alexa NLU system consisting of 1 million labeled utterances spanning 51 languages. For our experiments, we only used the en-US domain utterances for domain classification task across 18 domains (alarm, audio, general, music, recommendation, etc.). " }, { "figure_ref": [], "heading": "Training and Evaluation Protocol", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "We compared the performance of our proposed mitigation strategy, temporal guided temperature scaled smoothing (TGTSS), with other baseline mitigation strategies such as ensembling averaging, L2 regularization, uniform label smoothing, SWA and ensembling. We trained 50 independent models with the same hyper-parameters for each mitigation strategy using different random initialization seeds. We reported the mean ± std. dev domain classification accuracy for the Massive dataset and mean ± std. dev intent classification accuracy for the Clinc150 dataset. For both the datasets, we also reported the percentage reduction in LE m when compared to the control baseline over 50 independent model runs for all the utterances as well as for high label entropy utterances whose label entropy was over 0.56 in the control baseline. For each method, we computed the sum of LE m over all the N utterances in the test set as N i=1 LE m i . The ∆LE m is then computed as the percentage reduction among these values for each method and the control baseline. We do similar computations for ∆LE s in Table 4.\nThe LE m value 0.56 for an utterance indicates that if the utterance was assigned to 2 different labels over 50 independent model runs, then its membership is split 75%-25% between the two labels. A lower value of label entropy indicates better model robustness and consequently, lower local instability. An utterance will have LE m = 0 if it is consistently predicted to be the same label across 50 independent model runs. All the results for both the benchmark datasets have been reported on an unseen holdout set. A model having high overall accuracy and low label entropy is usually preferred." }, { "figure_ref": [], "heading": "Hyper-parameters", "publication_ref": [ "b17" ], "table_ref": [], "text": "In our empirical analyses, all the models across different mitigation strategies were trained using the ADAM (Kingma and Ba, 2014) optimizer with a learning rate of 0.0001. For both the benchmark datasets, all the models were trained for 5 epochs with a batch size of 256. For the control baseline with L2 regularization, we selected a weight decay value of 0.001. For the ensemble baseline, we selected N as 200 i.e. the pre-temperature scaled \"soft\" labels were computed after uniformly averaging outputs from 200 independent models for each utterance in the training set. In the uniform label smoothing mitigation strategy, we used α as 0.5 for the Clinc150 dataset and α as 0.1 for the Massive dataset. For SWA, we equally averaged the model weights after the last 2 epochs. For experiments using temporal guided temperature scaled smoothing on the Clinc150 dataset, we used N as 200 where as for the Massive dataset, we set N as 180. This indicates that model outputs after first 200 training or optimization steps were recorded for the Clinc150 dataset and uniformly averaged for each utterance before temperature scaling. Similarly, for the Massive dataset, model outputs were recorded after 180 training steps. For both the ensemble guided and temporal guided temperature scaled smoothing mitigation strategies, we set the temperature T at 0.5." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We compared the proposed mitigation strategy with other baselines described in Section 4.1. We highlight the effectiveness of our proposed local instability metric, label entropy, in capturing local instability over 50 independent model runs as well as a single model run." }, { "figure_ref": [], "heading": "Ensemble is the best mitigation strategy", "publication_ref": [], "table_ref": [], "text": "In our empirical analyses, we found that ensemble baseline is often the best performing mitigation strategy in terms of both model accuracy and LE m for both the benchmark datasets(Table 3)." }, { "figure_ref": [], "heading": "TGTSS is comparable to ensembing at a fraction of computation cost", "publication_ref": [], "table_ref": [], "text": "We found that TGTSS is able to recover about 91% of the performance of ensembling in the multirun experiments. TGTSS trains only one teacherstudent pair and drastically reduces the computational cost of ensembling. Hence, it is much more feasible to deploy TGTSS in production NLU systems. We also found that TGTSS is significantly better than model-centric local instability mitigation strategies such as SWA and L2 regularization.\nHowever, as mentioned in Section 4.5, TGTSS computes \"soft\" labels across multiple optimization steps which leads to multiple inference cycles.\nIn our experiments, we ran inference after each optimization step once the model is no longer in the transient training state. However, it may be possible to further reduce the number of inference cycles by running inference after every X optimization steps and this is left for future studies." }, { "figure_ref": [ "fig_7" ], "heading": "Efficacy of single run label entropy (LE s ) as a local instability metric", "publication_ref": [], "table_ref": [ "tab_3", "tab_5" ], "text": "In Table 3, we demonstrated how TGTSS is able to reduce local instability in terms of our proposed metric LE m over multiple independent runs of the model and recover 91% of the performance of ensembling. We propose LE s as a more practical metric for local instability. We show that TGTSS is still able to recover more than 90% of the performance of ensembling for the Clinc150 and the Massive datasets (Table 4). For high LE m utterances in the control baseline, TGTSS was able to considerably reduce LE s (Appendix Table 6).\nIn figure 3 we observe that TGTSS significantly reduces variation in prediction scores compared to the control baseline. In the top panels we show utterances that are easy to learn and the classifier converges to the gold label within 2 epochs. In bottom panels, we show examples that exhibit high variation in prediction scores through the training process, and consequently, high LE s . After mitigation by TGTSS, the bottom right panel shows the significant reduction in prediction score variation and LE s . Figure 8 in Appendix shows more examples of reduction in LE s over the course of training." }, { "figure_ref": [], "heading": "Global label smoothing is not as effective", "publication_ref": [], "table_ref": [], "text": "In our empirical analyses, we found that uniform label smoothing reduces local instability by 7-9% compared to the control baseline but falls short of ensembling and TGTSS. Label smoothing involves computing a weighted mixture of hard targets with the uniform distribution where as both ensembling and TGTSS uses the model's average predictions over multiple runs and multiple optimization steps, respectively. Tuning the smoothing factor (α) did not improve model stability in terms of label entropy." }, { "figure_ref": [], "heading": "Importance of temperature scaling for TGTSS", "publication_ref": [], "table_ref": [], "text": "We conducted ablation studies to understand how temperature scaling affects the performance of TGTSS. Temperature scaling uses a parameter T < 1 for all the classes to scale the uniformly averaged predictions. We found that the proposed methodology reduces label entropy by 17.5% over the control baseline without temperature scaling for the Massive dataset on the validation set (31.5% reduction with temperature scaling). This also indicates that temporal uniform averaging is independently able to significantly reduce label entropy." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this work, we study the problem of model instability/churn in deep neural networks in the context of large scale NLU systems. Assigning different labels to the same training data over multiple training runs can be detrimental to many applications based on DNNs. We notice that the instability of model predictions are non-uniform over the data, hence we call it local instability. We propose a new metric, label switching entropy, that is able to quantify model instability over multi-runs as well as a single training run. We also introduce Temporal Guided Temperature Scaled Smoothing that reduces model churn by a considerable margin. We show in experiments that TGTSS is able to recover up to 91% of the performance of ensembling at a fraction of computational cost for training and storing, thereby providing a viable alternative to ensembling in large scale production systems. Future directions of research include expanding our analysis to multi-modal data and further dissecting the root causes behind local model instability." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Even though our proposed methodology, TGTSS, was able to significantly reduce model instability, there is still a gap in performance with the gold standard ensembling techniques. More work needs to be done to bridge this gap. In our empirical analysis, we used two open source datasets, Massive and Clinc150. Both these datasets are small and may not represent the complexity in real world production datasets which may contain substantially large noise. In our proposed methodology, we train a pair of models successively, a teacher and a student, which is significantly better than ensembling in terms of computational cost. However, this setup may still be challenging in many sophisticated real world production NLU systems. More work needs to be done to reduce the computational complexity of training and inference for these systems." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_4", "fig_6", "fig_5" ], "heading": "A.1 Variance confidence plots", "publication_ref": [], "table_ref": [], "text": "We have plotted the mean confidence and the variance of utterances in the validation dataset for both the Massive (Figure 4) and Clinc150 (Figure 5) datasets. From our analysis, we see that there are utterances that exhibit high variance and medium confidence (around 0.5) which often leads to predicted label flips or model churn over multiple training runs of the model. We also see that there are utterances that possess low confidence corresponding to the gold label and has very low variance. These utterances are probably annotation errors. The bulk of the utterances have high confidence on average corresponding to the gold label and low confidence which signifies that the model predictions are mostly consistent on these utterances. As shown earlier in the massive dataset, there is a strong relationship between LEm and µm. We observe a similar trend in the Clinc150 dataset as well (Figure 7). We also observe a similar relationship between single run and multiple run label entropy (LE) for Clinc150 dataset (Figure 6). This finding supports our analysis that label entropy is a suitable proxy for model churn." }, { "figure_ref": [], "heading": "A.3 LE m & LE s reduction for high entropy samples", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "We computed the percentage reduction in LEm and LEs post mitigation for utterances that have high LEm in the control baseline. In our empirical studies, we showed that TGTSS was able to considerably reduce LEm and LEs across multirun and single-run experiments when compared to the gold standard ensembling (Appendix Tables 5,6)." }, { "figure_ref": [ "fig_7" ], "heading": "A.4 Label entropy over optimization steps", "publication_ref": [], "table_ref": [], "text": "We have used LEs as a suitable proxy for LEm. In Figure 8, we provide empirical evidence that our proposed methodology, TGTSS, was able to reduce label entropy as the model is " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "The authors would like to thank the anonymous reviewers and area chairs for their suggestions and comments." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "The authors foresee no ethical concerns with the research presented in this work." } ]
Deep Neural Networks (DNNs) are becoming integral components of real world services relied upon by millions of users. Unfortunately, architects of these systems can find it difficult to ensure reliable performance as irrelevant details like random initialization can unexpectedly change the outputs of a trained system with potentially disastrous consequences. We formulate the model stability problem by studying how the predictions of a model change, even when it is retrained on the same data, as a consequence of stochasticity in the training process. For Natural Language Understanding (NLU) tasks, we find instability in predictions for a significant fraction of queries. We formulate principled metrics, like per-sample "label entropy" across training runs or within a single training run, to quantify this phenomenon. Intriguingly, we find that unstable predictions do not appear at random, but rather appear to be clustered in dataspecific ways. We study data-agnostic regularization methods to improve stability and propose new data-centric methods that exploit our local stability estimates. We find that our localized data-specific mitigation strategy dramatically outperforms data-agnostic methods, and comes within 90% of the gold standard, achieved by ensembling, at a fraction of the computational cost.
Measuring and Mitigating Local Instability in Deep Neural Networks
[ { "figure_caption": "Figure 1 :1Figure 1: LE m vs σ m for Massive dataset shows a strong linear relationship. Each data point is an utterance with LE (i) m vs σ (i) m values.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: LE s vs LE m for Massive dataset shows a strong linear relationship. Each data point is an utterance with LE (i) s vs LE (i) m values. Zero entropy corresponds to utterances with confidence scores close to 1 for a class with very low variability.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Table 3 :Figure 3 :33Figure3: Training trajectories between pre-mitigation and post-mitigation stages show that TGTSS was able to significantly reduce the variability of raw confidence scores on the gold labels as well as reduce model churn in Massive dataset.[Top] shows some utterances where the model predictions are stable (no label switching),[Bottom] shows some utterance where TGTSS significantly reduced model churn as measured using LE s .", "figure_data": "", "figure_id": "fig_3", "figure_label": "33", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Plot of multi-run confidence(µ m ) and standard deviations(σ m ) of prediction scores for Massive data (validation dataset), from the domain classifier model", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: LE s vs LE m for Clinc150 dataset (validation set) shows a strong linear relationship. Each data point is an utterance with LE (i) s vs LE (i) m values. Zero entropy corresponds to utterances with confidence scores close to 1 for a class with very low variability.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: LE m vs σ m for Clinc150 dataset (validation set) shows a strong linear relationship. Each data point is an utterance with LE (i) m vs σ (i) m values.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: Training trajectories between pre-mitigation and post-mitigation stages show that TGTSS was able to significantly reduce label entropy as the model is trained.[Top] shows some utterances where the model predictions are stable as label entropy is always 0,[Bottom] shows some utterance where TGTSS significantly reduced model churn as measured using LE s .", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Utterances from Massive data show different predictions over 50 model runs with different seeds.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Benchmark dataset statistics", "figure_data": "• Clinc150 DialoGLUE: Clinc150 (Larsonet al., 2019) is an open source dataset fromDialoGLUE (Mehri et al., 2020), a conver-sational AI benchmark collection. We uti-lized Clinc150 for intent classification taskacross 150 intents (translate, transfer, time-zone, taxes, etc).AttributeMASSIVECLINC150SourceAmazonDialoGLUEAlexa AIDomains18-Intents60150Train11,51415,000Holdout(Unseen)29743,000Balanced?No.Yes. 100per intentClassification taskDomainIntent", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "↑ % of E b Accuracy(%) ∆LEm(%) ↑ % of E b", "figure_data": "MassiveClinc150Methods Accuracy(%) ∆LEm(%) Control baseline 90.6 ± 0.6 --95.1 ± 0.8--Ensemble baseline (E b )91.3 ± 0.534.5-95.4 ± 0.631.1-L2 Regularization90.3 ± 0.5-2.3-794.9 ± 0.7-0.6-2SWA91.0 ± 0.517.65195.2 ± 0.77.323Label Smoothing90.8 ± 0.55.71795.2 ± 0.86.120TGTSS (Ours)91.3 ± 0.631.49195.3 ± 0.826.786", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Empirical analyses highlights Temporal guided temperature scaled smoothing (TGTSS) reduces LE s with respect to the single run control baseline model across different optimization steps when a single model is trained. ∆LE", "figure_data": "", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Empirical analyses highlights TGTSS reduces LE m for high LE m samples of the control baseline by a considerable margin in multi-run experiments. The column ∆LE m (%) ↑ is computed as percentage reduction between the sum of per-utterance LE m for each method and that of the control baseline. A higher value indicates greater reduction in LE m over control baseline.", "figure_data": "MethodsMassive Clinc150Label Smoothing14.920.7Ensemble baseline36.440.7TGTSS (Ours)31.533.6", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Empirical analyses highlights TGTSS reduces LE s for high LE m samples of the control baseline by a considerable margin in single-run experiments. The column ∆LE s (%) ↑ is computed as percentage reduction between the sum of per-utterance LE s for each method and that of the control baseline.", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" } ]
Arghya Datta; Subhrangshu Nandi; Jingcheng Xu; Greg Ver Steeg; He Xie; Anoop Kumar; Aram Galstyan
[ { "authors": "Dara Bahri; Heinrich Jiang", "journal": "", "ref_id": "b0", "title": "Locally adaptive label smoothing for predictive churn", "year": "2021" }, { "authors": "Srinadh Bhojanapalli; Kimberly Wilber; Andreas Veit; Ankit Singh Rawat; Seungyeon Kim; Aditya Menon; Sanjiv Kumar", "journal": "", "ref_id": "b1", "title": "On the reproducibility of neural network predictions", "year": "2021" }, { "authors": "Andrew Cotter; Heinrich Jiang; Maya R Gupta; Serena Wang; Taman Narayan; Seungil You; Karthik Sridharan", "journal": "J. Mach. Learn. Res", "ref_id": "b2", "title": "Optimization with non-differentiable constraints with applications to fairness, recall, churn, and other goals", "year": "2019" }, { "authors": "Katherine Alexander D'amour; Dan Heller; Ben Moldovan; Babak Adlam; Alex Alipanahi; Christina Beutel; Jonathan Chen; Jacob Deaton; Matthew D Eisenstein; Farhad Hoffman; Neil Hormozdiari; Shaobo Houlsby; Ghassen Hou; Alan Jerfel; Mario Karthikesalingam; Yian Lucic; Cory Ma; Diana Mclean; Akinori Mincu; Andrea Mitani; Zachary Montanari; Vivek Nado; Christopher Natarajan; Thomas F Nielson; Rajiv Osborne; Kim Raman; Rory Ramasamy; Jessica Sayres; Martin Schrouff; Shannon Seneviratne; Harini Sequeira; Victor Suresh; Max Veitch; Xuezhi Vladymyrov; Kellie Wang; Steve Webster; Taedong Yadlowsky; Xiaohua Yun; D Zhai; Sculley", "journal": "Journal of Machine Learning Research", "ref_id": "b3", "title": "Underspecification presents challenges for credibility in modern machine learning", "year": "2022" }, { "authors": "Jesse Dodge; Suchin Gururangan; Dallas Card; Roy Schwartz; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Show your work: Improved reporting of experimental results", "year": "2019" }, { "authors": "Jesse Dodge; Gabriel Ilharco; Roy Schwartz; Ali Farhadi; Hannaneh Hajishirzi; Noah Smith", "journal": "", "ref_id": "b5", "title": "Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping", "year": "2020" }, { "authors": "Katherine Alexander D'amour; Dan Heller; Ben Moldovan; Babak Adlam; Alex Alipanahi; Christina Beutel; Jonathan Chen; Jacob Deaton; Eisenstein; Matthew D Hoffman", "journal": "Journal of Machine Learning Research", "ref_id": "b6", "title": "Underspecification presents challenges for credibility in modern machine learning", "year": "2020" }, { "authors": "Katherine Alexander D'amour; Dan Heller; Ben Moldovan; Babak Adlam; Alex Alipanahi; Christina Beutel; Jonathan Chen; Jacob Deaton; Eisenstein; Matthew D Hoffman", "journal": "Journal of Machine Learning Research", "ref_id": "b7", "title": "Underspecification presents challenges for credibility in modern machine learning", "year": "2020" }, { "authors": "Jack Fitzgerald; Christopher Hench; Charith Peris; Scott Mackie; Kay Rottmann; Ana Sanchez; Aaron Nash; Liam Urbach; Vishesh Kakarala; Richa Singh", "journal": "", "ref_id": "b8", "title": "Massive: A 1m-example multilingual natural language understanding dataset with 51 typologically-diverse languages", "year": "2022" }, { "authors": "Stanislav Fort; Huiyi Hu; Balaji Lakshminarayanan", "journal": "", "ref_id": "b9", "title": "Deep ensembles: A loss landscape perspective", "year": "2019" }, { "authors": "Tommaso Furlanello; Zachary Lipton; Michael Tschannen; Laurent Itti; Anima Anandkumar", "journal": "", "ref_id": "b10", "title": "Born again neural networks", "year": "2018" }, { "authors": " Pmlr", "journal": "", "ref_id": "b11", "title": "", "year": "" }, { "authors": "Christopher Hidey; Fei Liu; Rahul Goel", "journal": "", "ref_id": "b12", "title": "Reducing model jitter: Stable re-training of semantic parsers in production environments", "year": "2022" }, { "authors": "Pavel Izmailov; T Dmitrii Podoprikhin; Garipov; P Dmitry; Andrew Vetrov; Wilson Gordon", "journal": "", "ref_id": "b13", "title": "Averaging weights leads to wider optima and better generalization", "year": "2018" }, { "authors": "Pavel Izmailov; Sharad Vikram; Matthew D Hoffman; Andrew Gordon; Gordon Wilson", "journal": "", "ref_id": "b14", "title": "What are bayesian neural network posteriors really like?", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Heinrich Jiang; Harikrishna Narasimhan; Dara Bahri; Andrew Cotter; Afshin Rostamizadeh", "journal": "", "ref_id": "b16", "title": "Churn reduction via distillation", "year": "2021" }, { "authors": "Diederik Kingma; Jimmy Ba", "journal": "", "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Stefan Larson; Anish Mahendran; Joseph J Peper; Christopher Clarke; Andrew Lee; Parker Hill; Jonathan K Kummerfeld; Kevin Leach; Michael A Laurenzano; Lingjia Tang; Jason Mars", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "An evaluation dataset for intent classification and out-of-scope prediction", "year": "2019" }, { "authors": "J Wesley; Pavel Maddox; Timur Izmailov; Garipov; P Dmitry; Andrew Vetrov; Wilson Gordon", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "A simple baseline for bayesian uncertainty in deep learning", "year": "2019" }, { "authors": "Shikib Mehri; Mihail Eric; Dilek Z Hakkani-Tür", "journal": "", "ref_id": "b20", "title": "Dialoglue: A natural language understanding benchmark for task-oriented dialogue", "year": "2020" }, { "authors": "Mahdi Milani Fard; Quentin Cormier; Kevin Canini; Maya Gupta", "journal": "", "ref_id": "b21", "title": "Launch and iterate: Reducing prediction churn", "year": "2016" }, { "authors": "Rafael Müller; Simon Kornblith; Geoffrey E Hinton", "journal": "", "ref_id": "b22", "title": "When does label smoothing help? Advances in neural information processing systems", "year": "2019" }, { "authors": "Zachary Nado; Neil Band; Mark Collier; Josip Djolonga; Michael W Dusenberry; Sebastian Farquhar; Qixuan Feng; Angelos Filos; Marton Havasi; Rodolphe Jenatton", "journal": "", "ref_id": "b23", "title": "Uncertainty baselines: Benchmarks for uncertainty & robustness in deep learning", "year": "2021" }, { "authors": "Victor Sanh; Lysandre Debut; Julien Chaumond; Thomas Wolf", "journal": "", "ref_id": "b24", "title": "Distilbert, a distilled version of bert: smaller, faster, cheaper and lighter", "year": "2019" }, { "authors": "Swabha Swayamdipta; Roy Schwartz; Nicholas Lourie; Yizhong Wang; Hannaneh Hajishirzi; Noah A Smith; Yejin Choi", "journal": "", "ref_id": "b25", "title": "Dataset cartography: Mapping and diagnosing datasets with training dynamics", "year": "2020" }, { "authors": "Christian Szegedy; Vincent Vanhoucke; Sergey Ioffe; Jon Shlens; Zbigniew Wojna", "journal": "", "ref_id": "b26", "title": "Rethinking the inception architecture for computer vision", "year": "2016" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b27", "title": "Glue: A multitask benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Florian Wenzel; Kevin Roth; S Bastiaan; Jakub Veeling; Linh Świ Ątkowski; Stephan Tran; Jasper Mandt; Tim Snoek; Rodolphe Salimans; Sebastian Jenatton; Nowozin", "journal": "", "ref_id": "b28", "title": "How good is the bayes posterior in deep neural networks really?", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 349.32, 499.8, 175.09, 33.98 ], "formula_id": "formula_0", "formula_text": "LE (i) m = K k=1 - n (i) k N log( n (i) k N ) (1)" }, { "formula_coordinates": [ 5, 144.16, 611.87, 144.97, 30.83 ], "formula_id": "formula_1", "formula_text": "q i = pi T K k=1 pk T(2)" } ]
2023-05-18
[ { "figure_ref": [ "fig_0" ], "heading": "INTRODUCTION", "publication_ref": [ "b4", "b5", "b26", "b31", "b29", "b41", "b17" ], "table_ref": [], "text": "The shadows are created when objects block a source of light. Singleimage shadow removal aims to reconstruct the shadow-free image from its degraded shadow counterpart, which is an important and non-trivial task in the computer vision field and can benefit many downstream tasks, e.g. object detection [5,6,27,32], objects tracking [30], and face recognition [42]. Despite significant progress, shadow removal still faces challenges. Even state-of-the-art shadow removal methods can produce de-shadowed results that contain artifacts in both shadow and non-shadow regions, e.g. color inconsistencies between shadow and non-shadow regions, as well as visible marks along the shadow boundary (see Fig. 1).\nThe existing shadow removal methods often rely on complex deep neural networks to reconstruct both shadow and non-shadow regions simultaneously. However, these methods overlook the fact that shadow removal involves two distinct tasks: restoring shadow regions to their shadow-free counterparts and identical mapping for non-shadow regions. As a consequence, these deep neural networks are optimized towards only one of these tasks during training instead of both, due to the shared weights and the poor compatibility of these tasks. To address this problem, in this work, we propose to handle these two tasks separately. Intuitively we can divide shadow removal into these two distinct tasks based on the binary shadow mask. However, due to the diverse properties of shadows in the real world, obtaining an accurate shadow mask that efficiently distinguishes between shadow and non-shadow regions is challenging or impossible. This is particularly true for areas around the shadow boundary, where a gradual transition occurs between shadow and non-shadow regions. Even the ground truth shadow masks provided by the commonly used shadow removal datasets, e.g. ISTD+ dataset [18], are not always precise and can not effectively differentiate between shadow and non-shadow regions (see the red and green areas in Fig. 2). Therefore, decoupling shadow removal into these two distinct tasks and processing them separately is challenging.\nTo tackle this issue, we claim that shadow removal can be decoupled by transferring identical mapping without explicitly distinguishing between shadow and non-shadow regions. Specifically, our approach consists of three components: an identical mapping branch (IMB) for processing non-shadow regions, an iterative deshadow branch (IDB) for shadow regions restoration based on identical results, and a smart aggregation block (SAB). The IMB aims to reconstruct an image that is identical to the input one, which can benefit the restoration of the non-shadow regions without explicitly distinguishing between shadow and non-shadow regions. The IDB is responsible for progressively transferring the information from the non-shadow regions to the shadow regions in an iterative manner to facilitate the process of shadow removal by utilizing the multi-scale features provided by IMB. The SAB is designed to adaptive integrate features from both IMB and IDB. Moreover, the SAB can generate finely tuned soft shadow masks at multiple feature levels (see Fig. 7 (c),(d), and (e)) to guide the process of removing shadows. In summary, this work makes the following contributions:\n❶ We are the first to decouple the shadow removal problem into two distinct tasks: restoring shadow regions to their shadow-free counterparts and identical mapping for non-shadow regions and propose a novel Dual-Branch shadow removal paradigm for solving this problem.\n❷ We propose a novel Dual-Branch shadow removal network that uses an identical mapping branch (IMB) to process the nonshadow regions, an iterative de-shadow branch (IDB) to process the shadow regions, and a smart aggregation block (SAB) to adaptive aggregate features from two branches.\n❸ The extensive experiments demonstrate our proposed method outperforms all previous state-of-the-art shadow removal approaches on several public shadow removal datasets, i.e. ISTD+ and SRD." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 General Shadow Removal Methods", "publication_ref": [ "b2", "b3", "b3", "b9", "b25", "b39", "b25", "b39", "b0", "b6", "b7", "b12", "b14", "b17", "b18", "b21", "b27", "b30", "b34", "b38", "b43", "b44", "b27", "b6", "b43", "b30", "b14", "b11", "b13", "b15", "b23", "b24" ], "table_ref": [], "text": "To restore the shadow-free image from the degraded shadow counterpart, traditional methods [3,4,4,10,26,40] rely on the priors information, e.g. gradients, illumination, and patch similarity. For example, [26] proposes a gradient-domain processing technique to adjust the softness of the shadows without introducing any artifacts. [10] uses a region-based approach that predicts relative illumination conditions between segmented regions to distinguish shadow regions and relight each pixel. [40] extends this region-based approach and constructs a novel illumination recovering operator to effectively remove the shadows and restore the detailed texture information based on the texture similarity between the shadow and non-shadow patches. Although well-designed, these methods have been surpassed by deep learning-based shadow removal methods recently [1,7,8,13,15,18,19,22,28,31,35,39,44,45]. Specifically, [28] is the pioneering work that uses an end-to-end network to tackle shadow detection and shadow removal problems by extracting multi-context features from the global and local regions. Since then, a large number of interesting deep learning-based methods have been proposed by focusing on different problem aspects. [7] reformulates the shadow removal task as an exposure problem and employs a neural network to predict the exposure parameters to get shadow-free images. [44] takes into account the auxiliary supervision of shadow generation in the shadow removal procedure and proposes a unified network to perform shadow removal and shadow generation. [31] explicitly considers the style consistency between shadow and non-shadow regions after shadow removal and proposes a style-guided shadow removal network. Although these methods achieve promising results, the problem of lacking large-scale paired training data becomes a bottleneck that limits their performance. To alleviate this problem, [15] designs a pipeline to generate a large-scale synthetic shadow dataset to improve the shadow removal performance. While the unsupervised or weekly supervised methods are also introduced [12,14,16,24,25] to train the deep neural network with unpaired data." }, { "figure_ref": [], "heading": "Iterative Network", "publication_ref": [ "b10", "b19", "b20", "b28", "b32", "b35", "b36", "b37", "b42", "b35", "b42", "b10", "b37", "b36", "b19" ], "table_ref": [], "text": "Iterative networks are extensively employed in machine learningbased image processing tasks [11,20,21,29,33,[36][37][38]43] to recursively and gradually improve the quality of predictions. For example, [36] uses a recurrent image-guided network to address challenges in depth prediction, where the recurrent is applied to both the image guidance branch and the depth generation branch to gradually and sufficiently recover depth values. [43] introduces an edge-guided recurrent positioning network for predicting salient objects in optical remote sensing images. The proposed approach can sharpen the predicted positions by utilizing the effective edge information and recurrently calibrating them during the prediction process. [11] introduces a cascaded recurrent neural network that utilizes gated recurrent units to effectively explore the redundant and complementary information present in hyperspectral images.\n[29] performs image Super-Resolution via Repeated Refinement, which employs a stochastic iterative denoising process to improve image super-resolution performance. [38] introduces an end-to-end trainable scene text recognition system that utilizes an iterative rectification framework to address the problem of perspective distortion and text line curvature. [37] introduces a deep iterative down-up convolutional neural network for image denoising that employs a resolution-adaptive approach by iteratively reducing and increasing the resolution at the feature level. [20] presents a recurrent feature reasoning (RFR) network for single-image inpainting that iteratively predicts hole boundaries at the feature level of a convolutional neural network, which then serves as cues for subsequent inference. Inspired by the success of these iterative networks, in this paper, we employ an iterative de-shadow branch (IDB) to gradually improve the performance of shadow removal." }, { "figure_ref": [], "heading": "DISCUSSION AND MOTIVATION", "publication_ref": [ "b6", "b27", "b43", "b33" ], "table_ref": [], "text": "We can divide the shadow removal task into two distinct tasks: 1) restoring shadow regions to their shadow-free counterparts and 2) identical mapping for non-shadow regions. Previous methods [7,28,44] use deep neural networks with shared weights to handle the two tasks. We argue that these networks are only optimized toward one of the tasks instead of both. In this section, we first conduct experiments to uncover the limitation of training a shared deep neural network to handle these two distinct tasks (See Sec. 3.1). Then, we illustrate the superiority of identical mapping for non-shadow regions. To do this, we utilize the Information in the Weights (IIW) [34] technique to demonstrate the benefits of restoring non-shadow regions using identical mapping training (See Sec. 3.2). Additionally, we employ an encoder-decoder architecture to explore the efficacy of the iterative technique for shadow removal and highlight its advantages and limitations (See Sec. 3.3)." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Mutual Interference of Shadow Removal Network", "publication_ref": [ "b17" ], "table_ref": [], "text": "To begin with, we define mutual interference as a phenomenon in which the performance of a shadow removal network increases in the shadow restoration task but decreases in the identical mapping task or vice versa. To uncover this phenomenon, we build a baseline encoder-decoder shadow removal network and evaluate its performance on the ISTD+ dataset [18] via root mean squared error (RMSE) every 10000 iterations during training. Specifically, given the shadow removal network trained at 𝑡th iteration, we can calculate the RMSEs at both shadow and non-shadow regions of testing images. Then, we can draw plots along the iteration indexes for both shadow and non-shadow regions (i.e., the red plot and green plot in Fig. 3 (a)). Then, we say that mutual interference occurs if the RMSE variation between two neighboring iterations at the shadow region is different from the one at the non-shadow region. We define the mutual interference ratio as the number of times mutual interference occurs during the training procedure.\nAs depicted in Fig. 3(c), we see that the baseline encoder-decoder network has a high mutual interference ratio (i.e., over 30 times within 100 counted iterations), which illustrates that a shared deep neural network can hardly cover the two distinct tasks at the same time." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Advantages of The Identical Mapping", "publication_ref": [ "b33" ], "table_ref": [], "text": "To analyze the potential advantages of identical mapping, we conduct two additional experiments using the same encoder-decoder network. For Exp1, the input is the shadow image, and the network is trained to remove shadows from that. While for Exp2, we aim to have the reconstructed result identical to the input shadow images (i.e., identical mapping). For both experiments, we evaluate the restoration quality in the non-shadow regions. Surprisingly, we observed that Exp2 had a significant advantage over Exp1. As shown in Fig. 4 (a), in the non-shadow regions, Exp2 has a much lower RMSE than Exp1 throughout the entire training procedure.\nTo further explore the potential functionality of identical mapping, we used the Information in the Weight (IIW) [34] technique to analyze the training procedure of Exp1 and Exp2. A lower IIW means the network has a higher generalization to different non-shadow scenes. The results are displayed in Fig. 4 (b). We observe that Exp2 substantially improved the model's generalization (i.e., lower IIW) in the non-shadow regions compared to Exp1, which demonstrates the efficacy of identical mapping in reconstructing non-shadow regions." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Iterative Network for Shadow Removal", "publication_ref": [], "table_ref": [], "text": "To investigate the effectiveness of the iterative network, we conduct a series of experiments using the same encoder-decoder structure in Sec. 3.1. Specifically, we feed the decoder's feature back into the encoder at various times and evaluate the restoration quality in both shadow and non-shadow regions. As depicted in Fig. 4(c), both the encoder-decoder structure and our proposed method show significant improvement in the restoration quality of shadow regions with more iterations (see the red line). However, we notice that for the encoder-decoder structure, the restoration quality in nonshadow regions decreases as the number of iterations increases (see the square green line). In contrast, with our method, the restoration quality in non-shadow regions remains nearly unchanged even with increasing iterations (see the dotted green line). In addition, we also calculate and visualize the L 1 differences between the ground truth and reconstructed results obtained by the encoder-decoder structure at different iterations. The results are shown in Fig. 5, where green arrows highlight differences in shadow regions, and red arrows highlight the differences in non-shadow regions. With a single iteration, the restoration quality is favorable in nonshadow regions but subpar in shadow regions, while the opposite trend is observed with two iterations. Overall, the above experiments present the necessity of decoupling the shadow removal task into two distinct tasks and the effectiveness of the combination of identical mapping and iterative shadow removal." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "METHOD 4.1 Overview", "publication_ref": [], "table_ref": [], "text": "The proposed method consists of three components: an identical mapping branch (IMB) (see Sec. 4.2), an iterative de-shadow branch (IDB) (see Sec. 4.3), and a smart aggregation block (SAB) (see Sec. 4.4). Given a shadow image, we decouple the shadow removal into two distinct tasks: restoring shadow regions to their shadow-free counterparts and identical mapping for non-shadow regions. We use IMB to handle the identical mapping task and use IDB to handle the shadow restoration task. The SAB is designed to adaptive integrate features from both IMB and IDB. To prove the advantage of our method, we conduct the same experiment as discussed in Sec. 3.1 by using our method. As shown in Fig. 3(b), for both tasks, our method exhibits less fluctuation in terms of RMSE compared to the encoder-decoder structure during training.\nBesides, our method also demonstrates a lower mutual interference ratio, as shown in Fig. 3(c)." }, { "figure_ref": [], "heading": "Identical Mapping Branch", "publication_ref": [], "table_ref": [], "text": "We propose the identical mapping branch (IMB) to reconstruct the input image via an encoder-decoder network, which can benefit the restoration of non-shadow regions. Given a shadow image I, the objective of IMB is to reconstruct Î, where Î should be identical to I. This procedure can be represented by Î = 𝜙 (I),\nwhere 𝜙 (•) denote the IMB. Let F 𝑙 denote the feature extracted from the 𝑙th convolution layer of 𝜙 (•). F 𝑙 can be formalized as\nF 𝑙 = 𝜙 𝑙 (. . . 𝜙 2 (𝜙 1 (I))),(2)\nwhere 𝜙 𝑙 (•) denotes the 𝑙th convolution layer of 𝜙 (•). After training the IMB, we freeze its parameters and rely solely on the multi-scale features, i.e. F 𝑙 , to guide the iterative de-shadow branch (IDB)." }, { "figure_ref": [], "heading": "Iterative De-shadow Branch", "publication_ref": [], "table_ref": [], "text": "The iterative de-shadow branch (IDB) is responsible for progressively transferring the information from the non-shadow regions to the shadow regions in an iterative manner to facilitate the process of shadow removal by utilizing the multi-scale features provided by IMB. Let 𝜓 (•) denotes the IDB and F𝑙 denote the feature extracted from the 𝑙th convolution layer of 𝜓 (•). F𝑙 can be formalized as\nF𝑙 = 𝜓 𝑙 (. . . 𝜓 2 (𝜓 1 (Cat(I, M))))(3)\nwhere Cat means channel-wise concatenation, 𝜓 𝑙 (•) denotes the 𝑙th convolution layer of 𝜓 (•) and M denotes the corresponding binary shadow mask of the shadow image I. The shadow and non-shadow regions are annotated by 1 and 0 respectively in M. Then we aggregate F 𝑙 and F𝑙 in an adaptive manner at the multi-scale features level (i.e. after the first, third, and second-to-last convolution layers of 𝜓 (•) as illustrated in Fig. 6). The procedure of the aggregation can be represented as\nF′ 𝑙 = SAB(F 𝑙 , F𝑙 ),(4)\nwhere F′ 𝑙 denotes the adaptive aggregated feature which will be used as the input of the next convolution layer of 𝜓 (•), i.e. 𝜓 𝑙+1 (•), and SAB denotes the smart aggregation block (see Sec. 4.4)." }, { "figure_ref": [], "heading": "Smart Aggregation Block", "publication_ref": [ "b27" ], "table_ref": [], "text": "Instead of directly concatenating the features extracted from the IMB and IDB, we propose to aggregate them in an adaptive manner. Specifically, we utilize a convolutional layer with a kernel size of 3x3 followed by a sigmoid activation function to estimate the adaptive aggregation weights, which can be represented as\n[W 𝑙 , Ŵ𝑙 ] = Sigmoid(Conv weight ( F𝑙 )),(5)\nwhere W 𝑙 , Ŵ𝑙 denote the corresponding aggregation weights of F 𝑙 , F𝑙 respectively. Since the IMB is frozen, F 𝑙 remains constant throughout the iterative procedure of IDB. Therefore, we utilize only F𝑙 to predict the aggregation weights. The whole aggregation procedure can be formalized as where ⊙ denotes the element-wise multiplication, Conv agg. denotes a convolution layer with a kernel size of 3x3, M𝑙 denotes the generated soft shadow mask which is obtained by applying average pooling along the channels of [W 𝑙 , Ŵ𝑙 ]. As shown in Fig. 7, the soft shadow masks (see (c)-(e)) obtained from the procedure of aggregation are capable of accurately capturing the shadow regions, in contrast to the binary shadow mask (see (b)) provided by the shadow removal dataset, i.e. SRD dataset [28], which can not capture the shadow details, especially for the regions along the shadow boundary.\nF′ 𝑙 = SAB(F 𝑙 , F𝑙 ) = Conv agg. (Cat(F 𝑙 ⊙ W 𝑙 , F𝑙 ⊙ Ŵ𝑙 , M𝑙 )),(6)" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b8", "b22", "b6", "b6" ], "table_ref": [ "tab_0" ], "text": "Network architectures. Following [9,23] the identical mapping branch 𝜙 (•) and the iterative de-shadow branch 𝜓 (•) employ a similar encoder-decoder architecture with different input and output, as shown in Table 1. Theoretically, the smart aggregation block can be added after each convolution layer of 𝜓 (•) to maximize its potential impact. However, to optimize computation efficiency, in our experiments, we selectively add the smart aggregation block after the first, third, and second-to-last layers of 𝜓 (•) to balance the computation and performance. Loss functions. Following the previous shadow removal method [7], we only employ the L 1 loss during the training process. Specifically, we first train the identical Mapping branch 𝜙 (•) with the objective function\nL 1 ( Î, I) = ∥ Î -I∥ 1 .(7)\nThen we freeze 𝜙 (•) and train the iterative de-shadow branch 𝜓 (•) with the same objective function\nL 2 (I * , I) = ∥I * -I∥ 1 ,(8)\nwhere I and I * denote the de-shadowed result and the corresponding ground truth shadow-free image, respectively. Training details. We adopt a two-step training strategy. Firstly, we exclusively utilize shadow images to train the identical mapping branch 𝜙 (•) for 500,000 iterations, employing a batch size of 8. Subsequently, we freeze 𝜙 (•) and utilize paired shadow & shadow-free images to train the iterative de-shadow branch 𝜓 (•) for 150,000 iterations with the same batch size. Following [7], we resize the input shadow image to 256x256 resolution. Both branches are optimized using the Adam optimizer with a learning rate of 0.00005. All experiments are conducted on the Linux server equipped with two NVIDIA Tesla V100 GPUs." }, { "figure_ref": [], "heading": "EXPERIMENTS 5.1 Setups", "publication_ref": [ "b30", "b17", "b27", "b6", "b1", "b30", "b43", "b16", "b40", "b17", "b18", "b6", "b23", "b15", "b24", "b43", "b30", "b12", "b1", "b6", "b15", "b43", "b30" ], "table_ref": [], "text": "Datasets. Following the previous shadow removal method [31], we conduct experiments on two widely used shadow removal datasets i.e. ISTD+ [18] and SRD [28]. The ISTD+ dataset consists of 1330 triplets for training and 540 triplets for testing. We use the provided ground truth masks directly during the training procedure. While in the evaluation step, we follow the previous method [7] and use Ostu's algorithm to detect the corresponding shadow masks. The SRD dataset contains 2680 paired shadow and shadow-free images for training and 408 paired shadow and shadow-free images for testing. Because the SRD does not provide the corresponding shadow masks, we use the shadow masks provided by DHAN [2] for both training and evaluation steps.\nMetrics. We adopt a comprehensive evaluation approach for assessing the performance of our proposed method. Firstly, we calculate the root mean square error (RMSE) in the LAB color space. Furthermore, following the previous approaches [31,44], we employ the commonly used image quality evaluation metrics, i.e. peak signal-to-noise ratio (PSNR) [17], structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPs) [41]. This allows us to thoroughly evaluate the restoration quality of our proposed method from multiple perspectives.\nBaselines. We conduct comprehensive comparisons with previous state-of-the-art shadow removal algorithms, including SP+M-Net [18], Param+M+D-Net [19], Fu et al. [7], LG-ShadowNet [24], DC-ShadowNet [16], G2R-ShadowNet [25], BMNet [44], and SG-ShadowNet [31] on the ISTD+ dataset. Additionally, we compare with DSC [13], DHAN [2], Fu et al. [7], DC-ShadowNet [16], BMNet [44], and SG-ShadowNet [31] on the SRD dataset." }, { "figure_ref": [ "fig_6" ], "heading": "Comparison Results", "publication_ref": [ "b6", "b18", "b6", "b18", "b6", "b18", "b1", "b43", "b1", "b43", "b1", "b43" ], "table_ref": [ "tab_1", "tab_2" ], "text": "Quantitative comparison. To validate the effectiveness of our proposed method, we first conduct a comprehensive comparison with recent state-of-the-art methods on the ISTD+ datasets. The results, as depicted in Table 2, clearly demonstrate our method's superiority in terms of reconstruction quality, as evaluated by multiple metrics, including RMSE, PSNR, SSIM, and LPIPS. Specifically, for the comparison at the whole image level, our method outperforms all the competitors. Compared to Fu et al. [7], our method achieves a reduction of 20.50% in RMSE and 68.38% in LPIPS, as well as an increase of 15.32% in PSNR and 14.21% in SSIM. Similarly, compared to Param+M+D-Net [19], our method demonstrates superior performance with a reduction of 15.92% in RMSE and 30.30% in LPIPS, as well as an increase of 12.68% in PSNR and 1.89% in SSIM. For the comparison in the shadow regions, our method also outperforms other methods. Compared to Fu et al. [7], our method achieves a reduction of 9.80% in RMSE, as well as an increase of 5.14% in PSNR and 1.30% in SSIM. When compared to Param+M+D-Net [19], our method achieves a reduction of 38.87% in RMSE, as well as an increase of 13.96% in PSNR and 0.47% in SSIM. For the comparison in the non-shadow regions, our method continues to outperform other methods. Compared to Fu et al. [7], our method achieves a reduction of 24.12% in RMSE, as well as an increase of 19.45% in PSNR and 11.54% in SSIM. Additionally, when compared to Param+M+D-Net [19], our method achieves a reduction of 1.06% in RMSE, as well as an increase of 7.89% in PSNR and 0.43% in SSIM.\nTo further substantiate the effectiveness of our proposed method, we conducted additional comparison experiments on the SRD dataset. The results, as presented in Table 3, demonstrate the superiority of our method over other state-of-the-art shadow removal approaches. Our method exhibits a significant margin of improvement across all evaluation metrics. Specifically, for the comparison at the whole image level, our method outperforms all the competitors. Compared to DHAN [2], our method achieves a reduction of 22.20% in RMSE and 9.09% in LPIPS, as well as an increase of 9.46% in PSNR and 1.47% in SSIM. Compared to BMNet [44], our method demonstrates superior performance with a reduction of 14.39% in RMSE and 11.87% in LPIPS, as well as an increase of 5.30% in PSNR and 0.41% in SSIM. For the comparison in the shadow regions, our method also outperforms other methods. Compared to DHAN [2], our method achieves a reduction of 24.44% in RMSE, as well as an increase of 7.15% in PSNR and 0.41% in SSIM. When compared to BMNet [44], our method achieves a reduction of 15.90% in RMSE, as well as an increase of 6.12% in PSNR and 0.43% in SSIM. For the comparison in the non-shadow regions, our method continues to outperform other methods. Compared to DHAN [2], our method achieves a reduction of 20.31% in RMSE, as well as an increase of 9.22% in PSNR and 0.93% in SSIM. Additionally, when compared to BMNet [44], our method achieves a reduction of 13.13% in RMSE, as well as an increase of 2.65% in PSNR and 0.04% in SSIM. These comparison results unequivocally support the effectiveness of our proposed method and its superiority over the state-of-the-art methods in relation to reconstruction quality in both the shadow regions and non-shadow regions.\nQualitative comparison. We compared our visualized results with other state-of-the-art shadow removal methods on both ISTD+ and SRD datasets. As shown in Fig. 8, our method consistently outperforms the competitors in two aspects: ❶ Our reconstructed results exhibit superior color consistency. In particular, for case 2 and case 4, our method produces color-consistent results where the shadow regions and non-shadow regions are nearly indistinguishable from the human eye. In contrast, the competitors' results show " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "In this section, we conduct comprehensive ablation experiments to validate each part of the proposed method, and the results are displayed in Table 4. To further demonstrate the necessity of each smart aggregation block, we conduct experiments to remove them individually. As shown in (e)-(g), we find that removing any of the smart aggregation blocks leads to a decrease in reconstruction quality across all metrics. At the whole image level, removing the SAB after the first layer leads to an increase of 4.18% in RMSE and 2.08% in LPIPS, and a decrease of 0.82% in PSNR and 0.19% in SSIM. Removing the SAB after the third layer leads to an increase of 5.82% in RMSE and 1.32% in LPIPS, and a decrease of 0.50% in PSNR and 0.09% in SSIM. Removing the SAB after the second-to-last layer leads to an increase of 8.47% in RMSE and 7.94% in LPIPS, and a decrease of 0.53% in PSNR and 0.29% in SSIM.\n5." }, { "figure_ref": [], "heading": "5.3.2", "publication_ref": [], "table_ref": [], "text": "Effectiveness of the soft mask. Besides, we evaluate the significance of the soft mask M𝑙 produced in the smart aggregation block by removing it directly. As shown in (d), we observe that without M𝑙 , the reconstruction quality significantly deteriorates. Specifically, it leads to an increase of 3.73% in RMSE and 2.46% in LPIPS, and a decrease of 0.94% in PSNR and 0.14% in SSIM at the whole image level." }, { "figure_ref": [], "heading": "Effectiveness of the Dual-Branch shadow removal paradigm.", "publication_ref": [], "table_ref": [], "text": "Furthermore, we compare our method with an encoder-decoder architecture in two scenarios. Firstly, we evaluate a single saved model that is optimal for the whole image. As shown in (h), our method outperformed this scenario. Specifically, at the whole image level, our method achieves a reduction of 13.04% in RMSE and 17.08% in LPIPS, as well as an increase of 2.66% in PSNR and 0.56% in SSIM. Secondly, we evaluated two saved models that are optimal for the shadow regions and non-shadow regions, respectively. To obtain a de-shadowed clean image, we can combine the restored results of these two selected models using the binary shadow masks. As shown in (i), our method also outperforms this scenario. Specifically, at the whole image level, our method achieves a reduction of 11.78% in RMSE and 14.54% in LPIPS, as well as an increase of 2.82% in PSNR and 0.46% in SSIM. Finally, we evaluate the performance of our method by iterating at different times. As shown in (j)-(m), increasing the number of iterations significantly improves performance in the shadow regions, with only a negligible decrease in performance in the non-shadow regions. Specifically, compared to iteration-1, our method reduces the RMSE in shadow regions by 9.41% while only increasing it in non-shadow regions by 2.11%. Compared to iteration-2, our method surprisingly reduces the RMSE in both shadow and non-shadow regions by 5.87% and 0.51%, respectively. Similarly, in comparison to iteration-3, our method achieves a reduction in RMSE of 4.78% and 0.41% in shadow and non-shadow regions, respectively." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this work, we first identify the limitation of existing shadow removal approaches that use a shared model to restore both shadow and non-shadow regions. To overcome this limitation, we propose to decouple shadow removal into two distinct tasks: restoring shadow regions to their shadow-free counterparts and identical mapping for non-shadow regions. Specifically, our proposed method comprises three components. Firstly, we employ an identical mapping branch (IMB) to handle the non-shadow regions. Secondly, we use an iterative de-shadow branch (IDB) to handle the shadow regions by progressively transferring information from the non-shadow regions to the shadow regions in an iterative manner, which facilitates the process of shadow removal. Finally, we design a smart aggregation block (SAB) to adaptive integrate features from both IMB and IDB. The extensive experiments demonstrate the superiority of our proposed method over all state-of-the-art competitors." } ]
Our Method Figure 1: Left: the existing shadow removal methods that use a shared model for restoring both shadow and non-shadow regions. Right: our proposed method that separates shadow removal into two distinct tasks: restoring shadow regions to their shadow-free counterparts and identical mapping for non-shadow regions. The green squares represent the patches in the shadow image. The blue and purple squares represent the patches reconstructed by the shared model and our proposed method, respectively. The solid squares denote the patches in non-shadow regions, and the dashed squares denote the patches in shadow regions.
Learning Restoration is Not Enough: Transfering Identical Mapping for Single-Image Shadow Removal
[ { "figure_caption": "Figure 2 :2Figure 2: Visualization of the shadow image and its corresponding ground truth shadow mask. The red square in (a) highlights the gradual transition between the shadow and non-shadow regions, while the red square in (b) denotes the corresponding shadow mask in the same area. The green rectangle in (a) and (b) denote the inconsistency between the shadow regions and the corresponding ground truth shadow mask.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: (a) shows the fluctuation in terms of RMSE during the training in both shadow regions (represented by red squares) and non-shadow regions (represented by green squares) by using the encoder-decoder-based architecture. (b) demonstrate the fluctuation in terms of RMSE during the training in both shadow regions (i.e., red points) and non-shadow regions (i.e., green points) by using our proposed method. In (a) and (b), the red horizontal lines and the red rectangles indicate the mean and variance of RMSEs in shadow regions, while the green horizontal line and the green rectangles denote the mean and variance of RMSEs in non-shadow regions. (c) presents the mutual interference ratio between shadow and non-shadow restoration.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: (a) illustrates the RMSE fluctuation in Exp1 (represented by blue points) and Exp2 (represented by magenta points), respectively, in the nonshadow regions. (b) illustrates the IIW fluctuation in Exp1 (represented by blue points) and Exp2 (represented by magenta points), respectively, in the non-shadow regions. In (c), we compare the encoder-decoder structure with our proposed method in both shadow and non-shadow regions using different numbers of iterations.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualized results of the encoder-decoder-based iterative approach by using the number of iterations 1 and 2. (a) and (d) show the shadow image and its corresponding ground truth. (b) and (e) display the reconstruction result with the number of iteration 1 and the L1 difference between the reconstructed image and the ground truth. Similarly, (c) and (f) depict the reconstruction result with the number of iterations 2 and the L1 difference between the reconstructed image and the ground truth.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :Figure 7 :67Figure 6: (a) and (b) show the architecture of our proposed method and the smart aggregation block, respectively. In (a), the top section represents the identical mapping branch (IMB), and the bottom section represents the iterative de-shadow branch (IDB). The symbol L denotes the number of convolution layers in the IDB.", "figure_data": "", "figure_id": "fig_5", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Visualization results: the first two rows are from the ISTD+ dataset, and the last two rows are from the SRD dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Detailed architecture of our method. The output size of each layer is defined as H × W. The parameters of 'Conv&ConvTran' are the numbers of input and output channels, kernel size, stride, and padding, respectively. L denotes the number of convolution layers.", "figure_data": "DIDB(𝜓 ( •))IMB (𝜙 ( •))InputOutput Output size OperationInput Output Output size Operation[I, M]F1256 × 256Conv(4, 64, 7, 1, 3), ReLUIF 1256 × 256Conv(3, 64, 7, 1, 3), ReLU[F 1 , F1 ] F′ 1 F2F′ 1 F2 F3256 × 256 128 × 128 64 × 64SAB Conv(64, 128, 4, 2, 1), ReLU Conv(128, 256, 4, 2, 1), ReLU F 2 F 1F 2 F 3128 × 128 64 × 64Conv(64, 128, 4, 2, 1), ReLU Conv(128, 256, 4, 2, 1), ReLU[F 3 , F3 ]F′ 364 × 64SABResnet x 8Resnet x 8F′ 3 F4F4 F464 × 64 64 × 64Conv(256, 256, 3, 1, 1), ReLU F 3 Conv(256, 256, 3, 1, 1), ReLU F 4F 4 F 464 × 64 64 × 64Conv(256, 256, 3, 1, 1), ReLU Conv(256, 256, 3, 1, 1), ReLUF𝐿-3F𝐿-2128 × 128Conv(256, 128, 4, 2, 1), ReLU F 𝐿-3 F 𝐿-2128 × 128Conv(256, 128, 4, 2, 1), ReLUF𝐿-2F𝐿-1256 × 256Conv(128, 64, 4, 2, 1), ReLUF 𝐿-2 F 𝐿-1256 × 256Conv(128, 64, 4, 2, 1), ReLU[F 𝐿-1 , F𝐿-1 ]F′ 𝐿-1256 × 256SABF′ 𝐿-1F𝐿256 × 256Conv(64, 3, 7, 1, 3)", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison results on ISTD+ dataset[18].", "figure_data": "MethodAll RMSE↓ PSNR↑ SSIM↑ LPIPS↓ RMSE↓ PSNR↑ SSIM↑ RMSE↓ PSNR↑ SSIM↑ Shadow Non-ShadowSP+M-Net[18]3.61032.330.94790.07167.20536.160.98712.91335.840.9723Param+M+D-Net[19]4.04530.120.94200.07599.71433.590.98502.93534.330.9723Fu et al.[7]4.27829.430.84040.16736.58336.410.97693.82731.010.8755LG-ShadowNet[24]4.40229.200.93350.09209.70932.650.98063.36333.360.9683DC-ShadowNet[16]4.78128.760.92190.111210.43432.200.97583.67433.210.9630G2R-ShadowNet[25]3.97030.490.93300.08688.87234.010.97703.01034.620.9707BMNet[44]3.59532.300.95510.05676.18937.300.98993.08735.060.9738SG-ShadowNet[31]3.53132.410.95240.05946.01937.410.98933.04434.950.9725Ours3.40133.940.9598 0.05295.93838.280.98962.90437.040.9765", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison results on SRD dataset[28]. The mask boundary in our reconstructed results is smoother and more seamless. For case 1, the mask boundary in the competitors' results is clearly visible, whereas it is imperceptible in our method. For case 3, the competitors' results exhibit ghosting artifacts around the mask boundary, while our result does not show any artifacts around the mask boundary.", "figure_data": "MethodAll RMSE↓ PSNR↑ SSIM↑ LPIPS↓ RMSE↓ PSNR↑ SSIM↑ RMSE↓ PSNR↑ SSIM↑ Shadow Non-ShadowDSC[13]5.70429.010.90440.11458.82834.200.97024.50931.850.9555DHAN[2]4.66630.670.92780.07927.77137.050.98183.48632.980.9591Fu et al.[7]6.26927.900.84300.18208.92736.130.97425.25929.430.8888DC-ShadowNet[16]4.89330.750.91180.10848.10336.680.97593.67433.100.9540BMNet[44]4.24031.880.93760.08176.98237.410.98163.19835.090.9676SG-ShadowNet[31]4.29731.310.92730.08357.56436.550.98073.05634.230.9611Ours3.63033.570.9414 0.07205.87239.700.98582.77836.020.9680obvious color inconsistency. ❷", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "3.1 Effectiveness of SAB. Firstly, we evaluate the effectiveness of the smart aggregation block by comparing it with three substituted aggregation operations: feature addition, which adds F 𝑙 and F𝑙 ; feature multiplication, which multiplies F 𝑙 and F𝑙 ; and feature concatenation, which concatenates F 𝑙 and F𝑙 .", "figure_data": "Empowered by thesmart aggregation block, our method achieves the highest recon-struction quality across all evaluation metrics, as demonstrated in(a)-(c). Specifically, at the whole image level, replacing the smartaggregation block with feature addition leads to an increase of4.23% in RMSE and 3.59% in LPIPS, and a decrease of 0.53% in PSNRand 0.17% in SSIM. Replacing the smart aggregation block withfeature multiplication leads to an increase of 11.67% in RMSE and8.51% in LPIPS, and a decrease of 2.80% in PSNR and 0.46% in SSIM.Replacing the smart aggregation block with feature concatenationleads to an increase of 3.29% in RMSE and 0.19% in LPIPS, and adecrease of 0.09% in PSNR and 0.03% in SSIM.", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Ablation study on ISTD+ dataset[18].", "figure_data": "MethodAll RMSE↓ PSNR↑ SSIM↑ LPIPS↓ RMSE↓ PSNR↑ SSIM↑ RMSE↓ PSNR↑ SSIM↑ Shadow Non-Shadow(a) Feature addition3.54533.760.95820.05486.02138.320.98933.06036.770.9755(b) Feature multiplication3.79832.990.95540.05746.87937.220.98823.19536.410.9744(c) Feature concatenation3.51333.910.95950.05306.12138.370.98953.00237.040.9764(d) SF w/o soft-mask3.52833.620.95850.05426.29338.090.98922.98736.690.9756(e) w/o SAB-13.54333.660.95800.05406.24638.270.98943.01336.700.9750(f) w/o SAB-23.59933.770.95890.05366.54938.260.98913.02237.000.9764(g) w/o SAB-L -13.68933.760.95700.05716.31438.260.98893.17536.820.9744(h) One encoder-decoder3.91133.060.95450.06387.31937.040.98753.24336.590.9743(i) Two encoder-decoder3.85533.010.95540.06197.22736.960.98803.19436.540.9748(j) Iteration-13.45133.570.95980.05416.55537.520.98902.84437.230.9778(k) Iteration-23.47433.600.95890.05416.30837.910.98942.91936.870.9762(l) Iteration-33.46033.780.95910.05386.23637.990.98942.91637.030.9765(m) Iteration-4(Ours)3.40133.940.9598 0.05295.93838.280.98962.90437.040.97655.3.4 Effectiveness of Iterative.", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Xiaoguang Li; Qing Guo; Pingping Cai; Wei Feng; Ivor Tsang; Song Wang
[ { "authors": "Ryo Abiko; Masaaki Ikehara", "journal": "IEEE Access", "ref_id": "b0", "title": "Channel attention GAN trained with enhanced dataset for single-image shadow removal", "year": "2022" }, { "authors": "Xiaodong Cun; Chi-Man Pun; Cheng Shi", "journal": "", "ref_id": "b1", "title": "Towards ghost-free shadow removal via dual hierarchical aggregation network and shadow matting GAN", "year": "2020" }, { "authors": "Mark S Graham D Finlayson; Cheng Drew; Lu", "journal": "International Journal of Computer Vision", "ref_id": "b2", "title": "Entropy minimization for shadow removal", "year": "2009" }, { "authors": "Steven D Graham D Finlayson; Cheng Hordley; Mark S Lu; Drew", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b3", "title": "On the removal of shadows from images", "year": "2005" }, { "authors": "Lan Fu; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Wei Feng; Yang Liu; Song Wang", "journal": "", "ref_id": "b4", "title": "Benchmarking shadow removal for facial landmark detection and beyond", "year": "2021" }, { "authors": "Lan Fu; Hongkai Yu; Xiaoguang Li; Craig P Przybyla; Song Wang", "journal": "IEEE Signal Processing Magazine", "ref_id": "b5", "title": "Deep Learning for Object Detection in Materials-Science Images: A tutorial", "year": "2021" }, { "authors": "Lan Fu; Changqing Zhou; Qing Guo; Felix Juefei-Xu; Hongkai Yu; Wei Feng; Yang Liu; Song Wang", "journal": "", "ref_id": "b6", "title": "Auto-exposure fusion for single-image shadow removal", "year": "2021" }, { "authors": "Jianhao Gao; Quanlong Zheng; Yandong Guo", "journal": "", "ref_id": "b7", "title": "Towards real-world shadow removal with a shadow simulation method and a two-stage framework", "year": "2022" }, { "authors": "Qing Guo; Xiaoguang Li; Felix Juefei-Xu; Hongkai Yu; Yang Liu; Song Wang", "journal": "", "ref_id": "b8", "title": "JPGNet: Joint Predictive Filtering and Generative Network for Image Inpainting", "year": "2021" }, { "authors": "Ruiqi Guo; Qieyun Dai; Derek Hoiem", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b9", "title": "Paired regions for shadow detection and removal", "year": "2012" }, { "authors": "Renlong Hang; Qingshan Liu; Danfeng Hong; Pedram Ghamisi", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b10", "title": "Cascaded recurrent neural networks for hyperspectral image classification", "year": "2019" }, { "authors": "Yingqing He; Yazhou Xing; Tianjia Zhang; Qifeng Chen", "journal": "", "ref_id": "b11", "title": "Unsupervised Portrait Shadow Removal via Generative Priors", "year": "2021" }, { "authors": "Xiaowei Hu; Chi-Wing Fu; Lei Zhu; Jing Qin; Pheng-Ann Heng", "journal": "IEEE TPAMI", "ref_id": "b12", "title": "Direction-aware spatial context features for shadow detection and removal", "year": "2019" }, { "authors": "Xiaowei Hu; Yitong Jiang; Chi-Wing Fu; Pheng-Ann Heng", "journal": "", "ref_id": "b13", "title": "Mask-ShadowGAN: Learning to remove shadows from unpaired data", "year": "2019" }, { "authors": "Naoto Inoue; Toshihiko Yamasaki", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b14", "title": "Learning from synthetic shadows for shadow detection and removal", "year": "2020" }, { "authors": "Yeying Jin; Aashish Sharma; Robby T Tan", "journal": "", "ref_id": "b15", "title": "DC-ShadowNet: Single-Image Hard and Soft Shadow Removal Using Unsupervised Domain-Classifier Guided Network", "year": "2021" }, { "authors": "Justin Johnson; Alexandre Alahi; Li Fei-Fei", "journal": "Springer", "ref_id": "b16", "title": "Perceptual losses for realtime style transfer and super-resolution", "year": "2016" }, { "authors": "Hieu Le; Dimitris Samaras", "journal": "", "ref_id": "b17", "title": "Shadow removal via shadow image decomposition", "year": "2019" }, { "authors": "Hieu Le; Dimitris Samaras", "journal": "Springer", "ref_id": "b18", "title": "From shadow segmentation to shadow removal", "year": "2020" }, { "authors": "Jingyuan Li; Ning Wang; Lefei Zhang; Bo Du; Dacheng Tao", "journal": "", "ref_id": "b19", "title": "Recurrent feature reasoning for image inpainting", "year": "2020" }, { "authors": "Jiankun Li; Peisen Wang; Pengfei Xiong; Tao Cai; Ziwei Yan; Lei Yang; Jiangyu Liu; Haoqiang Fan; Shuaicheng Liu", "journal": "", "ref_id": "b20", "title": "Practical stereo matching via cascaded recurrent network with adaptive correlation", "year": "2022" }, { "authors": "Xiaoguang Li; Qing Guo; Rabab Abdelfattah; Di Lin; Wei Feng; Ivor Tsang; Song Wang", "journal": "", "ref_id": "b21", "title": "Leveraging Inpainting for Single-Image Shadow Removal", "year": "2023" }, { "authors": "Xiaoguang Li; Qing Guo; Di Lin; Ping Li; Wei Feng; Song Wang", "journal": "", "ref_id": "b22", "title": "MISF: Multi-level Interactive Siamese Filtering for High-Fidelity Image Inpainting", "year": "2022" }, { "authors": "Zhihao Liu; Hui Yin; Yang Mi; Mengyang Pu; Song Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b23", "title": "Shadow removal by a lightness-guided network with training on unpaired data", "year": "2021" }, { "authors": "Zhihao Liu; Hui Yin; Xinyi Wu; Zhenyao Wu; Yang Mi; Song Wang", "journal": "", "ref_id": "b24", "title": "From Shadow Generation to Shadow Removal", "year": "2021" }, { "authors": "Ankit Mohan; Jack Tumblin; Prasun Choudhury", "journal": "IEEE Computer Graphics and Applications", "ref_id": "b25", "title": "Editing soft shadows in a digital photograph", "year": "2007" }, { "authors": "Sohail Nadimi; Bir Bhanu", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b26", "title": "Physical models for moving shadow and object detection in video", "year": "2004" }, { "authors": "Liangqiong Qu; Jiandong Tian; Shengfeng He; Yandong Tang; Rynson Wh Lau", "journal": "", "ref_id": "b27", "title": "Deshadownet: A multi-context embedding deep network for shadow removal", "year": "2017" }, { "authors": "Chitwan Saharia; Jonathan Ho; William Chan; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b28", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "Andres Sanin; Conrad Sanderson; Brian C Lovell", "journal": "", "ref_id": "b29", "title": "Improved shadow removal for robust person tracking in surveillance scenarios", "year": "2010" }, { "authors": "Jin Wan; Hui Yin; Zhenyao Wu; Xinyi Wu; Yanting Liu; Song Wang", "journal": "", "ref_id": "b30", "title": "Style-Guided Shadow Removal", "year": "2022" }, { "authors": "Chien-Yao Wang; Alexey Bochkovskiy; Hong-Yuan Mark Liao", "journal": "", "ref_id": "b31", "title": "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors", "year": "2022" }, { "authors": "Dong Wang; Xiao-Ping Wang", "journal": "Pattern Recognition", "ref_id": "b32", "title": "The iterative convolution-thresholding method (ICTM) for image segmentation", "year": "2022" }, { "authors": "Zifeng Wang; Shao-Lun; Ercan E Huang; Jimeng Kuruoglu; Xi Sun; Yefeng Chen; Zheng", "journal": "", "ref_id": "b33", "title": "PAC-bayes information bottleneck", "year": "2021" }, { "authors": "Jinjiang Wei; Chengjiang Long; Hua Zou; Chunxia Xiao", "journal": "Computer Graphics Forum", "ref_id": "b34", "title": "Shadow inpainting and removal using generative adversarial networks with slice convolutions", "year": "2019" }, { "authors": "Zhiqiang Yan; Kun Wang; Xiang Li; Zhenyu Zhang; Jun Li; Jian Yang", "journal": "Springer", "ref_id": "b35", "title": "RigNet: Repetitive image guided network for depth completion", "year": "2022-10-23" }, { "authors": "Songhyun Yu; Bumjun Park; Jechang Jeong", "journal": "", "ref_id": "b36", "title": "Deep iterative down-up cnn for image denoising", "year": "2019" }, { "authors": "Fangneng Zhan; Shijian Lu", "journal": "", "ref_id": "b37", "title": "Esir: End-to-end scene text recognition via iterative image rectification", "year": "2019" }, { "authors": "Ling Zhang; Chengjiang Long; Xiaolong Zhang; Chunxia Xiao", "journal": "", "ref_id": "b38", "title": "Risgan: Explore residual and illumination with generative adversarial networks for shadow removal", "year": "2020" }, { "authors": "Ling Zhang; Qing Zhang; Chunxia Xiao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b39", "title": "Shadow remover: Image shadow removal based on illumination recovering optimization", "year": "2015" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b40", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Wuming Zhang; Xi Zhao; Jean-Marie Morvan; Liming Chen", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b41", "title": "Improving shadow suppression for illumination robust face recognition", "year": "2018" }, { "authors": "Xiaofei Zhou; Kunye Shen; Li Weng; Runmin Cong; Bolun Zheng; Jiyong Zhang; Chenggang Yan", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b42", "title": "Edge-guided recurrent positioning network for salient object detection in optical remote sensing images", "year": "2022" }, { "authors": "Yurui Zhu; Jie Huang; Xueyang Fu; Feng Zhao; Qibin Sun; Zheng-Jun Zha", "journal": "", "ref_id": "b43", "title": "Bijective Mapping Network for Shadow Removal", "year": "2022" }, { "authors": "Yurui Zhu; Zeyu Xiao; Yanchi Fang; Xueyang Fu; Zhiwei Xiong; Zheng-Jun Zha", "journal": "", "ref_id": "b44", "title": "Efficient Model-Driven Network for Shadow Removal", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 397.6, 234.16, 160.6, 9.38 ], "formula_id": "formula_1", "formula_text": "F 𝑙 = 𝜙 𝑙 (. . . 𝜙 2 (𝜙 1 (I))),(2)" }, { "formula_coordinates": [ 4, 383.66, 375.55, 174.54, 10.92 ], "formula_id": "formula_2", "formula_text": "F𝑙 = 𝜓 𝑙 (. . . 𝜓 2 (𝜓 1 (Cat(I, M))))(3)" }, { "formula_coordinates": [ 4, 408.1, 483.92, 150.1, 11.39 ], "formula_id": "formula_3", "formula_text": "F′ 𝑙 = SAB(F 𝑙 , F𝑙 ),(4)" }, { "formula_coordinates": [ 4, 369.73, 617.73, 188.47, 12.05 ], "formula_id": "formula_4", "formula_text": "[W 𝑙 , Ŵ𝑙 ] = Sigmoid(Conv weight ( F𝑙 )),(5)" }, { "formula_coordinates": [ 4, 329.87, 695.69, 228.33, 11.39 ], "formula_id": "formula_5", "formula_text": "F′ 𝑙 = SAB(F 𝑙 , F𝑙 ) = Conv agg. (Cat(F 𝑙 ⊙ W 𝑙 , F𝑙 ⊙ Ŵ𝑙 , M𝑙 )),(6)" }, { "formula_coordinates": [ 5, 404.26, 309.26, 153.94, 10.92 ], "formula_id": "formula_6", "formula_text": "L 1 ( Î, I) = ∥ Î -I∥ 1 .(7)" }, { "formula_coordinates": [ 5, 400.57, 353.71, 157.63, 11.88 ], "formula_id": "formula_7", "formula_text": "L 2 (I * , I) = ∥I * -I∥ 1 ,(8)" }, { "formula_coordinates": [ 7, 53.8, 492.67, 6.48, 8.04 ], "formula_id": "formula_8", "formula_text": "5." } ]
10.1007/s10439-023-03231-z
2023-05-19
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b10", "b10" ], "table_ref": [], "text": "The recent advancement of natural language processing is currently being exemplified by the large language model (LLMs) such as GPT-3 [1], PaLM [2], Galactica [3] and LLaMA [4]. The models have been trained on large amount of text-data and are able to answer questions, generate coherent text and complete most language related tasks. LLMs have been touted to have impact in domains such as climate science [5], health [6] and education [7]. In education, it has been suggested that they can be exploited to boost learning in different categories such as in elementary school children, middle and high school children, university students etc [8]. This is line with a long-time goal of AI to develop conversational agents that can support teachers in guiding children through reading material such as reading storybooks [9] [10]. Normally, in reading a text such as children's storybook, a teacher is expected to guide the children through the text and periodically gauge their understanding by posing questions from the text. The key concept in guided reading is the ability to use questions to gauge understanding and encourage deeper thinking about the material being read. In a teacher led guided reading, apart from gauging understanding, questions can be used to identify children's support needs and enable the teacher to direct attention to the critical content. To achieve the full benefits of guided reading, the teacher is supposed to ask wide variety of questions ranging from low to high cognitive challenge questions [11]. Low cognitive challenge questions are constrained to short answers while high cognitive challenge questions require explanations, evaluation or extension of text [11]. The use of questions to foster understanding and learning from text is well established across a range of age groups and learning contexts [11]. It is therefore of interest to gauge the effectiveness of LLMs to perform the tasks involved in guided reading. For LLMs to be viewed as potential support agents for teachers or even stand-alone tools that can help in guided reading, they must be able to: generate meaningful questions and answers from the text, generate diverse questions both in terms of content coverage and difficulty of questions and identify the support needs for the students. In this work, we investigate the suitability of ChatGPT 1 and Bard2 to act as a storybook reading guide for children. Specifically, we evaluate them on the following issues:\n1. Ability to generate content related questions and answers from a given input text i.e., its performance in question-answer generation(QAG) task.\n2. Ability to generate both low and high cognitive demand questions.\n3. Ability to generate diverse questions i.e., questions that cover almost all topics in each story.\n4. Ability to recommend areas that a student needs to focus on based on wrong responses from the student 5. Compare their performance to the currently existing AI-based tools for educational question generation." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b11", "b12", "b13", "b14", "b11", "b15", "b16", "b17", "b18", "b19", "b11", "b8", "b9", "b20", "b22", "b10", "b7", "b23", "b24", "b9" ], "table_ref": [], "text": "Question generation tools seek to accept input text and generate meaningful questions that are extracted from the text. Existing question generation tools can be categorised into two i.e., rule-based tools and neural based tools [12]. Rule based tools such as [13] and [14] exploit manually crafted rules to extract questions from text. Neural based techniques implement an end-to-end architecture that follow attention-based sequence-to-sequence framework [15]. The sequence-to-sequence frameworks are mainly composed of two key parts i.e., the encoder which learns a joint representation of the input text and the decoder which generates the questions [12]. Currently, both the joint representation learning, and question generation are implemented by attention-based framework [16]. Work in [17] introduced the attention-based sequence-to-sequence architecture to generate question from the input sentence. The encoder was implemented via RNN to accept an input text and learn its representation. Its output is fed into the decoder which generates a related question. The decoder exploits attention mechanism to assign more weight to the most relevant part of the text. Other question asking tools that implement attention-based sequence-to sequence framework include [18], [19] and [20]. Although these neural based tools have a common high-level encoder-decoder structure, their low-level implementations of different aspects of question generation differ significantly [12]. While these tools are generally developed for question generation, there are tool such as [9] and [10] which target question generation for educational purposes. The use of questions in guided reading has been widely studied [21] [22] [23]. Questions are used during guided reading to evaluate understanding, encourage deeper thinking about the text and to scaffold understanding of challenging text [11]. It has been suggested in [8] that large language models can play a significant role in boosting education of different levels of children. It is therefore important to evaluate them if indeed they are fit for purpose in which they are being deployed in. In this work we evaluate their ability to participate in guided reading and specifically, evaluate their question generation ability. Are they able to follow the general trend of a human teacher in asking question during comprehension reading ?\n3 Question and answer generation (QAG).\n3.1 Ability to generate meaningful questions.\nLLMs must demonstrate that they have the potential for an in-depth understanding of a given input text for them to be deployed as reading guides. One indicator of input text's comprehension is the ability to generate meaningful questions and answers from the input text. Formally, given a passage P , the model should be able to generate a set of questions {q 1 , • • • , q n } and their respective answers {q 1 , • • • , q n }. Both the set of questions and answers should be retrieved from the passage P . Perhaps one of the greatest powers of LLMs such as ChatGPT is the ability to respond to questions posed to it on-the-fly. However, it is unclear to what extent they can connect different aspect of input stories to generate both low cognitive questions and questions that require inference to be answered (i.e., high cognitive demand questions). Moreover,how will it exploit the vast amount of knowledge it acquired during training to boost its question asking ability? Further, will it be able to generate answers from the input text without being \"confused\" by its internal knowledge. We are interested in evaluating the ability of LLMs to ask meaningful questions that can be solely answered from the input story. We are also interested to evaluate how accurate LLMs are in answering the questions when solely relying on its understanding of the input text. To do this, we prompt a given LLM to generate questions based on an input story (see fig 1). The generated questions are automatically evaluated by comparing their semantic similarity to a set of baseline questions.\nFigure 1: Prompting LLM to generate questions from a given text input.\nTo evaluate the performance of a given LLM on question generation, we exploit the popularly used metrics in question generation task which include ROUGE-L [24] and BERTScore [25] and compare the semantic similarity of questions generated with the baseline questions. The similarity between LLM generated questions and reference questions is evaluated by concatenating the generated questions into one sentence and compare it with similarly concatenated reference questions ( see [10] which uses a similar approach)." }, { "figure_ref": [], "heading": "Question diversity", "publication_ref": [ "b25", "b26", "b27", "b29", "b30", "b30" ], "table_ref": [], "text": "To test the student's understanding of a given text being read, questions must be generated that cover nearly all the sections of the story. We are interested in evaluating the ability of ChatGPT and Bard to generate questions that are not biased towards a given section of the text. Concretely, we are seeking to quantify the variation of the questions being generated by LLMs. We hypothesize that the more diverse the questions are, the more exhaustive the questions cover the different topics in the input text. This will give us an idea on suitability of LLMs to generate questions that cover the whole content being read. In machine learning, several entropy reliant techniques have been proposed to evaluate diversity of dataset. Research in [26] proposes Inception Score (IS) to evaluate the synthetic samples generated by a Generative Adversarial model(GAN),(G) [27]. The intuition behind IS is that the conditional label distribution P (y | x) of images generated by GAN is projected to have low label entropy i.e., the generated images belong to few classes while entropy across the images should be high that is the marginal distribution p(y | x = G(z))dz should have high entropy. To capture this intuition, they suggest the metric in equation 1.\nexp(E x KL(p(z | x) p(y)))(1)\nAnother popular metric that provides a measure of diversity on synthetically generated samples is the Frechet Inception Distance (FID) score [28]. This is a metric that considers location and ordering of the data along the function space. Taking the activations of the penultimate layer of a given model to represent features of a given dataset x and considering only the first two moments i.e., the mean and covariance, the FID assumes that the coding units of the model f (x) follow a multi-dimensional Gaussian, therefore have maximum entropy distribution for a given mean and covariance. If the model f (x),generates embeddings with mean and covariance (m,C) given the synthetic data p(.) and (m w , C w ) for real data p w (.), then FID is defined as:\nd 2 ((m, C), (m w , C w ) = ||m -m w || 2 2 + T r(C + c w -2(CC w ) 1/2 )(2)\nOther metrics that have been proposed to evaluate diversity, include precision and recall metrics [29] [30]. One major problem with these metrics is that they assume existence of reference samples where the generated samples can be compared to [30]. In our case, we seek to evaluate the diversity of the questions generated without comparing to any reference questions. To achieve this, we use Vendi score (VS), a metric proposed for diversity evaluation in the absence of reference data. VS is defined as\nV S(X) = exp(- s i λ i logλ i )(3)\nHere\nX = {x i , • • • , x n } is the input data whose diversity is to be evaluated. λ i , i = {1, • • • , n} are the eigenvalues of a positive semidefinite matrix K/n whose entries are K ij = k(x i , x j )\nwhere k is a positive semidefinite similarity function with k(x, x) = 1 for all x. This metric which is like effective rank [31] seeks to quantify the geometric transformation induced by performing a linear mapping of a vector x from a vector space R n to R m by a matrix A i.e Ax. Normally, the number of dimensions retained by a linear transformation Ax is captured by the rank of the matrix A. The rank however is silent on the shape induced by the transformation. Effective rank introduced by [31] seeks to capture the shape that results due to the linear mapping. Effective rank therefore can be used to capture the spread of data hence ideal to measure diversity. To compute the diversity of questions generated by the two LLMs, we execute the following steps:\n1. Prompt the LLM to generate a set of questions Q given an input text." }, { "figure_ref": [], "heading": "Replicate the set", "publication_ref": [], "table_ref": [], "text": "Q 1 = {q 1 , • • • , q n } to get a copy of the questions Q 2 = {q 1 , • • • , q n }.\nDesignate Q 1 as the reference set and Q 2 as the candidate set.\n3. Pass the set Q 1 and Q 2 through the BERTScore3 to extract the cosine similarity matrix K.\n4. Use the VS package 4 to extract the VS score diversity value.\n5. Compare the VS score diversity value to human generated diversity score" }, { "figure_ref": [], "heading": "Human generated diversity score", "publication_ref": [], "table_ref": [], "text": "We engage independent human evaluators to annotate each reference question by attaching a sub-topic they think a given question is addressing in each storybook. We then aggregate all the subtopics generated by the human-annotators. We adopt an annotation by majority. We calculate diversity score of all the storybooks by computing the average number of all the sub-topics generated from all the storybooks' questions i.e., average diversity= N B i.e., total number of subtopics generated N divided by total number of books B. Intuitively average diversity represents the number of different sub-topics per storybook." }, { "figure_ref": [], "heading": "Ability to generate questions that differ in difficulty", "publication_ref": [ "b10", "b10", "b10", "b10", "b20" ], "table_ref": [], "text": "On top of generating questions that cover the whole content, it is desirable for LLMs to generate wide range of questions from low to high cognitive challenge questions. This will make students to answer questions that address all the cognitive domains. Low cognitive question mostly requires short answers which require affirmation or not (e.g. \"Did the queen have golden hair ?\"). Conversely high cognitive challenge questions require explanations, evaluations and some speculations on the extension of text [11](e.g., \"Why did the king's advisors fail to find the right wife for him ?\"). The purpose of low cognitive challenge questions is to evaluate the basic understanding of the text by the students and ease them into the study interaction [11]. However, they have the potential of promoting over-dominance of teachers . On the other hand, high cognitive questions foster greater engagement of the students, generate inferential responses and promote better comprehension of the reading material [11]. In [11], the two categories of questions are differentiated by exploiting the syntactic structure of the question. The high cognitive challenge questions are signalled by wh-word i.e., questions that use words such as what, why, how, or when. These questions are also composed of a continuum of high to low challenge questions. Specifically, wh-pronoun and wh-determinat questions that start with who, whom, whoever, what, which, whose, whichever and whatever require low challenge literal responses. However, the wh-adverb questions such how, why, where, and when are more challenging since they require more abstract and inferential responses involving explanation of causation and evaluation (e.g., \"Why was the king sad?\"; \"How did the king's daughter receive the news of her marriage?\"). The low cognitive challenge questions are generally non wh-word questions. It has been suggested in [21][32] that teachers should generally seek to ask high challenge questions as opposed to low challenge questions whenever possible. In our case we seek to establish the types of questions preferred by the two LLMs based on their level of challenge. To evaluate this, we adopt three categories of questions i.e., confirmative, explicit and implicit. Explicit are non-confirmative questions that require low challenge literal responses i.e., where answers can be retrieved from the text without much inference while implicit are questions that require inferential responses. To evaluate the type of questions generated, we employ the following steps:\n1. Given a paragraph of text we prompt the LLM to generate questions based on the text. 2. We provide the questions generated to human evaluators and ask them to read the text and answer the questions. 3. Human evaluators then annotate each question whether it is confirmative , explicit and implicit. 4. For each question, we select the most popular annotation i.e., annotation where most evaluators agree. 5. We compute percentages of each category." }, { "figure_ref": [], "heading": "Ability to recommend section of text", "publication_ref": [ "b32" ], "table_ref": [], "text": "Based on the responses to the teacher's questions, the teacher can detect students' weaknesses and their demands [33]. While it was difficult to design an evaluation on LLMs that can uncover students' weaknesses based on their responses, we resorted to evaluate their ability to recommend part of text where the student needs to re-read based on the responses provided to the questions. Basically, we evaluate the ability of a LLM to detect part of the text that the student did not understand. This we believe plays some part in diagnosing student's need. To perform our evaluation, we execute the following steps:\n1. We pass a three-paragraph text to a large language model and prompt it to generate questions from the text. 2. We annotate the generated questions based on the paragraph in which they were extracted from i.e < p i , q i > where p i is the paragraph i = 1, 2, 3 and q i are the questions i = 1, • • • , n, linked to paragraph p i . 3. We prompt the large language model to answer all the questions generated. We then deliberately alter a set of answers belonging to questions from a given paragraph to be wrong. All the answers from other two paragraphs remain as generated by the LLM. 4. We then prompt LLM to evaluate all the answers( for all the questions) generated in the previous step.\n5. We prompt LLMs to suggest the section of the text that the student did not comprehend based on the responses in the previous step.\n6. We compute the BERTScore between the recommended text and the paragraph which had all question altered to be wrong.\n4 Experimental setup" }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b8" ], "table_ref": [], "text": "We use the FairytaleQA dataset [9] to perform evaluation. The dataset contains 278 annotated storybooks which is composed of 232 storybooks for training a given question generation model, 23 books for testing and 23 for validation. Each book contains multiple paragraphs. Each paragraph of a book is annotated with several educational question-answer pairs. The questions have different types of annotation with our main concern being annotations linked to the difficulty attached to answering the questions. The question difficulty is annotated as either implicit (high challenge questions) or explicit (low challenge questions). The dataset covers storybooks for children between kindergarten and eighth grade." }, { "figure_ref": [], "heading": "Baseline models", "publication_ref": [ "b8", "b9", "b9", "b9" ], "table_ref": [], "text": "For question generation, we compare the performance of ChatGPT and Bard with selected models that were specifically developed for questions generations and were evaluated in FairytaleQA dataset. The models include: QAG [9]: The model relies on semantic role labelling to identify entities and events. The identified entities and events are then used to generate questions. We use the results reported in [10]. The questions used in evaluation include top 10/5/3/2/1 generated questions by QAG, labeled here as (top 10), QAG (top 5), QAG (top 3), QAG (top 1) respectively. E2E: This model is proposed in [10] where one BART-large model is trained on FairytaleQA dataset to generate questions.\nBERT based model: [10]. This model adapts and trains BERT to learn question type distribution.\nUsing the question type distribution as a control signal, they then train BART summarization model using FairytaleQA dataset question-answer annotation to learn the event-centric summary generation task." }, { "figure_ref": [], "heading": "Human evaluators", "publication_ref": [], "table_ref": [], "text": "Human evaluators used in the study were recruited through post-graduate social media page in Kenya.\nAn advert was posted on the page where 109 Msc. and PhD students responded positively indicating their willingness to participate. Out of these we selected 36 students who are doing research in various topics in NLP. Each student was paid $30 once the annotation task was completed. We conducted one day online training via zoom on the annotation process." }, { "figure_ref": [], "heading": "Quality of questions generated", "publication_ref": [ "b10" ], "table_ref": [ "tab_0" ], "text": "The results of the similarity of the questions generated by Bard and ChatGPT as compared to FairytaleQA dataset human annotated questions are shown in table 2. We only used the 46 storybooks which are contained in the test and validation set. This is to enable direct comparison with baseline models which also use the two sets. Table 2 also shows the performance of the baseline models.\nBased on the results, both ChatGPT and Bard register a slight performance advantage in their fmeasure values. The lack of significant advantage of LLMs over the baseline models is surprising given the high quality of questions they generated based on human evaluation. We hypothesise that the evaluation based on matching of similar tokens (used in ROUGE-L ) may not be ideal for this set-up since both ChatGPT and Bard were trained using very large text datasets that have extensive vocabularies, therefore they have a wide space of vocabularies to use while asking a question. Hence some of their vocabulary's choices may not be present in the reference questions. Table 1 shows some examples where the questions are semantically similar, but Rouge-L values are low. Further unlike baseline models, ChatGPT and Bard were not trained exclusively on FairytaleQA dataset question-answer pair, hence their style of asking questions may be significantly different from the reference questions. However, the superiority of LLMs is demonstrated when using BERTScore Table 1: Sample questions where ChatGPT uses diverse vocabulary.\nReference Questions ChatGPT generated question Precision Recall f-measure What kind of hair did the wife have ?\nWhat was the Queen's hair color? 0.2857 0.2500 0.2667 Why did the councillors say the king had to marry again?\nWhy was the King advised to re-marry ? 0.4545 0.62500 0.5263 Who did the king's wife send for when she felt that she would soon die Who did the Queen send for when she fell sick ? 0.4375 0.700 0.5384 metric. Here, they outperform the baseline models in precision, recall and f-measure values. Their superior performance demonstrates that LLMs are able to generate meaningful educational questions that mimic how a human-teacher would ask questions. The FairytaleQA dataset has 23 books to be used for model testing and 23 for validation. We use these 46 books to evaluate if ChatGPT and Bard can cover all the subtopics of a storybook while asking questions. To do this we use the evaluation criteria described in section 3.2. Question-topic diversity is evaluated on per book basis. For a given book, we generate questions by passing one or merged paragraphs and a prompt into a large language model. We iterate through all the paragraphs of the book and aggregate generated questions into a set Q. We then replicate the questions to get another set Q . BERTScore is then exploited to generate the similarity matrix K. The established matrix K is used to compute VS value for that book. To get an idea of how a given large language model generates diverse questions, we average the VS values over the 46 books. We also engage human annotators to annotate FairytaleQA reference questions based on the 46 books. The annotators are supposed to label each question with the sub-topic that the question addresses. We do not restrict the possible topics but allow annotators to come up with their own based on reading the storybooks excerpts and the questions. Annotations are adopted by majority. From human evaluators, we generated an average diversity score of 66.8 from the 46 excerpts of the storybooks. The question-topic diversity results are shown in table 3,4 and 5. We increase the input text by varying the number of paragraphs from 1 to 3. From the results in table1, when one paragraph is used as input, taking human annotation as the baseline, ChatGPT generates questions that cover slightly above 61 sub-topics in each storybook. This represents 91.9% of the total sub-topics. Bard large language model covers 64 sub-topics per given storybook. This represents 96.1% of the subtopics. In general, both LLMs generate questions that exhibit high diversity when compared to human generated diversity. However, as the size of input increases from 1 to 2 merged paragraphs, the diversity score of both LLMs drop to 54.4% and 56.2% for ChatGPT and Bard respectively. A further increase of input to 3 merged paragraphs reduces the diversity score to 49.4% and 48.2% for ChatGPT and Bard respectively. This is an indication LLMs are still limited on the amount of content that they can effectively handle. [11] confirms that the current trend of human teachers is to ask more low challenge questions as compared to high cognitive questions, LLMs significantly over-generate low challenge questions as compared to the baseline. Therefore, there is need to moderate the questions to reflect acceptable human-teachers way of asking questions. " }, { "figure_ref": [], "heading": "Text recommendation", "publication_ref": [], "table_ref": [], "text": "Here, we report the results of text recommendation of ChatGPT and Bard based on its ability to evaluate the responses and select part of text that the student did not understand. Both language models perform highly in text recommendation. This is an indication that they have the ability to summarize the student's responses and extract section of the story that the student needs to re-read. " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In our evaluation, we used a dataset that has storybooks that cover children from Kindergarten to eighth grade. The evaluation needs to be extended to storybooks covering more advanced levels of students. This will evaluate the ability of LLMs to generate high levels questions no-matter the cognitive level of the input text. A good part of the study relied on restricted number of human evaluators. While we ensured that we eliminated detected biases, the study can be replicated by increasing the number of human evaluators and choosing a more diverse population in-terms of location and level of education. There is also an opportunity to re-look at the text semantic similarity comparison metrics where one model is able to generate sentences based of a large sample space as compared to restricted vocabulary of the reference text. Learning is a complex process that is influenced by many parameters, therefore the use of non-human teachers on students needs to be thoroughly investigated before deploying LLMs based tools." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The results presented in this paper demonstrates that large language models have made significant progress in text comprehension and have the potential of being exploited by teachers as tools to assist during guided reading process. However, further validation of the results is needed by evaluating LLMs using diverse datasets, performing an in-depth analysis of the type of questions generated and how they directly correlate with human-teacher questions. Its social impact on teachers and students needs to be evaluated before they are deployed as reading guide assisting tools." } ]
This paper looks at the ability of large language models to participate in educational guided reading. We specifically, evaluate their ability to generate meaningful questions from the input text, generate diverse questions both in terms of content coverage and difficulty of the questions and evaluate their ability to recommend part of the text that a student should re-read based on the student's responses to the questions. Based on our evaluation of ChatGPT and Bard, we report that, 1) Large language models are able to generate high quality meaningful questions that have high correlation with the input text, 2) They generate diverse question that cover most topics in the input text even though this ability is significantly degraded as the input text increases, 3)The large language models are able to generate both low and high cognitive questions even though they are significantly biased toward low cognitive question, 4) They are able to effectively summarize responses and extract a portion of text that should be re-read.
Are Large Language Models Fit For Guided Reading?
[ { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Evaluation of quality of questions generated by ChatGPT and Bard.", "figure_data": "Rouge-L results on validation/test datasetModelPrecisionRecallf-measureE2E16.32/15.76 36.21/35.89 20.29/19.73QAG(top-1)34.58/32.33 19.56/19.69 22.88/22.29QAG(top-2)28.45/26.58 30.51/30.34 26.76/25.67QAG(top-3)24.29/22.74 36.80/36.31 26.67/25.50QAG(top-5)20.38/19.25 43.45/43.04 25.55/24.53QAG(top-10)18.12/17.26 46.57/47.04 24.05/23.34BERT based model[10]33.49/37.50 37.50/31.54 31.81/30.58ChatGPT31.21/33.45 39.78/42.33 34.98/37.36Bard21.62/26.12 38.09/45.89 27.58/33.29BERTScore results on validation/test datasetE2E88.55/88.39 84.25/84.07 86.32/86.15QAG(top-1)85.99/86.23 87.76/87.70 86.84/86.94QAG(top-2)88.30/88.105/ 87.45/87.02 87.86/87.54QAG(top-3)88.66/88.46/ 86.63/86.29 87.61/87.34QAG(top-5)88.83/88.62 85.71/85.40 87.22/86.96QAG(top-10)88.73/88.48 85.03/84.72 86.81/86.54BERT based model[10]89.15/88.62 88.86/89.30 88.98/88.93ChatGPT96.92/96.31 95.03/96.01 95.96/96.15Bard97.12/93.31 95.43/96.34 96.27/94.806 Question diversity on topic coverage", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "sub-topic diversity evaluation of LLMs with a one paragraph input.", "figure_data": "Rouge-L results on validation/test datasetModelVSHuman Evaluators66.8ChatGPT61.4Bard64.2", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "sub-topic diversity evaluation of LLMs with two merged paragraphs as input.", "figure_data": "Rouge-L results on validation/test datasetModelVSHuman Evaluators66.8ChatGPT54.4Bard56.2", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "sub-topic diversity evaluation LLMs with three merged paragraphs as input.Based on the results in table 6, baseline questions contain 53.22 % low challenge questions( i.e., conformative and explicit questions) while ChatGPT and Bard generate 70.26% and 73.38% low challenge questions respectively. This is approximately a 20 % deviation from the baseline. While research in", "figure_data": "Rouge-L results on validation/test datasetModelVSHuman Evaluators66.8ChatGPT49.4Bard48.27 Question diversity based on difficulty results", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Cognitive level of questions asked by LLMs", "figure_data": "What Questions Do LLMs Ask?Question Source total questions confirmative explicit implicitBaseline203001121909ChatGPT30061092003894Bard3546972505947", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "LLMs' ability to recommend relevant text.", "figure_data": "BERTScore valuesLLM Precsion Recall f-measureChatGPT98.2299.198.65Bard97.398.797.99", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" } ]
Peter Ochieng
[ { "authors": "T B Brown; B Mann; N Ryder; M Subbiah; J Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D M Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "Advances in neural information processing systems", "ref_id": "b0", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "A Chowdhery; S Narang; J Devlin; M Bosma; G Mishra; A Roberts; P Barham; H W Chung; C Sutton; S Gehrmann; P Schuh; K Shi; S Tsvyashchenko; J Maynez; A Rao; P Barnes; Y Tay; N Shazeer; V Prabhakaran; E Reif; N Du; B Hutchinson; R Pope; J Bradbury; J Austin; M Isard; G Gur-Ari; P Yin; T Duke; A Levskaya; S Ghemawat; S Dev; H Michalewski; X Garcia; V Misra; K Robinson; L Fedus; D Zhou; D Ippolito; D Luan; H Lim; B Zoph; A Spiridonov; R Sepassi; D Dohan; S Agrawal; M Omernick; A M Dai; T S Pillai; M Pellat; A Lewkowycz; E Moreira; R Child; O Polozov; K Lee; Z Zhou; X Wang; B Saeta; M Diaz; O Firat; M Catasta; J Wei; K Meier-Hellstern; D Eck; J Dean; S Petrov; N Fiedel", "journal": "", "ref_id": "b1", "title": "Palm: Scaling language modeling with pathways", "year": "" }, { "authors": "R Taylor; M Kardas; G Cucurull; T Scialom; A Hartshorn; E Saravia; A Poulton; V Kerkez; R Stojnic", "journal": "", "ref_id": "b2", "title": "Galactica: A large language model for science", "year": "" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar; A Rodriguez; A Joulin; E Grave; G Lample", "journal": "", "ref_id": "b3", "title": "Llama: Open and efficient foundation language models", "year": "" }, { "authors": "S S Biswas", "journal": "Annals of Biomedical Engineering", "ref_id": "b4", "title": "Potential use of chat gpt in global warming", "year": "2023" }, { "authors": "M Hasnain", "journal": "Annals of Biomedical Engineering", "ref_id": "b5", "title": "Chatgpt applications and challenges in controlling monkey pox in pakistan", "year": "2023" }, { "authors": "B D Lund; T Wang", "journal": "Library Hi Tech News", "ref_id": "b6", "title": "Chatting about chatgpt: How may ai and gpt impact academia and libraries?", "year": "2023" }, { "authors": "E Kasneci; K Sessler; S K Uchemann; M Bannert; D Dementieva; F Fischer; U Gasser; G Groh; S G Unnemann; S Krusche; G Kutyniok; T Michaeli; C Nerdel; J U Pfeffer; O Poquet; M Sailer; A Schmidt; T Seidel; M Stadler; J Weller; J Kuhn; G Kasneci", "journal": "Learning and Individual Differences", "ref_id": "b7", "title": "Chatgpt for good? on opportunities and challenges of large language models for education", "year": "2023" }, { "authors": "B Yao; D Wang; T Wu; Z Zhang; T J ; -J Li; M Yu; Y Xu", "journal": "", "ref_id": "b8", "title": "It is ai's turn to ask humans a question: Question-answer pair generation for children's story books", "year": "" }, { "authors": "Z Zhao; Y Hou; D Wang; M Yu; C Liu; X Ma", "journal": "", "ref_id": "b9", "title": "Educational question generation of children storybooks via question type distribution learning and event-centric summarization", "year": "2022" }, { "authors": "L P Blything; A Hardie; K Cain", "journal": "Reading Research Quarterly", "ref_id": "b10", "title": "Question asking during reading comprehension instruction: A corpus study of how question type influences the linguistic complexity of primary school students' responses", "year": "2020" }, { "authors": "L Pan; W Lei; T.-S Chua; M.-Y Kan", "journal": "", "ref_id": "b11", "title": "Recent advances in neural question generation", "year": "2019" }, { "authors": "M Heilman", "journal": "", "ref_id": "b12", "title": "Automatic factual question generation from text", "year": "2011" }, { "authors": "Y Chali; S A Hasan", "journal": "", "ref_id": "b13", "title": "Towards automatic topical question generation", "year": "2012" }, { "authors": "D Bahdanau; K Cho; Y Bengio", "journal": "", "ref_id": "b14", "title": "Neural machine translation by jointly learning to align and translate", "year": "" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Łukasz Kaiser; I Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "Attention is all you need", "year": "2017" }, { "authors": "X Du; J Shao; C Cardie", "journal": "", "ref_id": "b16", "title": "Learning to ask: Neural question generation for reading comprehension", "year": "2017" }, { "authors": "N Duan; D Tang; P Chen; M Zhou", "journal": "", "ref_id": "b17", "title": "Question generation for question answering", "year": "2017" }, { "authors": "Q Zhou; N Yang; F Wei; C Tan; H Bao; M Zhou", "journal": "Springer", "ref_id": "b18", "title": "Neural question generation from text: A preliminary study", "year": "2017" }, { "authors": "V Harrison; M Walker", "journal": "", "ref_id": "b19", "title": "Neural generation of diverse questions using answer focus, contextual and linguistic features", "year": "2018" }, { "authors": "S Degener; J Berne", "journal": "Reading Teacher", "ref_id": "b20", "title": "Complex questions promote complex thinking", "year": "2017" }, { "authors": "M P Ford", "journal": "Capstone", "ref_id": "b21", "title": "Guided reading: What's new, and what's next", "year": "2015" }, { "authors": "I C Fountas; G S Pinnell", "journal": "ERIC", "ref_id": "b22", "title": "Guided reading: Good first teaching for all children", "year": "1996" }, { "authors": "C.-Y Lin", "journal": "", "ref_id": "b23", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "T Zhang; V Kishore; F Wu; K Q Weinberger; Y Artzi", "journal": "", "ref_id": "b24", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "T Salimans; I Goodfellow; W Zaremba; V Cheung; A Radford; X Chen", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Improved techniques for training gans", "year": "2016" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b26", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "" }, { "authors": "M S Sajjadi; O Bachem; M Lucic; O Bousquet; S Gelly", "journal": "Advances in neural information processing systems", "ref_id": "b28", "title": "Assessing generative models via precision and recall", "year": "2018" }, { "authors": "L Simon; R Webster; J Rabin", "journal": "", "ref_id": "b29", "title": "Revisiting precision and recall definition for generative model evaluation", "year": "" }, { "authors": "O Roy; M Vetterli", "journal": "IEEE", "ref_id": "b30", "title": "The effective rank: A measure of effective dimensionality", "year": "2007" }, { "authors": "T A Zucker; L M Justice; S B Piasta; J N Kaderavek", "journal": "Early Childhood Research Quarterly", "ref_id": "b31", "title": "Preschool teachers' literal and inferential questions and children's responses during whole-class shared reading", "year": "2010" }, { "authors": "M Habib", "journal": "Revista Romaneasca pentru Educatie Multidimensionala", "ref_id": "b32", "title": "Assessment of reading comprehension", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 246.6, 117.71, 257.4, 9.65 ], "formula_id": "formula_0", "formula_text": "exp(E x KL(p(z | x) p(y)))(1)" }, { "formula_coordinates": [ 4, 168.95, 235.4, 335.05, 12.69 ], "formula_id": "formula_1", "formula_text": "d 2 ((m, C), (m w , C w ) = ||m -m w || 2 2 + T r(C + c w -2(CC w ) 1/2 )(2)" }, { "formula_coordinates": [ 4, 243.65, 324.17, 260.35, 30.32 ], "formula_id": "formula_2", "formula_text": "V S(X) = exp(- s i λ i logλ i )(3)" }, { "formula_coordinates": [ 4, 108, 364.07, 396, 20.56 ], "formula_id": "formula_3", "formula_text": "X = {x i , • • • , x n } is the input data whose diversity is to be evaluated. λ i , i = {1, • • • , n} are the eigenvalues of a positive semidefinite matrix K/n whose entries are K ij = k(x i , x j )" }, { "formula_coordinates": [ 4, 215.55, 501.8, 290.19, 9.65 ], "formula_id": "formula_4", "formula_text": "Q 1 = {q 1 , • • • , q n } to get a copy of the questions Q 2 = {q 1 , • • • , q n }." } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b27", "b52", "b41", "b44", "b22", "b23", "b22", "b50", "b25", "b36", "b20", "b31", "b11", "b33", "b49", "b47", "b11" ], "table_ref": [], "text": "This paper re-visits Dennett's (1981) notion that philosophical discussion can benefit from the use of computational modelling. We do this by showing how recent criticisms of the dual-systems view of the mind (System-1 and System-2), can be clarified using the Common Model of Cognition to ground the discussion (Laird, Lebiere & Rosenbloom, 2017).\nThe terms System-1 and System-2 refer to a dualsystem model that ascribes distinct characteristics to what are thought to be opposing aspects of cognition (Wason & Evans, 1974;Stanovich, 1999;Strack & Deutsch, 2004;Kahneman, 2003Kahneman, , 2011)). System-1 is considered to be evolutionarily old and characterized as fast, associative, emotional, automatic, and not requiring working memory. System-2 is more evolutionarily recent and thought to be slow, declarative, rational, effortful, and relying on working memory. Kahneman (2003) referred to System-1 as \"intuitive\" and System-2 as \"rational\", thus linking them to higher level folk psychology concepts. The neural correlates of System-1 and System-2 have also been studied (e.g., Tsujii & Watanabe, 2009). System-1 and System-2 are often used in fields such as psychology, philosophy, neuroscience, and artificial intelligence as a means for ontologizing the functional properties of human cognition.\nRecently, however, this dual-system model has been criticized for lacking precision and conceptual clarity (Keren & Schul, 2009), leading to significant misconceptions (Pennycook et al., 2018;Houwer, 2019) and obscuring the dynamic complexities of psychological processes (Moors, 2016). One of the originators of dual-system theory stated that an important issue for future research is the problem that \"current theories are framed in general terms and are yet to be developed in terms of their specific computational architecture\" (Evans, 2003).\nFollowing Dennett (1981) we argue that a computational description is essential for clarifying high level, psychological characterizations such as System-1 and System-2. At the time, Dennett received significant pushback on his view. However, we argue that it was too early in the development of computational models to fully appreciate the pragmatic value of his position.\nIn the spirit of this endeavour, Proust (2013) has argued that a more precise computational definition is needed to understand the role of System-1 and System-2 in metacognition. Proust defined these systems in terms of informational typologies (System-1 non-conceptual; System-2 conceptual). Similarly, Thomson et al. (2015) argued that the expert use of heuristics (System-1) could be defined in terms of instance based learning in ACT-R. In fact, there are numerous ways that cognitive models and cognitive architectures can and have been mapped onto the System-1 and 2 distinction. For example, dual-process approaches to learning have been instantiated within the CLARION architecture, modelling the interaction between implicit and explicit processes (Sun, Terry & Slusarz, 2005). System-1 and 2 have also been instantiated directly into the LIDA architecture (Faghihi et al., 2014).\nWhile it is useful to work on modelling different aspects of System-1 and 2, the larger question is, in what sense is System-1 and 2 a valid construct? What are the necessary and sufficient conditions that precisely define System-1 and 2? And what are the cognitive and neural alignments to System-1 and System-2 (Evans, 2003)?" }, { "figure_ref": [], "heading": "The Common Model", "publication_ref": [ "b27", "b43", "b46" ], "table_ref": [], "text": "The Common Model of Cognition, originally the 'Standard Model' (Laird et al., 2017) is a consensus architecture that integrates decades of research on how human cognition functions computationally. The Common Model represents a convergence across cognitive architectures regarding the modules and components necessary for biological and artificial intelligence. These modules are correlated with their associated brain regions and verified through neuroscience (Steine-Hanson et al., 2018). Neural evidence strongly supports the Common Model as a leading candidate for modeling the functional organization of the human brain (Stocco et al., 2021).\nThe computational processes of the Common Model are categorized into five components -working memory, perception, action, declarative memory, and procedural memory. Procedural memory is described as a production system which contains units called production rules (or 'productions'). The production system interacts with different modules through working memory represented as buffers. While these components are implemented differently among Common Model-type architectures, they describe a common functionality across implementations." }, { "figure_ref": [], "heading": "System-1", "publication_ref": [ "b23", "b11", "b44", "b16", "b42", "b23", "b11", "b22", "b39", "b4", "b29", "b30", "b40", "b21" ], "table_ref": [], "text": "Researchers generally describe System-1 by using a constellation of characteristics. Specifically, System-1 is described as fast, associative, emotional, automatic, and not requiring working memory (Kahneman, 2011;Evans, 2003;Strack & Deutsch, 2004). System-1 is considered to be evolutionary old and present within animals. It is composed of biologically programmed instinctive behaviours and operations that contain innate modules of the kind put forth by Fodor (1983). System-1 is not comprised of a single system but is an assembly of sub-systems that are largely autonomous (Stanovich & West, 2000). Automatic operations are usually described as involving minimal or no effort, and without a sense of voluntary control (Kahneman, 2011). Researchers generally agree that System-1 is made of parallel and autonomous subsystems that output only their final product into consciousness (often as affect), which then influences human decision-making (Evans, 2003). This is one reason the system has been called \"intuitive\" (Kahneman, 2003).\nSystem-1 relies on automatic processes and shortcut strategies called heuristics -problem solving operations or rule of thumb strategies (Simon, 1955). The nature of System-1 is often portrayed as non symbolic, and has been associated with reinforcement learning (Barto et al., 1981) and neural networks (McLeod, 1998). Affect is integral to System-1 processes (Mitchell, 2011). Affect based heuristics result from an individual evaluating a stimulus based on their likes and dislikes. In more complex decisionmaking, it occurs when a choice is either weighed as a net positive (with more benefits than costs), or as net negative (less benefits than costs) (Slovic et al., 2004).\nSystem-1 can produce what are called \"cognitive illusions\" that can be harmful if left unchecked. For example, the 'illusion of validity' is a cognitive bias in which individuals overestimate their ability to accurately predict a data set, particularly when it shows a consistent pattern (Kahneman & Tversky, 1973). Biases and errors of System-1 operate automatically and cannot be turned off at will. However, they can be offset by using System-2 to monitor System-1 and correct it." }, { "figure_ref": [], "heading": "System-1 in the Common Model", "publication_ref": [ "b0", "b45", "b0", "b49" ], "table_ref": [], "text": "System-1 can be associated with the production system which is the computational instantiation of procedural memory in the Common Model (Singley & Anderson, 1989). Procedural knowledge is represented as production rules (\"productions\") which are modeled after computer program instructions in the form of condition-action pairings. They specify a condition that, when met, will perform a prescribed action. A production can also be thought of as an if-then rule (Anderson, 1993). If it matches a condition, then it fires an action. Production rules transform information to resolve problems or complete a task, and are responsible for state-changes within the system. Production rules fire automatically off of conditions in working memory buffers. Their automaticity is due to the fact that they are triggered without secondary evaluation. Neurologically, production rules correlate with the 50ms decision timing in the basal ganglia (Stocco, Lebiere, & Anderson, 2010). The production system can enact reinforcement learning in the form of utility learning, where faster or more useful productions are rewarded and are more likely to be used later (Anderson, 1993). In a similar way, problem solving heuristics can be implemented as production rules (Payne et al., 1988).\nThe Common Model production system has many of the properties associated with System-1 such as being fast, automatic, implicit, able to implement heuristics, and reinforcement learning. However, the Common Model declarative memory system also has some of the properties associated with System-1. Specifically, associative learning and the ability to implement heuristics that leverage associative learning (Thomson et al., 2015). Here, it is important to understand that the Common Model declarative memory cannot operate without the appropriate productions firing, and without the use of buffers (working memory). Therefore, from a Common Model perspective, System-1 minimally involves productions firing based on buffer conditions, but can also involve productions directing declarative memory retrieval, which also relies on buffers. Based on this, System-1 cannot be defined as being uniquely aligned with either declarative or procedural memory. System-1 activity must involve production rules and buffers, and can also involve declarative knowledge." }, { "figure_ref": [], "heading": "System-2", "publication_ref": [ "b23", "b44", "b17", "b9", "b19", "b50", "b17", "b11", "b22" ], "table_ref": [], "text": "Researchers generally view System-2 as a collection of cognitive properties, characterized as slow, propositional, rational, effortful, and requiring working memory (Kahneman, 2011;Strack & Deutsch, 2004;Frankish 2010). System-2 involves explicit propositional knowledge that is used to guide decisionmaking (Epstein & Pacini, 1999). Propositional knowledge is associated with relational knowledge (Halford, Wilson, & Phillips, 2010) which represents entities (e.g.: John and Mary), the relation between them (e.g.: loves) and the role of those entities in that relation (e.g.: John loves Mary). Higher level rationality in System-2 is also said to be epistemically committed to logical standards (Tsujii & Watanabe, 2009). System-2 processes are associated with the subjective experiences of agency, choice, and effortful concentration (Frankish, 2010). The term \"effortful\" encompasses the intentional, conscious, and more strenuous use of knowledge in complex thinking. Higher level rationality is considered responsible for human-like reasoning, allowing for hypothetical thinking, long-range planning, and is correlated with overall measures of general intelligence (Evans, 2003).\nResearchers have studied various ways in which System-2's effortful processes can intervene in System-1 automatic operations (Kahneman, 2003). Ordinarily, an individual does not need to invoke System-2 unless they notice that System-1 automaticity is insufficient or risky. System-2 can intervene when the anticipated System-1 output would infringe on explicit rules or potentially cause harm. For example, a scientist early in their experiment may notice that they are experiencing a feeling of certainty. System-2 can instruct them to resist jumping to conclusions and to gather more data. In this sense, System-2 can monitor System-1 and override it by applying conceptual rules." }, { "figure_ref": [], "heading": "System-2 in the Common Model", "publication_ref": [ "b28", "b51", "b23" ], "table_ref": [], "text": "Laird (2020) draws on Newell (1990), Legg and Hutter (2007) and others to equate rationality with intelligence, where \"an agent uses its available knowledge to select the best action(s) to achieve its goal(s).\" Newell's Rationality Principle involves the assumption that problem-solving occurs in a problem space, where knowledge is used to navigate toward a desired end. As Newell puts it, \"an agent will use the knowledge it has of its environment to achieve its goals\" (1982, p. 17). The prioritizing of knowledge in decision-making corresponds with the principles of classical computation involving symbol transformation and manipulation.\nThe Common Model architecture fundamentally distinguishes between declarative memory and procedural memory. This maps roughly onto the distinction between explicit and implicit knowledgewhere declarative knowledge can be made explicitly accessible in working memory, procedural knowledge operates outside of working memory and is inaccessible. However, declarative knowledge can also function in an implicit way. The presence of something within working memory does not necessarily mean it will be consciously accessed (Wallach & Lebiere, 2003).\nHigher level reasoning involves the retrieval of 'chunks', representing propositional information, into buffers (working memory) to assist in calculations and problem-solving operations. This appears to correlate with what System-2 researchers describe as \"effortful\", as this requires more computational resources (i.e., more productions) to manage the flow of information through limited space in working memory (buffers). As Kahneman points out, System-1 can involve knowledge of simple processes such as 2+2=4. However, more complex operations such as 17x16 require calculations that are effortful, a characteristic that is considered distinctive of System-2 (Kahneman, 2011).\nEffort, within the Common Model, involves greater computational resources being allocated toward a task. Moreover, the retrieval and processing of declarative knowledge requires more steps and more processing time when compared to the firing of productions alone. This longer retrieval and processing time can also account for the characteristic of \"slow\" associated with System-2." }, { "figure_ref": [], "heading": "Emotion in System-1 and 2", "publication_ref": [ "b23", "b55", "b12" ], "table_ref": [], "text": "Emotion and affect plays a vital role in the distinction between System-1 and System-2 processes (Chaiken & Trope, 1999;Kahneman, 2011). Decisions in System-1 are largely motivated by an individual's implicit association of a stimulus with an emotion or affect (feelings that something is bad or good). Behavior motivated by emotion or affect is faster, more automatic, and less cognitively expensive. One evolutionary advantage of these processes is that they allow for split-second reactions that can be crucial for avoiding predators, catching food, and interacting with complex and uncertain environments.\nEmotions can bias or overwhelm purely rational decision processes, but they can also be overridden by System-2 formal rules. While emotions and affect have historically been cast as the antithesis of reason, their importance in decision-making is being increasingly investigated by researchers who give affect a primary role in motivating decisions (e.g., Zajonc, 1980;Barrett & Salovey, 2002). Some maintain that rationality itself is not possible without emotion, as any instrumentally rational system must necessarily pursues desires (Evans, 2012)." }, { "figure_ref": [], "heading": "Emotion in the Common Model", "publication_ref": [ "b54", "b34", "b18", "b53" ], "table_ref": [], "text": "Feelings and emotions have strong effects on human performance and decision-making. However, there is considerable disagreement over what feelings and emotions are and how they can be incorporated into cognitive models. However, while philosophical explanations of affect have been debated, functional accounts of emotions and feelings within cognitive models have been built. Emotions have been modeled as amygdala states (West & Young, 2017), and somatic markers as emotional tags attached to units of information (Domasio, 1994). In Sigma models, lowlevel appraisals have been modeled as architectural selfreflections on factors such as expectedness, familiarity, and desirability (Rosenbloom, et al., 2015). Core affect theory has been modeled in ACT-R to demonstrate how an agent may prioritize information using emotional valuation (Juvina, Larue & Hough, 2018). Also, feelings have also been modelled by treating them as non propositional representations in buffers or \"metadata\" (West & Conway-Smith, 2019).\nOverall, the question of how to model emotion in the Common Model remains unresolved. However, as indicated in the research above, emotion has multiple routes for interacting with cognition in the Common Model." }, { "figure_ref": [], "heading": "Effort in System-1 and 2", "publication_ref": [ "b41", "b23", "b10", "b14", "b15", "b37" ], "table_ref": [], "text": "The concept of \"effort\" makes up a significant and confusing dimension of System-1 and System-2. While it is mainly associated with System-2 rationality, a precise definition of \"effort\" remains elusive and is largely implicit in discussions of System-1 and 2. Because System-2 is considered to have a low processing capacity, its operations are associated with greater effort and a de-prioritizing of irrelevant stimuli (Stanovich, 1999).\nEffort can be associated with complex calculations in System-2 to the extent that it taxes working memory. Alternatively, effort can be associated with System-2's capacity to overrule or suppress automatic processes in System-1 (Kahneman, 2011). For example, various System-1 biases (such as the \"belief bias\") can be subdued by instructing people to make a significant effort to reason deductively (Evans, 1983). The application of formal rules to \"control\" cognitive processes is also called metacognition -the monitoring and control of cognition (Flavell, 1979;Fletcher & Carruthers, 2012). Researchers have interpreted metacognition through a System-1 and System-2 framework (Arango-Muñoz, 2011; Shea et al., 2014). System-1 metacognition is thought to be implicit, automatic, affect-driven, and not requiring working memory. System-2 metacognition is considered explicit, rule-based, and relying on working memory.\nWhile the concept of \"effort\" is considered to be the monopoly of System-2, a computational approach suggests that effort is a continuum -with low effort cognitive phenomena being associated with System-1, and high effort cognitive phenomena being associated with System-2." }, { "figure_ref": [], "heading": "Effort in the Common Model", "publication_ref": [ "b33", "b8", "b38" ], "table_ref": [], "text": "The Common Model helps to elucidate how \"effort\" can be present in System-1 type operations in the absence of other System-2 characteristics. While neither dual-system theories nor the Common Model contain a c l e a r d e f i n i t i o n o f \" e ff o r t \" , c o m p u t a t i o n a l characteristics associated with effort can be necessary to System-1. For instance, \"effort\" is often associated with the intense use of working memory. However, the Common Model requires working memory (along with its processing limitations) for both System-1 and System-2 type operations. There is no reason why System-1 should necessarily use less working memory than System-2 in the Common Model. Instead, it would depend on the task duration and intensity.\nSystem-1 and System-2 metacognition can also be clarified by importing Proust's (2013) more precise account. Proust attempted to elucidate these two systems by claiming that they should be distinguished by their distinctive informational formats (System-1 non-conceptual; System-2 conceptual). In this sense, System-1 metacognition can exert effortful control while simultaneously being implicit and non conceptual. For example, consider a graduate student attending a conference while struggling not to fall asleep. An example of System-1 metacognition would involve the context implicitly prompting them to feel nervous, noticing their own fatigue, and then attempting to stay awake. This effort is context-driven, implicit, non conceptual, and effortful. Alternatively, System-2 metacognition can exert effort by way of explicit concepts, as in the case of a tired conference-attendee repeating the verbal instruction \"try to focus\". Either of these scenarios could be modelled using the Common Model, and to reiterate, there is little reason why System-1 should require less effort.\nAnother way to think about effort is in terms of the expense of neural energy. In this sense, effort can be viewed as the result of greater caloric expenditure in neurons. The neural and computational dynamics responsible for the effortful control of internal states have shown to be sensitive to performance incentives (Egger et al., 2019). Research also indicates that the allocation of effort as cognitive control is dependent on whether a goal's reward outweighs its costs (Shenhav, et al., 2017). Both of these relate to reinforcement learning, which is associated with System-1.\nExamining this question through the Common Model suggests that \"effort\" is not traditionally well defined, nor is it the sole privy of System-2. Rather, effort can be involved in processes characteristic of both System-1 and System-2." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The Common Model sheds light on the specific mechanisms that give rise to the general traits associated with System-1 and System-2. Interpreting System-1 and System-2 within the Common Model results in our concluding that the \"alignment assumption\" (that the two systems are opposites) is a false dichotomy. There are, of course, cases where all properties of System-1 and System-2 are cleanly bifurcated on either side. However, between these two extremities lies a spectrum where the characteristics are mixed. Few, if any, of these properties are 'necessary and sufficient' to be sharply distinctive of either. Evidence for this is as follows: 1. System-2 is grounded in System-1. While System-1 depends on procedural memory, so too does System-2. System-2 cannot operate separately due to the architectural constraints of the Common Model. Even if a System-2 process were primarily driven by declarative knowledge, it would still require System-1 procedural knowledge to be retrieved and acted upon. 2. System-1 and System-2 characteristics are often mixed as they routinely act together. System-2 goaldirected rationality often requires affect in the from of a desired end. Also, System-2 rationality is subject to System-1 affective biases. 3. Both System-1 and System-2 require working memory. While conventional views claim that System-1 does not require working memory, the constraints of the Common Model necessitate it. Production rules (procedural knowledge) are activated by the content of buffers (working memory) and hence are required by both systems. 4. Effort can be directed toward both System-2 rationality and System-1 metacognitive control. The effortful allocation of cognitive resources in System-1 can be based on an implicit cost-benefit analysis.\nRegardless of whether one adopts the Common Model architecture, researchers should be cautious of assuming the System-1 and System-2 dichotomy within their work. The framework is far from settled and deep issues continue to be unresolved. Questions remain as to whether System-1 and System-2 constitute an ontology or a convenient epistemology.\nSince before Descartes, substance dualism has continually been reimagined as mind and soul, reason and emotions, and opposing modes of thought. These have been expressions of the human species' attempt to make sense of our own minds, its processes, and how this understanding maps onto our personal experience. Clearly, System-1 and System-2 captures something deeply intuitive about the phenomenology of cognition. However, as we have discussed Kahneman's System-1 biases it may be worth asking -is System-2 a System-1 illusion? That is, do we assume the existence of System-2 simply because we so often act as if it exists?\nBy situating System-1 and System-2 within the Common Model of Cognition, we have attempted to bring light to this subject by clarifying its underlying mechanisms, misconceptions, and the base components needed for future research." } ]
There have been increasing challenges to dual-system descriptions of System-1 and System-2, critiquing them as imprecise and fostering misconceptions. We address these issues here by way of Dennett's appeal to use computational thinking as an analytical tool, specifically we employ the Common Model of Cognition. Results show that the characteristics thought to be distinctive of System-1 and System-2 instead form a spectrum of cognitive properties. By grounding System-1 and System-2 in the Common Model we aim to clarify their underlying mechanisms, persisting misconceptions, and implications for metacognition.
Clarifying System 1 & 2 through the Common Model of Cognition
[]
Brendan Conway-Smith; Robert L West
[ { "authors": "J R Anderson", "journal": "", "ref_id": "b0", "title": "Knowledge representation. Rules of the mind", "year": "1993" }, { "authors": "J R Anderson; C Lebiere", "journal": "Behavioral and brain Sciences", "ref_id": "b1", "title": "The Newell test for a theory of cognition", "year": "2003" }, { "authors": "S Arango-Muñoz", "journal": "Philosophia", "ref_id": "b2", "title": "Two levels of metacognition", "year": "2011" }, { "authors": "", "journal": "Guilford Press", "ref_id": "b3", "title": "The wisdom in feeling: Psychological processes in emotional intelligence", "year": "2002" }, { "authors": "A G Barto; R S Sutton; P S Brouwer", "journal": "Biological cybernetics", "ref_id": "b4", "title": "Associative search network: A reinforcement learning associative memory", "year": "1981" }, { "authors": "", "journal": "Guilford Press", "ref_id": "b5", "title": "Dual-process theories in social psychology", "year": "1999" }, { "authors": "A R Damasio", "journal": "Scientific American", "ref_id": "b6", "title": "Descartes' error and the future of human life", "year": "1994" }, { "authors": "D C Dennett", "journal": "MIT press", "ref_id": "b7", "title": "Brainstorms: Philosophical essays on mind and psychology", "year": "2017" }, { "authors": "S W Egger; E D Remington; C J Chang; M Jazayeri", "journal": "Nature neuroscience", "ref_id": "b8", "title": "Internal models of sensorimotor integration regulate cortical dynamics", "year": "2019" }, { "authors": "S Epstein; R Pacini", "journal": "", "ref_id": "b9", "title": "Some basic issues regarding dual-process theories from the perspective of cognitive-experiential self-theory", "year": "1999" }, { "authors": "J Evans; B T St", "journal": "Mem. Cogn", "ref_id": "b10", "title": "On the conflict between logic and belief in syllogistic reasoning", "year": "1983" }, { "authors": "J S B Evans", "journal": "Trends in cognitive sciences", "ref_id": "b11", "title": "In two minds: dual-process accounts of reasoning", "year": "2003" }, { "authors": "J S B Evans", "journal": "Mind & Society", "ref_id": "b12", "title": "Spot the difference: distinguishing between two kinds of processing", "year": "2012" }, { "authors": "U Faghihi; C Estey; R Mccall; S Franklin", "journal": "Biologically Inspired Cognitive Architectures", "ref_id": "b13", "title": "A cognitive model fleshes out Kahneman's fast and slow systems", "year": "2015" }, { "authors": "J H Flavell", "journal": "American psychologist", "ref_id": "b14", "title": "Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry", "year": "1979" }, { "authors": "L Fletcher; P Carruthers", "journal": "Philosophical Transactions of the Royal Society B: Biological Sciences", "ref_id": "b15", "title": "Metacognition and reasoning", "year": "1594" }, { "authors": "J Fodor", "journal": "Crowell", "ref_id": "b16", "title": "The Modularity of Mind", "year": "1983" }, { "authors": "K Frankish", "journal": "Philosophy Compass", "ref_id": "b17", "title": "Dual-Process and Dual-System Theories of Reasoning: Dual-Process and Dual-System Theories of Reasoning", "year": "2010" }, { "authors": "I Juvina; O Larue; A Hough", "journal": "Cognitive Systems Research", "ref_id": "b18", "title": "Modeling valuation and core affect in a cognitive architecture: The impact of valence and arousal on memory and decision-making", "year": "2018" }, { "authors": "G S Halford; W H Wilson; S Phillips", "journal": "Trends in cognitive sciences", "ref_id": "b19", "title": "Relational knowledge: The foundation of higher cognition", "year": "2010" }, { "authors": "J Houwer", "journal": "Experimental Psychology", "ref_id": "b20", "title": "Moving beyond System 1 and System 2: Conditioning, implicit evaluation, and habitual responding might be mediated by relational knowledge", "year": "2019" }, { "authors": "Daniel ; Kahneman; Amos Tversky", "journal": "Psychological Review", "ref_id": "b21", "title": "On the Psychology of Prediction", "year": "1973" }, { "authors": "D Kahneman", "journal": "American Psychologist", "ref_id": "b22", "title": "A perspective on judgment and choice: Mapping bounded rationality", "year": "2003" }, { "authors": "D Kahneman", "journal": "Macmillan", "ref_id": "b23", "title": "Thinking, fast and slow", "year": "2011" }, { "authors": "J R Anderson; C Lebiere", "journal": "Behavioral and brain Sciences", "ref_id": "b24", "title": "The Newell test for a theory of cognition", "year": "2003" }, { "authors": "G Keren; Y Schul", "journal": "Perspectives on psychological science", "ref_id": "b25", "title": "Two is not always better than one: A critical evaluation of two-system theories", "year": "2009" }, { "authors": "J Laird", "journal": "Journal of Artificial General Intelligence", "ref_id": "b26", "title": "Intelligence, knowledge & human-like intelligence", "year": "2020" }, { "authors": "J Laird; C Lebiere; P Rosenbloom", "journal": "Ai Magazine", "ref_id": "b27", "title": "A standard model of the mind", "year": "2017" }, { "authors": "S Legg; M Hutter", "journal": "Minds and machines", "ref_id": "b28", "title": "Universal intelligence: A definition of machine intelligence", "year": "2007" }, { "authors": "P Mcleod; K Plunkett; E T Rolls", "journal": "Oxford University Press", "ref_id": "b29", "title": "Introduction to connectionist modelling of cognitive processes", "year": "1998" }, { "authors": "D G Mitchell", "journal": "Behavioural brain research", "ref_id": "b30", "title": "The nexus between decision making and emotion regulation: a review of convergent neurocognitive substrates", "year": "2011" }, { "authors": "A Moors", "journal": "Annual review of psychology", "ref_id": "b31", "title": "Automaticity: Componential, causal, and mechanistic explanations", "year": "2016" }, { "authors": "J Panksepp; L Biven", "journal": "WW Norton & Company", "ref_id": "b32", "title": "The archaeology of mind: neuroevolutionary origins of human emotions", "year": "2012" }, { "authors": "J Proust", "journal": "", "ref_id": "b33", "title": "The philosophy of metacognition: Mental agency and self-awareness", "year": "2013" }, { "authors": "P S Rosenbloom; J Gratch; V Ustun", "journal": "Springer", "ref_id": "b34", "title": "Towards emotion in sigma: from appraisal to attention", "year": "2015-07" }, { "authors": "J W Payne; J R Bettman; E J Johnson", "journal": "Annual review of psychology", "ref_id": "b35", "title": "Behavioral decision research: A constructive processing perspective", "year": "1992" }, { "authors": "G Pennycook; W De Neys; J S B Evans; K E Stanovich; V A Thompson", "journal": "Trends in Cognitive Sciences", "ref_id": "b36", "title": "The mythical dualprocess typology", "year": "2018" }, { "authors": "N Shea; A Boldt; D Bang; N Yeung; C Heyes; C D Frith", "journal": "Trends in cognitive sciences", "ref_id": "b37", "title": "Supra-personal cognitive control and metacognition", "year": "2014" }, { "authors": "A Shenhav", "journal": "Annu. Rev. Neurosci", "ref_id": "b38", "title": "Toward a rational and mechanistic account of mental effort", "year": "2017" }, { "authors": "H A Simon", "journal": "The Quarterly Journal of Economics", "ref_id": "b39", "title": "A behavioural model of rational choice", "year": "1955" }, { "authors": "P Slovic; M L Finucane; E Peters; D G Macgregor", "journal": "Risk Analysis: An International Journal", "ref_id": "b40", "title": "Risk as analysis and risk as feelings: Some thoughts about affect, reason, risk, and rationality", "year": "2004" }, { "authors": "K Stanovich", "journal": "Psychology Press", "ref_id": "b41", "title": "Who is rational? Studies of individual differences in reasoning", "year": "1999" }, { "authors": "K E Stanovich; R F West", "journal": "Behavioral and brain sciences", "ref_id": "b42", "title": "Individual differences in reasoning: Implications for the rationality debate?", "year": "2000" }, { "authors": "Z Steine-Hanson; N Koh; A Stocco", "journal": "", "ref_id": "b43", "title": "Refining the Common Model of Cognition through large neuroscience data", "year": "2018" }, { "authors": "F Strack; R Deutsch", "journal": "Personality and social psychology review", "ref_id": "b44", "title": "Reflective and impulsive determinants of social behavior", "year": "2004" }, { "authors": "A Stocco; C Lebiere; J R Anderson", "journal": "Psychological review", "ref_id": "b45", "title": "Conditional routing of information to the cortex: A model of the basal ganglia's role in cognitive coordination", "year": "2010" }, { "authors": "A Stocco; C Sibert; Z Steine-Hanson; N Koh; J E Laird; C J Lebiere; P Rosenbloom", "journal": "NeuroImage", "ref_id": "b46", "title": "Analysis of the human connectome data supports the notion of a \"Common Model of Cognition\" for human and human-like intelligence across domains", "year": "2021" }, { "authors": "R Sun; P Slusarz; C Terry", "journal": "Psychological review", "ref_id": "b47", "title": "The interaction of the explicit and the implicit in skill learning: a dual-process approach", "year": "2005" }, { "authors": "S W Tay; P Ryan; C A Ryan", "journal": "Canadian medical education journal", "ref_id": "b48", "title": "Systems 1 and 2 thinking processes and cognitive reflection testing in medical students", "year": "2016" }, { "authors": "R Thomson; A Pyke; L M Hiatt; J G Trafton", "journal": "", "ref_id": "b49", "title": "An Account of Associative Learning in Memory Recall", "year": "2015-07" }, { "authors": "T Tsujii; S Watanabe", "journal": "Brain Research", "ref_id": "b50", "title": "Neural correlates of dual-task effect on belief-bias syllogistic reasoning: a near-infrared spectroscopy study", "year": "2009" }, { "authors": "D Wallach; C Lebiere", "journal": "Advances in Consciousness Research", "ref_id": "b51", "title": "Implicit and explicit learning in a unified architecture of cognition", "year": "2003" }, { "authors": "P C Wason; J S B Evans", "journal": "Cognition", "ref_id": "b52", "title": "Dual processes in reasoning?", "year": "1974" }, { "authors": "R L West; B Conway-Smith", "journal": "", "ref_id": "b53", "title": "Put Feeling into Cognitive Models: A Computational Theory of Feeling", "year": "2019" }, { "authors": "R L West; J T Young", "journal": "", "ref_id": "b54", "title": "Proposal to add emotion to the standard model", "year": "2017-10" }, { "authors": "R B Zajonc", "journal": "American psychologist", "ref_id": "b55", "title": "Feeling and thinking: Preferences need no inferences", "year": "1980" } ]
[]
2023-11-01
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b10", "b4", "b62", "b19", "b41", "b32", "b14", "b56", "b7", "b52", "b16", "b29", "b53", "b38", "b1", "b30", "b36", "b38", "b33", "b48", "b48", "b38", "b48", "b22", "b2", "b43", "b34", "b21", "b35", "b61" ], "table_ref": [], "text": "Diffusion models have demonstrated remarkable ability in generating high-quality samples in multiple fields [11,5,63,20,42,33,15,57,8,53]. Compared to generative adversarial networks (GANs) [17] and variational autoencoders (VAEs) [30], diffusion models do not face the issue of mode collapse and posterior collapse, thus training is more stable. Nonetheless, the application of diffusion models is limited by two major bottlenecks. Firstly, diffusion models typically require hundreds of denoising steps to generate high-quality samples, making the process significantly slower than that of GANs. To address this, many studies [54,39,2,31,37] have proposed advanced training-free sampler to reduce the number of denoising iterations. Among them, a recent study DPM-solver [39] curtails the denoising process to ten steps by analytically computing the linear part of the diffusion ordinary Q-Diffusion (W4A8) PTQD (W4A8)\nFull Precision\nFigure 1: The comparisons of samples generated by Q-Diffusion [34], PTQD and full-precision LDM-4 [49] on CelebA-HQ 256 × 256 dataset. Here, WxAy indicates the weights are quantized to x-bit while the activations are quantized to y-bit.\ndifferential equations (ODEs). Nevertheless, diffusion models with these fast samplers are not yet ready for real-time applications. For instance, even when executed on a high-performance platform such as the RTX 3090, Stable Diffusion [49] with the DPM-Solver [39] sampler still takes over a second to generate a 512 × 512 image. Second, the application of diffusion models on various devices is constrained by the massive parameters and computational complexity. To illustrate, executing Stable Diffusion [49] requires 16GB of running memory and GPUs with over 10GB of VRAM, which is infeasible for most consumer-grade PCs, not to mention resource-constrained edge devices.\nModel quantization, which employs lower numerical bitwidth to represent weights and activations, has been widely studied to reduce memory footprint and computational complexity. For instance, employing 8-bit models can result in a significant speed-up of 2.2× compared to floating-point models on ARM CPUs [23]. Adopting 4-bit quantization can further deliver a throughput increase of up to 59% compared to 8-bit quantization [3]. To facilitate the quantization process without the need for re-training, post-training quantization (PTQ) has emerged as a widely used technique, which is highly practical and easy to implement. While PTQ on traditional models have been widely studied [44,35,22,36,62], its application on diffusion models incurs two new challenges at the fundamental level. First, with the noise prediction network quantized, its quantization noise inevitably introduces bias in the estimated mean and brings additional variance that collides with the predetermined variance schedule in each denoising step. Additionally, the quantization noise accumulates as the iterative sampling process progresses, leading to a significant drop in the signalto-noise ratio (SNR) of the noise prediction network in the later denoising steps. This diminished SNR severely impedes the denoising capability, resulting in a noticeable degradation in the quality of the generated images.\nTo tackle the aforementioned challenges, we present PTQD, a novel post-training quantization framework for diffusion models. To address the mean deviation and additional variance in each denoising step, we model the quantization noise by disentangling it into its correlated and residual uncorrelated parts regarding its full-precision counterpart, and designs separate correction methods for them. By estimating the correlation coefficient, the correlated part can be easily rectified. For the residual uncorrelated part, we subtract the bias from the estimated mean and propose variance schedule calibration, which absorbs the additional variance into the diffusion perturbed noise. To overcome the issue of low SNR that diminishes denoising capability in later denoising steps, we introduce a step-aware mixed precision scheme, which adaptively allocates different bitwidths for synonymous steps to maintain a high SNR for the denoising process.\nIn summary, our contributions are as follows:\n• We present PTQD, a novel post-training quantization framework for diffusion models, which provides a unified formulation for quantization noise and diffusion perturbed noise.\n• We disentangle the quantization noise into correlated and uncorrelated parts regarding its fullprecision counterpart. Then we correct the correlated part by estimating the correlation coefficient, and propose variance schedule calibration to rectify the residual uncorrelated part.\n• We introduce a step-aware mixed precision scheme, which dynamically selects the appropriate bitwidths for synonymous steps, preserving SNR throughout the denoising process.\n• Our extensive experiments demonstrate that our method reaches a new state-of-the-art performance for post-training quantization of diffusion models." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b40", "b50", "b6", "b13", "b68", "b42", "b28", "b46", "b59", "b31", "b53", "b30", "b67", "b60", "b1", "b0", "b54", "b24", "b27", "b38", "b36", "b66", "b15", "b37", "b23", "b69", "b65", "b34", "b43", "b21", "b61", "b35", "b43", "b21", "b57", "b34", "b61", "b20", "b12", "b3", "b63", "b5", "b12", "b5", "b3", "b51", "b33", "b34" ], "table_ref": [], "text": "Efficient diffusion models. While diffusion models can produce high-quality samples, their slow generation speed hinders their large-scale applications in downstream tasks. To explore efficient diffusion models, many methods have been proposed to expedite the sampling process. These methods can be classified into two categories: methods that necessitate re-training and advanced samplers for pre-trained models that do not require training. The first category of methods comprises knowledge distillation [41,51], diffusion scheme learning [7,14,69,43], noise scale learning [29,47], and sample trajectory learning [60,32]. Although these methods can accelerate sampling, re-training a diffusion model can be resource-intensive and time-consuming. On the other hand, the second category of methods designs advanced samplers directly on pre-trained diffusion models, eliminating the need for re-training. The primary methods in this category are implicit sampler [54,31,68,61], analytical trajectory estimation [2,1], and differential equation (DE) solvers such as customized SDE [55,25,28] and ODE [39,37,67]. Although these methods can reduce the sampling iterations, the diffusion model's massive parameters and computational complexity restrict their use to highperformance platforms. Conversely, our proposed low-bit diffusion model can significantly reduce the model's computational complexity while speeding up the sampling and reducing the demand for hardware computing resources in a training-free manner.\nModel quantization. Quantization is a dominant technique to save memory costs and speed up computation. It can be divided into two categories: quantization-aware training (QAT) [16,38,24,70,66] and post-training quantization (PTQ) [35,44,22,62,36]. QAT involves simulating quantization during training to achieve good performance with lower precision, but it requires substantial time, computational resources, and access to the original dataset. In contrast, PTQ does not require fine-tuning and only needs a small amount of unlabeled data to calibrate. Recent studies have pushed the limits of PTQ to 4-bit on traditional models by using new rounding strategies [44], layer-wise calibration [22,58], and second-order statistics [35,62]. Additionally, mixed precision (MP) [21,13,4,64,6] allows a part of the model to be represented by lower bitwidths to accelerate inference. Common criteria for determining quantization bitwidths include Hessian spectrum [13,6] or Pareto frontier [4]. In contrast, we propose a novel mixed-precision scheme for diffusion models that adapts different bitwidths for synonymous denoising steps.\nUntil now, there have been few studies specifically focusing on quantizing a pre-trained diffusion model without re-training. PTQ4DM [52] is the first attempt to quantize diffusion models to 8-bit, but its experiments are limited to small datasets and low resolution. Q-Diffusion [34] applies advanced PTQ techniques proposed by BRECQ [35] to improve performance and evaluate it on a wider range of datasets. Our paper aims to analyze systematically the quantization effect on diffusion models and establish a unified framework for accurate post-training diffusion quantization." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Diffusion Models", "publication_ref": [ "b53", "b18", "b18", "b53" ], "table_ref": [], "text": "Diffusion models [54,19] gradually apply Gaussian noise to real data x 0 in the forward process and learn a reverse process to denoise and generate high-quality images. For DDPMs [19], the forward process is a Markov chain, which can be formulated as:\nq(x 1:T |x 0 ) = T t=1 q(x t |x t-1 ), q(x t |x t-1 ) = N (x t ; √ α t x t-1 , β t I)(1)\nwhere α t , β t are hyperparameteres and β t = 1 -α t .\nIn the reverse process, since directly estimating the real distribution of q(x t-1 |x t ) is intractable, diffusion models approximate it via variational inference by learning a Gaussian distribution p θ (x t-1 |x t ) = N (x t-1 ; µ θ (x t , t), Σ θ (x t , t)) and reparameterize its mean by a noise prediction network ϵ θ (x t , t):\nµ θ (x t , t) = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t)(2)\nwhere ᾱt = t s=1 α s . The variance Σ θ (x t , t) can either be reparameterized or fixed to a constant schedule σ t . When it uses a constant schedule, the sampling of x t-1 can be formulated as:\nx t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + σ t z, where z ∼ N (0, I).(3)\nOur method focuses on post-training quantization of diffusion models without the need of training. Instead, we use pre-trained diffusion models and inherit their hyperparameters and variance schedules for inference. Although the derivations presented in this paper are based on DDPM, they can be readily extended to other fast sampling methods, such as DDIM [54]. Additional information can be found in the supplementary material." }, { "figure_ref": [], "heading": "Model Quantization", "publication_ref": [], "table_ref": [], "text": "We use uniform quantization in our study and all the experiments. For uniform quantization, given a floating-point vector x, the target bitwidth b, the quantization process can be defined as:\nx = ∆ • clip(⌊ x ∆ ⌉ + Z, 0, 2 b -1) -Z ,(4)\nwhere\n⌊•⌉ is the round operation, ∆ = max(x)-min(x) 2 b -1\nand Z = -⌊ min(x) ∆ ⌉. To ensure clarity and consistency, we introduce notation to define the variables used in the paper. Let X be a tensor (weights or activations) in the full-precision model, the result after normalization layers is denoted as X. The corresponding tensor of the quantized model is represented as X. The quantization noise is depicted by ∆ X , which is the difference between X and X." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Model quantization discretizes the weights and activations, which will inevitably introduce quantization noise into the result. As per Eq. ( 3), during the reverse process of the quantized diffusion model, the sampling of x t-1 can be expressed as:\nx t-1 = 1 √ α t x t - β t √ 1 -ᾱt εθ (x t , t) + σ t z (5) = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + ∆ ϵ θ (xt,t) + σ t z.\nHere, εθ (x t , t) is the output of the quantized noise prediction network and ∆ ϵ θ (xt,t) refers to the quantization noise. The additional quantization noise will inevitably alter the mean and variance of x t-1 , decreasing the signal-to-noise ratio (SNR) and adversely affecting the quality of the generated samples. Therefore, to mitigate the impact of quantization, it is necessary to correct the mean and variance to restore the SNR at each step of the reverse process." }, { "figure_ref": [], "heading": "Correlation Disentanglement", "publication_ref": [ "b48", "b64" ], "table_ref": [], "text": "We begin by making an assumption that a correlation exists between the quantization noise and the result of the full-precision noise prediction network. While other factors, such as nonlinear operations, may contribute to this correlation, Proposition 1 demonstrates that normalization layers are responsible for a part of it.\nProposition 1. Given Y and Ŷ as inputs to a normalization layer in a full-precision model and its quantized version, where the quantization noise ∆ Y = Ŷ -Y is initially uncorrelated with Y , a correlation between the quantization noise and the output of the full-precision model after the normalization layer will exist.\nThe proof is based on the fact that the mean and variance of Ŷ will differ from that of Y (depending on the specific quantization scheme). Therefore, the quantization noise after normalization layer can be expressed as :\n∆ Y = Ŷ -µ Ŷ σ Ŷ - Y -µ Y σ Y = σ Y ∆ Y -(σ Ŷ -σ Y )Y + σ Ŷ µ Y -σ Y µ Ŷ σ Ŷ σ Y .(6)\n2.5 0.0 2.5\nθ (x t , t) 0.5 0.0 0.5 ∆ θ (x t , t) t=199 R 2 =0.98 2.5 0.0 2.5 θ (x t , t) 0.5 0.0 0.5 ∆ θ (x t , t) t=100 R 2 =0.72 1 0 1 θ (x t , t) 1 0 1 ∆ θ (x t , t) t=0 R 2 =0.85 Figure 2:\nThe correlation between the quantization noise (Y-axis) and the output of the full-precision noise prediction network (X-axis). Each data point on the plot corresponds to specific entries within these vectors. Data were collected by generating samples with 4-bit LDM-8 [49] for 200 steps on LSUN-Churches [65].\nHere, we omit the affine transform parameters in normalization layers for simplicity. It can be observed from Eq. ( 6) that the second term in the numerator is related to Y , while the other three terms are uncorrelated. Therefore, after normalization layers, the quantization noise ∆ Y will be correlated with Y .\nThe empirical observation illustrated in Figure 2 confirms a strong correlation between the quantization noise and the output of the full-precision noise prediction network, which further verifies our assumption. Based on the assumption and observation, the quantization noise of the quantized noise prediction network can be disentangled into two parts:\n∆ ϵ θ (xt,t) = kϵ θ (x t , t) + ∆ ′ ϵ θ (xt,t) .(7)\nThe first part, denoted by kϵ θ (x t , t), is linearly related to ϵ θ (x t , t). The second part, expressed by ∆ ′ ϵ θ (xt,t) , represents the residual component of the quantization noise, and is assumed to be uncorrelated with ϵ θ (x t , t). Here, k is the correlation coefficient, which can be estimated by applying linear regression on the quantization noise ∆ ϵ θ (xt,t) and the original value ϵ θ (x t , t). Details can be found in Section 5.1.\nWith the disentanglement presented in Eq. ( 7), the sampling of x t-1 can be further expressed as:\nx t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + ∆ ϵ θ (xt,t) + σ t z (8) = 1 √ α t x t - β t √ 1 -ᾱt (1 + k) ϵ θ (x t , t) + ∆ ′ ϵ θ (xt,t) + σ t z.\nConsequently, the bias and additional variance arise from both the correlated and uncorrelated parts of quantization noise. In the following section, we will provide a detailed explanation of how these two parts of quantization noise can be separately corrected." }, { "figure_ref": [], "heading": "Quantization Noise Correction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Correlated Noise Correction", "publication_ref": [], "table_ref": [], "text": "Based on Eq. ( 8), the correlated part of the quantization noise can be rectified by dividing the output of the quantized noise prediction network εθ (x t , t) by 1 + k, resulting in:\nx t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ(xt,t) + σ t z - β t √ α t √ 1 -ᾱt (1 + k) ∆ ′ ϵ θ (xt,t) .(9)\nConsequently, only the uncorrelated quantization noise remains. Moreover, for values of k ≥ 0, it can be deduced that the mean and variance of the uncorrelated quantization noise are diminished by 1 1+k . In practice, we enforce the non-negativity of k, and reset it to zero if it is negative. In the following, we will explain how to handle the uncorrelated quantization noise that persists in Eq. ( 9)." }, { "figure_ref": [ "fig_0" ], "heading": "Uncorrelated Noise Correction", "publication_ref": [ "b44", "b34" ], "table_ref": [], "text": "The presence of uncorrelated quantization noise introduces additional variance at each step, resulting in a total variance that exceeds the scheduled value σ 2 t . To address this, we propose to calibrate the variance schedule for quantized diffusion models, which is denoted as σ ′ 2 t and smaller than the original schedule σ 2 t . To estimate σ ′ 2 t , we further make an assumption and model the uncorrelated quantization noise as a Gaussian distribution with a mean of µ q and a variance of σ 2 q :\n∆ ′ ϵ θ (xt,t) ∼ N (µ q , σ q ).(10)\nTo verify this assumption, we conduct statistical tests (refer to the supplementary material) and present the distribution of the uncorrelated quantization noise in Figure 3. The values of mean and variance can be estimated by generating samples with both quantized and full-precision diffusion models and collecting the statistics of the uncorrelated quantization noise. Following prior work [45], the mean deviation can be rectified through Bias Correction (BC), where we collect the channel-wise means of uncorrelated quantization noise and subtract them from the output of quantized noise prediction network. For the variance of the uncorrelated quantization noise, we propose Variance Schedule Calibration (VSC), where the uncorrelated quantization noise can be absorbed into Gaussian diffusion noise with the above assumption. By substituting the calibrated variance schedule σ ′ 2 t into Eq. ( 9) while keeping the variance of each step unaltered, we can solve for the optimal variance schedule using the following approach:\nσ ′ 2 t + β 2 t α t (1 -ᾱt )(1 + k) 2 σ 2 q = σ 2 t ,(11) σ\n′ 2 t = σ 2 t - β 2 t αt(1-ᾱt)(1+k) 2 σ 2 q , if σ 2 t ≥ β 2 t αt(1-ᾱt)(1+k) 2 σ 2 q 0, otherwise.(12)\nIt can be observed that if the additional variance of quantization noise is smaller than the noise hyperparameter σ 2 t , the increase in variance caused by quantization can be eliminated. According to Eq. ( 12), the coefficient for the variance of the quantization noise can be calculated as 2 , which is generally small enough to ensure that the quantization noise can be fully absorbed, except for cases of deterministic sampling where σ t is zero. In this case, there is no analytical solution for σ ′ 2 t , and we use the optimal solution that is σ ′ 2 t = 0. Overall, the process quantization noise correction is summarized in Algorithm 1.\nβ 2 t αt(1-ᾱt)(1+k)\nAlgorithm 1: Quantization noise correction. Statistics collection before sampling: 1) Quantize diffusion models with BRECQ [35] (or other PTQ methods); 2) Generate samples with both quantized and FP models and collect quantization noise; 3) Calculate the correlated coefficient k based on Eq. ( 7), and the mean and variance of the uncorrelated quantization noise as per Eq. ( 10); Noise correction for each sampling step: 4) Correct the correlated part of the quantization noise by dividing the output of the noise prediction network by 1 + k; 5) Calibrate the variance schedule by Eq. ( 12) and subtract the channel-wise biases from the output of the quantized noise prediction network.\nAlthough the proposed method can correct the mean deviation and the numerical value of the variance for each step, generating satisfactory samples with low-bit diffusion models remains challenging due to the low signal-to-noise ratio (SNR) of the quantized noise prediction network. In the next section, we will analyze this issue in detail. \nSNR Q (W4A4, corrected) SNR Q (W4A4, uncorrected) SNR Q (W4A8, corrected) SNR Q (W4A8, uncorrected) SNR F" }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Step-aware Mixed Precision", "publication_ref": [ "b38", "b28" ], "table_ref": [], "text": "Given the output of the full-precision noise prediction network ϵ θ (x t , t) and corresponding quantization noise ∆ ϵ θ (xt,t) , we define the SNR Q of quantized noise prediction network by:\nSNR Q (t) = ∥ϵ θ (x t , t)∥ 2 ∥∆ ϵ θ (xt,t) ∥ 2 . (13\n)\nFigure 4 depicts the SNR Q with various bitwidths and correction methods. The figure reveals several insights: 1) SNR Q drops drastically as step t decreases; 2) models with higher bitwidth exhibit larger SNR Q ; 3) the proposed correction methods yield clear SNR Q improvements, especially for large steps. The first observation highlights the challenge of generating high-quality samples using low-bit diffusion models. In particular, as t approaches zero, the SNR Q of W4A4 diffusion models diminishes and approaches unity, implying that the magnitude of quantization noise is even comparable to the original result of the noise prediction network. To enable low-bit diffusion models while maintaining good generation performance, we propose a novel approach called Step-aware Mixed Precision, which involves setting different bitwidths for synonymous steps to keep SNR Q within a reasonable range across all steps.\nSpecifically, the bitwidth of weights is fixed and shared across different denoising steps, which eliminates the need to store and reload multiple model state files during the sampling process.\nAs a result, we only adjust the bitwidth of activations. Formally, we predefine a set of bitwidths B = {b 1 , b 2 , . . . , b n } for activations and evaluate the SNR Q under each bitwidth. To establish a benchmark for SNR Q , we follow prior studies [39,29] and introduce SNR F based on the forward process, which denotes the degree of data noise at each step:\nSNR F (t) = α 2 t /σ 2 t .(14)\nFigure 4 illustrates SNR F (t), which decreases strictly with respect to steps t. To determine the optimal bitwidth for each step t, we compare the SNR Q of each bitwidth with SNR F , and select the minimum bitwidth b min that satisfies:\nSNR Q bmin (t) > SNR F (t). (15\n)\nIf none of the bitwidths satisfies this condition, we utilize the maximum bitwidth in B to achieve a higher SNR. In practice, models with different bitwidths are calibrated separately, with the calibration set collected from the corresponding steps." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b48", "b9", "b64", "b48", "b34", "b43", "b33", "b47", "b17", "b45", "b49", "b10", "b58", "b9" ], "table_ref": [], "text": "Datasets and quantization settings. We conduct image synthesis experiments using latent diffusion models (LDM) [49] on three standard benchmarks: ImageNet [10], LSUN-Bedrooms, and LSUN-Churches [65], each with a resolution of 256 × 256. All experimental configurations, including the number of steps, variance schedule (denoted by eta in the following), and classifier-free guidance scale, follow the official implementation [49]. For low-bit quantization, we use the PTQ method proposed in BRECQ [35] and AdaRound [44], which is congruent with Q-Diffusion [34]. For 8-bit quantization on ImageNet, we only use a naive PTQ method proposed by TensorRT [48], which is simple and fast. The input and output layers in the model are fixed to 8-bit, while all other convolutional and linear layers are quantized to the target bitwidth. In mixed precision experiments, we fix the weights to 4-bit and use Eq. ( 15) to determine the bitwidth of activations over uncorrected quantized diffusion models with a bitwidth set of {4, 8}. Details of bitwidth allocation can be found in the supplementary material.\nEvaluation metrics. For each experiment, we report the widely adopted Frechet Inception Distance (FID) [18] and sFID [46] to evaluate the performance. For ImageNet experiments, we additionally report Inception Score (IS) [50] for reference. To ensure consistency in the reported outcomes, including those of the baseline methods, all results are obtained by our implementation. We sample 50,000 images and evaluate them with ADM's TensorFlow evaluation suite [11]. To quantify the computational efficiency, we measure Bit Operations (BOPs) for a single forward pass of the diffusion model using the equation BOPs = MACs • b w • b a , where MACs denotes Multiply-And-Accumulate operations, and b w and b a represent the bitwidth of weights and activations, respectively, following [59].\nStatistics collection. Before implementing our method, three statistics need to be collected: the correlation coefficient, denoted as k in Eq. ( 7), and the mean and variance of the uncorrelated quantization noise, as depicted in Eq. (10). To obtain these statistics, we generate 1024 samples using both quantized and full-precision diffusion models, store the quantization noise at each step, and then calculate the required statistics." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "As shown in Additional ablation experiments can be found in the supplementary material." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Class-conditional Generation", "publication_ref": [ "b47" ], "table_ref": [ "tab_2" ], "text": "In this section, we evaluate the performance of class-conditional image generation on 256 × 256 ImageNet dataset, as presented in Table 2. By utilizing the Naive PTQ method [48] and quantizing to 8-bit, diffusion models can achieve a notable 12.39× reduction in bit operations, while experiencing minimal increases in FID/sFID. With the aggressive W4A8 bitwidth setting, our method effectively narrows the FID gap to a mere 0.06 with 250 generation steps. In this setting, the model size is " }, { "figure_ref": [], "heading": "Unconditional Generation", "publication_ref": [ "b64", "b33" ], "table_ref": [ "tab_3" ], "text": "In this section, we present a comprehensive evaluation of our approach on LSUN-Bedrooms and LSUN-Churches [65] datasets for unconditional image generation. As shown in Table 3, our method consistently narrows the performance gap between quantized and full-precision diffusion models. Notably, our proposed method allows for compression of diffusion models to 8-bit with minimal performance degradation, resulting in a mere 0.1 increase in FID on the LSUN-Churches dataset.\nWith the W4A8 bitwidth setting, our method reduces FID and sFID by notably 0.78 and 3.61 compared with Q-Diffusion [34] on LSUN-Bedrooms. Furthermore, Q-Diffusion fails to effectively denoise samples under the mixed precision setting on LSUN-Churches due to its low SNR. In this case, the hyperparameter eta is set to zero, which prevents the use of Variance Schedule Calibration. Despite relying solely on Correlated Noise Correction and Bias Correction, our approach remarkably enhances the quality of the generated images, as demonstrated by a substantial reduction in the FID score from 218.59 to 17.99. This notable improvement highlights the significant impact of the correlated part of quantization noise on the overall image quality, which can be effectively rectified by our method.\nAdditional evaluation results on CelebA-HQ dataset can be found in the supplementary material. " }, { "figure_ref": [], "heading": "Deployment Efficiency", "publication_ref": [ "b26" ], "table_ref": [ "tab_4" ], "text": "We have measured the latency of matrix multiplication and convolution operations in quantized and full-precision diffusion models using an RTX3090 GPU, as shown in Table 4. Both floating-point and quantized operations are implemented with CUTLASS [27]. When both weights and activations are quantized to 8-bit, we observe a 2.03× reduction in latency compared to its full-precision counterpart over LDM-4. Moreover, when weights and activations are quantized to 4-bit, the speedup further increases to 3.34×. The mixed-precision settings explored in our experiments strike a good balance between latency and model performance. " }, { "figure_ref": [ "fig_3", "fig_3", "fig_3" ], "heading": "Conclusion and Future Work", "publication_ref": [ "b11", "b8" ], "table_ref": [], "text": "In this paper, we have proposed PTQD, a novel post-training quantization framework for diffusion models that unifies the formulation of quantization noise and diffusion perturbed noise. To start with, we have disentangled the quantization noise into correlated and residual uncorrelated parts relative to its full-precision counterpart. To reduce mean deviations and additional variance in each step, the correlated part can be easily corrected by estimating the correlation coefficient. For the uncorrelated part, we have proposed Variance Schedule Calibration to absorb its additional variance and Bias Correction to correct the mean deviations. Moreover, we have introduced Step-aware Mixed Precision to adaptively select the optimal bitwidth for each denoising step. By incorporating these techniques, our PTQD has achieved significant performance improvement over existing state-of-the-art posttraining quantized diffusion models, with only a 0.06 FID increase compared to the full-precision LDM-4 on ImageNet 256 × 256 while saving 19.9× bit-operations. In the future, we can further quantize other components within diffusion models, such as the text encoder and image decoder, to achieve higher compression ratios and accelerated performance. We may also extend PTQD to a wider range of generative tasks to assess its efficacy and generalizability.\nLimitations and Broader Impacts. The proposed PTQD framework stands out for its high efficiency and energy-saving properties, which carry significant implications in reducing the carbon emissions attributed to the widespread deployment of diffusion models. However, similar to other deep generative models, PTQD has the potential to be utilized for producing counterfeit images and videos for malicious purposes.\ntest [12,9], with the null hypothesis proposing that the sample comes from a normal distribution. The outcomes are illustrated in Figure A, and they reveal that, with a significance level of 0.01, the null hypothesis cannot be rejected at any step, thus substantiating our assumption. In Figure B, we present the variance of the residual uncorrelated quantization noise. It can be observed that as the quantization bitwidth decreases, the variance of the quantization noise increases accordingly. Nonetheless, the coefficient associated with this variance is relatively small, allowing for its effective absorption into the calibrated diffusion variance schedule. Figure C illustrates the bias on the estimated mean introduced by the residual quantization noise. Notably, this bias exhibits significant variations across different channels, emphasizing the necessity for distinct correction procedures for each channel. Correlation analysis. In Figures D to G, we present the results of linear regression analysis conducted on the quantization noise and the output of the full-precision noise estimation network, which includes Pearson's coefficient R and the coefficient k as defined in Eq. (B). As depicted in Figures E andG, we observe a notably high R value for diffusion models with W4A4 bitwidth, indicating that the quantization noise primarily consists of the correlated component. This finding demonstrates the effectiveness of our method in rectifying this specific aspect of quantization noise, particularly in scenarios involving low bitwidth. In cases of diffusion models with W4A8 or W8A8 bitwidth, our approach can also correct a substantial portion of the quantization noise by leveraging the correlation. Additionally, for larger steps, the coefficient k generally exhibits positive values (which can also be observed in Figures E andG, as k and R value share the same sign), thereby affirming our capability to correct the correlated part of the quantization noise in these steps. " }, { "figure_ref": [], "heading": "C Additional experimental results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.1 Implementation details of step-aware mixed precision", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.2 Additional ablation experiments", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "In this section, we conduct additional ablation experiments with constant which are outlined in Table F. The experimental results consistently demonstrate performance improvements brought by each component of our method under constant precision settings. Notably, our method exhibits more significant improvements at lower bitwidth (W3A8) due to the inherent presence of greater quantization noise at these levels." }, { "figure_ref": [], "heading": "C.3 Comparisons with PTQ4DM", "publication_ref": [ "b51" ], "table_ref": [], "text": "Additionally, we include a comparison with the PTQ method PTQ4DM [52] on the LSUN-Bedrooms dataset, as shown in Table G. Remarkably, our proposed approach outperforms PTQ4DM in both W4A8 and W3A8 bitwidth scenarios." }, { "figure_ref": [], "heading": "C.4 Evaluation with advanced sampler", "publication_ref": [ "b36", "b39" ], "table_ref": [], "text": "Table H presents the results on a new dataset CelebA-HQ over recent DDPM variants PLMS [37], demonstrating the strong performance of PTQD under this configuration. Notably, the proposed PTQD reduces the FID and sFID by a considerable margin of 3.23 and 4.73 in comparison to Q-Diffusion, respectively.\nAdditionally, we present the results of our PTQD over latest DPM++ solver [40] on LSUN-Churches dataset, as shown in Table I. Notably, our PTQD with W3A8 bitwidth achieves a sFID result comparable to that of W4A8 Q-Diffusion. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgement This work was supported by National Key Research and Development Program of China (2022YFC3602601)." }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [ "b53", "b53", "b18", "b18" ], "table_ref": [], "text": "We organize our supplementary material as follows:\n• In section A, we provide a comprehensive explanation of extending PTQD to DDIM [54].\n• In section B, we show the statistical analysis of quantization noise.\n• In section C, we present additional experimental results.\n• In section D, we provide additional visualization results on ImageNet and LSUN dataset.\nA Extending PTQD to DDIM DDIM [54] generalizes DDPMs [19] via a class of non-Markovian diffusion processes, which can greatly accelerate the sampling process. Briefly, when DDIM is quantized, the sampling of x t-1 can be expressed as:\nwhere εθ (x t , t) is the result of quantized noise prediction network and ∆ ϵ θ (xt,t) is the quantization noise.\nFirstly, we disentangle the quantization noise to its correlated and residual uncorrelated part, which is same as in DDPM [19]:\nThen we can reformulate Eq. (A) as\nBy estimating the correlation coefficient k, the correlated part can be corrected by dividing the output of the quantized noise prediction network εθ (x t , t) by 1 + k:\nThen we calibrate the variance schedule, denoted as σ ′ t , to absorb the excess variance of residual quantization noise, which is depicted by\n√ αt , we have: " }, { "figure_ref": [], "heading": "B Statistical analysis", "publication_ref": [], "table_ref": [], "text": "" } ]
Diffusion models have recently dominated image synthesis and other related generative tasks. However, the iterative denoising process is expensive in computations at inference time, making diffusion models less practical for low-latency and scalable real-world applications. Post-training quantization of diffusion models can significantly reduce the model size and accelerate the sampling process without requiring any re-training. Nonetheless, applying existing post-training quantization methods directly to low-bit diffusion models can significantly impair the quality of generated samples. Specifically, for each denoising step, quantization noise leads to deviations in the estimated mean and mismatches with the predetermined variance schedule. Moreover, as the sampling process proceeds, the quantization noise may accumulate, resulting in a low signal-to-noise ratio (SNR) during the later denoising steps. To address these challenges, we propose a unified formulation for the quantization noise and diffusion perturbed noise in the quantized denoising process. Specifically, we first disentangle the quantization noise into its correlated and residual uncorrelated parts regarding its full-precision counterpart. The correlated part can be easily corrected by estimating the correlation coefficient. For the uncorrelated part, we subtract the bias from the quantized results to correct the mean deviation and calibrate the denoising variance schedule to absorb the excess variance resulting from quantization. Moreover, we introduce a mixed-precision scheme for selecting the optimal bitwidth for each denoising step, which prioritizes lower bitwidths to expedite early denoising steps, while ensuring that higher bitwidths maintain a high signal-to-noise ratio (SNR) in the later steps. Extensive experiments demonstrate that our method outperforms previous post-training quantized diffusion models in generating high-quality samples, with only a 0.06 increase in FID score compared to full-precision LDM-4 on ImageNet 256 × 256, while saving 19.9× bit operations.
PTQD: Accurate Post-Training Quantization for Diffusion Models
[ { "figure_caption": "Figure 3 :3Figure 3: The distribution of uncorrelated quantization noise collected from W4A8 LDM-4 on LSUN-Bedrooms 256 × 256 dataset, where the x-axis represents the range of values and the yaxis is the frequency of values.", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of the signal-to-noiseratio (SNR) in each step of LDM-4 on LSUN-Bedrooms across various bitwidths.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure A :tFigureAFigure A: The result of normal test for residual quantization noise across various steps. Data is collected from W4A4 LDM-8 on LSUN-Churches.", "figure_data": "", "figure_id": "fig_3", "figure_label": "A", "figure_type": "figure" }, { "figure_caption": "Figure D :Figure E :Figure F :Figure G :DEFGFigure D: The correlation coefficient k in each step of LDM-8 on LSUN-Churches.", "figure_data": "", "figure_id": "fig_4", "figure_label": "DEFG", "figure_type": "figure" }, { "figure_caption": "Figure I: The comparisons of samples generated by Q-Diffusion [34], PTQD and full-precision LDM-4 [49] on LSUN-Bedrooms 256 × 256. Compared with Q-Diffusion, samples generated by PTQD are less affected by quantization noise and exhibit a closer resemblance to the results of the full-precision model.", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "we conduct ablation experiments on ImageNet 256 × 256 dataset over LDM-4 model, to demonstrate the effectiveness of the proposed techniques. These techniques include Correlated Noise Correction (CNC) for addressing the correlated quantization noise, as well as Bias Correction (BC) and Variance Schedule Calibration (VSC) for correcting the residual uncorrelated quantization noise. By employing Correlated Noise Correction, we achieved a 0.48 reduction in FID and a 6.55 decrease in sFID. The considerable reduction in sFID suggests that the generated images possess more intricate spatial details than those generated using the baseline method, and that the correlated portions significantly contribute to the quantization noise. With the proposed Variance Schedule Calibration, the additional variance of uncorrelated quantization noise can be absorbed, achieving a reduction of 0.2 in FID and 0.11 in sFID. By further introducing Bias Correction that effectively corrects the mean deviation caused by quantization noise, our proposed PTQD achieved an FID of 6.44 and an sFID of 8.43, with only a 1.33 increase in sFID under the W4A4/W4A8 mixed precision setting. These results demonstrate the efficacy of the proposed techniques in achieving accurate post-training quantization of diffusion models.", "figure_data": "", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The effect of different components proposed in the paper. Here, MP denotes the proposed step-aware mixed precision scheme. 83× and the bit operations can be reduced by a remarkable 19.96×. In experiments utilizing mixed precision with W4A4 and W4A8 bitwidths, previous methods encounter difficulties in mitigating substantial quantization noises caused by low-bit quantization. For instance, in the first set of experiments with a 20-step generation process, Q-Diffusion[34] obtains FID and sFID scores as high as 116.61 and 172.99, respectively, indicating difficulties in handling low-bit diffusion models with fewer generation steps. While our method cannot calibrate the variance schedule due to a zero value for the hyperparameter eta, it still achieves an exceptionally low FID score of 7.75, demonstrating effective rectification of the correlated quantization noise and mean deviation. The second set of mixed precision experiments also yielded similar results, with our method reducing FID and sFID scores by 3.53 and 9.80, respectively.", "figure_data": "ModelsMethodBitwidth (W/A)FID↓ sFID↓Q-DiffusionMP9.9718.23LDM-4+ CNCMP9.4911.68(steps = 250+ CNC + VSCMP9.2911.57eta = 1.0PTQD (CNC + VSC + BC)MP6.448.43scale = 1.5)FP32/325.057.10compressed by 6.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparisons of class-conditional image generation on ImageNet 256 × 256.", "figure_data": "ModelMethodBitwidth (W/A)Model Size (MB)BOPs (T)BOP comp. ratioIS↑FID↓sFID↓FP32/321603.35102.21-225.16 12.457.85LDM-4 (steps = 20 eta = 0.0 scale = 3.0)Naive PTQ Ours Q-Diffusion Ours Q-Diffusion8/8 8/8 4/8 4/8 MP430.06 430.06 234.51 234.51 234.518.25 8.25 5.12 5.12 4.7312.39× 12.39× 19.96× 19.96× 21.61×152.91 12.14 153.92 11.94 212.52 10.63 214.73 10.40 7.86 116.61 172.99 8.43 8.03 14.80 12.68OursMP234.514.7321.61×175.197.7518.78FP32/321603.35102.21-185.045.057.10LDM-4 (steps = 250 eta = 1.0 scale = 1.5)Naive PTQ Ours Q-Diffusion Ours Q-Diffusion8/8 8/8 4/8 4/8 MP430.06 430.06 234.51 234.51 234.518.25 8.25 5.12 5.12 4.8112.39× 12.39× 19.96× 19.96× 21.25×180.56 180.83 148.74 149.74 121.104.06 4.02 5.37 5.11 9.975.91 5.81 9.56 8.49 18.23OursMP234.514.8121.25×126.266.448.43", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance comparisons of unconditional image generation.", "figure_data": "LSUN-Bedrooms 256 × 256LSUN-Churches 256 × 256LDM-4 (steps = 200, eta = 1.0)LDM-8 (steps = 200, eta = 0.0)MethodBitwidth (W/A)FID↓ sFID↓MethodBitwidth (W/A)FID↓sFID↓Full precision32/323.007.13Full precision32/326.3018.24Q-Diffusion8/83.809.95Q-Diffusion8/86.9418.93Ours8/83.759.89Ours8/86.4018.34Q-Diffusion4/86.7218.80Q-Diffusion4/87.8019.97Ours4/85.9415.16Ours4/87.3319.40Q-DiffusionMP5.7512.79Q-DiffusionMP218.59 312.86OursMP5.4912.04OursMP17.9937.34", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparisons of time cost across various bitwidth configurations on ImageNet 256 × 256. Due to the current lack of a fast implementation for W4A8, we implement MP scheme with W8A8 and W4A4 kernels.", "figure_data": "ModelBitwidth (W/A)Model Size (MB)FID↓sFID↓Time (s)LDM-432/321603.355.057.105.46(steps=2508/8430.064.025.812.68eta=1.0MP234.516.448.432.45scale=1.5)4/4234.51--1.63", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Table E presents the results of bitwidth allocation for each dataset, which are determined by Eq. (15) in the paper.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Bitwidth allocation for each dataset.", "figure_data": "DatasetW4A4 Step RangeW4A8 Step RangeImageNet (250 steps)249 to 202201 to 0ImageNet (20 steps)19 to 1514 to 0LSUN-Bedrooms199 to 155154 to 0LSUN-Churches199 to 146145 to 0", "figure_id": "tab_6", "figure_label": "E", "figure_type": "table" }, { "figure_caption": "Additional ablation study with constant precision on LSUN-Bedrooms dataset. As the bitwidth decreases, the efficacy of our approach becomes increasingly pronounced.Table J presents experimental results with deterministic and stochastic sampling on FFHQ and ImageNet dataset over LDM-4 model. While deterministic sampling has gained widespread adoption, it tends to result in lower output quality compared to stochastic sampling[55,26]. Specifically, when generating samples on FFHQ dataset with a deterministic DDIM sampler, introducing stochastic perturbations lower both the FID and sFID metrics. For experiments on ImageNet dataset, it greatly improves the IS with little increase in FID and sFID.", "figure_data": "ModelMethodBitwidth (W/A)FID↓ sFID↓FP32/323.007.13Q-Diffusion4/86.7218.80+CNC4/86.3116.28LDM-4+CNC+VSC4/86.1016.03(steps=200PTQD (CNC+VSC+BC)4/85.9415.16eta=1.0)Q-Diffusion3/88.3121.06+CNC3/87.0118.32+CNC+VSC3/86.6617.99PTQD (CNC+VSC+BC)3/86.4617.04", "figure_id": "tab_7", "figure_label": "F", "figure_type": "table" } ]
Yefei He; Luping Liu; Jing Liu; Weijia Wu; Hong Zhou; Bohan Zhuang
[ { "authors": "F Bao; C Li; J Sun; J Zhu; B Zhang", "journal": "", "ref_id": "b0", "title": "Estimating the optimal covariance with imperfect mean in diffusion probabilistic models", "year": "2022" }, { "authors": "F Bao; C Li; J Zhu; B Zhang", "journal": "", "ref_id": "b1", "title": "Analytic-dpm: an analytic estimate of the optimal reverse variance in diffusion probabilistic models", "year": "2022" }, { "authors": "N D Blog", "journal": "", "ref_id": "b2", "title": "Int4 for ai inference", "year": "2021-05" }, { "authors": "Y Cai; Z Yao; Z Dong; A Gholami; M W Mahoney; K Keutzer", "journal": "", "ref_id": "b3", "title": "Zeroq: A novel zero shot quantization framework", "year": "2020" }, { "authors": "N Chen; Y Zhang; H Zen; R J Weiss; M Norouzi; W Chan", "journal": "", "ref_id": "b4", "title": "Wavegrad: Estimating gradients for waveform generation", "year": "2021" }, { "authors": "W Chen; P Wang; J Cheng", "journal": "", "ref_id": "b5", "title": "Towards mixed-precision quantization of neural networks via constrained optimization", "year": "2021" }, { "authors": "H Chung; B Sim; J C Ye", "journal": "", "ref_id": "b6", "title": "Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction", "year": "2022" }, { "authors": "H Chung; J C Ye", "journal": "Medical Image Analysis", "ref_id": "b7", "title": "Score-based diffusion models for accelerated mri", "year": "2022" }, { "authors": "R ; E S Pearson", "journal": "Biometrika", "ref_id": "b8", "title": "Tests for departure from normality. empirical results for the distributions of b 2 and √ b", "year": "1973" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b9", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "P Dhariwal; A Q Nichol", "journal": "", "ref_id": "b10", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "R Diagostino", "journal": "Biometrika", "ref_id": "b11", "title": "An omnibus test of normality for moderate and large sample sizes", "year": "1971" }, { "authors": "Z Dong; Z Yao; A Gholami; M W Mahoney; K Keutzer", "journal": "", "ref_id": "b12", "title": "Hawq: Hessian aware quantization of neural networks with mixed-precision", "year": "2019" }, { "authors": "G Franzese; S Rossi; L Yang; A Finamore; D Rossi; M Filippone; P Michiardi", "journal": "", "ref_id": "b13", "title": "How much is enough? a study on diffusion times in score-based generative models", "year": "2022" }, { "authors": "G Giannone; D Nielsen; O Winther", "journal": "", "ref_id": "b14", "title": "Few-shot diffusion models", "year": "2022" }, { "authors": "R Gong; X Liu; S Jiang; T Li; P Hu; J Lin; F Yu; J Yan", "journal": "", "ref_id": "b15", "title": "Differentiable soft quantization: Bridging full-precision and low-bit neural networks", "year": "2019" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b16", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "M Heusel; H Ramsauer; T Unterthiner; B Nessler; S Hochreiter", "journal": "NeurIPS", "ref_id": "b17", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "", "ref_id": "b18", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "J Ho; T Salimans; A A Gritsenko; W Chan; M Norouzi; D J Fleet", "journal": "", "ref_id": "b19", "title": "Video diffusion models", "year": "2022" }, { "authors": "X Huang; Z Shen; S Li; Z Liu; H Xianghong; J Wicaksana; E Xing; K.-T Cheng", "journal": "", "ref_id": "b20", "title": "Sdq: Stochastic differentiable quantization with mixed precision", "year": "2022" }, { "authors": "I Hubara; Y Nahshan; Y Hanani; R Banner; D Soudry", "journal": "", "ref_id": "b21", "title": "Improving post training neural quantization: Layer-wise calibration and integer programming", "year": "2020" }, { "authors": "B Jacob; S Kligys; B Chen; M Zhu; M Tang; A Howard; H Adam; D Kalenichenko", "journal": "", "ref_id": "b22", "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference", "year": "2018" }, { "authors": "B Jacob; S Kligys; B Chen; M Zhu; M Tang; A Howard; H Adam; D Kalenichenko", "journal": "", "ref_id": "b23", "title": "Quantization and training of neural networks for efficient integer-arithmetic-only inference", "year": "2018" }, { "authors": "A Jolicoeur-Martineau; K Li; R Piché-Taillefer; T Kachman; I Mitliagkas", "journal": "", "ref_id": "b24", "title": "Gotta go fast when generating data with score-based models", "year": "2021" }, { "authors": "T Karras; M Aittala; T Aila; S Laine", "journal": "", "ref_id": "b25", "title": "Elucidating the design space of diffusion-based generative models", "year": "2022" }, { "authors": "A Kerr; D Merrill; J Demouth; J Tran", "journal": "", "ref_id": "b26", "title": "Cutlass: Fast linear algebra in cuda c++", "year": "2017" }, { "authors": "B Kim; J C Ye", "journal": "", "ref_id": "b27", "title": "Denoising mcmc for accelerating diffusion-based generative models", "year": "2023" }, { "authors": "D Kingma; T Salimans; B Poole; J Ho", "journal": "", "ref_id": "b28", "title": "Variational diffusion models", "year": "2021" }, { "authors": "D P Kingma; M Welling", "journal": "", "ref_id": "b29", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Z Kong; W Ping", "journal": "", "ref_id": "b30", "title": "On fast sampling of diffusion probabilistic models", "year": "2023" }, { "authors": "M W Y Lam; J Wang; D Su; D Yu", "journal": "", "ref_id": "b31", "title": "BDDM: bilateral denoising diffusion models for fast and high-quality speech synthesis", "year": "2022" }, { "authors": "A Levkovitch; E Nachmani; L Wolf", "journal": "", "ref_id": "b32", "title": "Zero-shot voice conditioning for denoising diffusion TTS models", "year": "2022" }, { "authors": "X Li; Y Liu; L Lian; H Yang; Z Dong; D Kang; S Zhang; K Keutzer", "journal": "", "ref_id": "b33", "title": "Q-diffusion: Quantizing diffusion models", "year": "2023" }, { "authors": "Y Li; R Gong; X Tan; Y Yang; P Hu; Q Zhang; F Yu; W Wang; S Gu", "journal": "", "ref_id": "b34", "title": "BRECQ: pushing the limit of post-training quantization by block reconstruction", "year": "2021" }, { "authors": "Y Lin; T Zhang; P Sun; Z Li; S Zhou", "journal": "", "ref_id": "b35", "title": "Fq-vit: Post-training quantization for fully quantized vision transformer", "year": "2022" }, { "authors": "L Liu; Y Ren; Z Lin; Z Zhao", "journal": "", "ref_id": "b36", "title": "Pseudo numerical methods for diffusion models on manifolds", "year": "2022" }, { "authors": "C Louizos; M Reisser; T Blankevoort; E Gavves; M Welling", "journal": "", "ref_id": "b37", "title": "Relaxed quantization for discretized neural networks", "year": "2019" }, { "authors": "C Lu; Y Zhou; F Bao; J Chen; C Li; J Zhu", "journal": "", "ref_id": "b38", "title": "Dpm-solver: A fast ODE solver for diffusion probabilistic model sampling in around 10 steps", "year": "2022" }, { "authors": "C Lu; Y Zhou; F Bao; J Chen; C Li; J Zhu", "journal": "", "ref_id": "b39", "title": "Dpm-solver++: Fast solver for guided sampling of diffusion probabilistic models", "year": "2022" }, { "authors": "E Luhman; T Luhman", "journal": "", "ref_id": "b40", "title": "Knowledge distillation in iterative generative models for improved sampling speed", "year": "2021" }, { "authors": "S Luo; W Hu", "journal": "", "ref_id": "b41", "title": "Diffusion probabilistic models for 3d point cloud generation", "year": "2021" }, { "authors": "Z Lyu; X Xu; C Yang; D Lin; B Dai", "journal": "", "ref_id": "b42", "title": "Accelerating diffusion models via early stop of the diffusion process", "year": "2022" }, { "authors": "M Nagel; R A Amjad; M Van Baalen; C Louizos; T Blankevoort", "journal": "", "ref_id": "b43", "title": "Up or down? adaptive rounding for post-training quantization", "year": "2020" }, { "authors": "M Nagel; M Van Baalen; T Blankevoort; M Welling", "journal": "", "ref_id": "b44", "title": "Data-free quantization through weight equalization and bias correction", "year": "2019" }, { "authors": "C Nash; J Menick; S Dieleman; P W Battaglia", "journal": "", "ref_id": "b45", "title": "Generating images with sparse representations", "year": "2021" }, { "authors": "A Q Nichol; P ", "journal": "", "ref_id": "b46", "title": "Improved denoising diffusion probabilistic models", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b47", "title": "TensorRT", "year": "2019-04-30" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b48", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "T Salimans; I Goodfellow; W Zaremba; V Cheung; A Radford; X Chen", "journal": "NeurIPS", "ref_id": "b49", "title": "Improved techniques for training gans", "year": "2016" }, { "authors": "T Salimans; J Ho", "journal": "", "ref_id": "b50", "title": "Progressive distillation for fast sampling of diffusion models", "year": "2022" }, { "authors": "Y Shang; Z Yuan; B Xie; B Wu; Y Yan", "journal": "", "ref_id": "b51", "title": "Post-training quantization on diffusion models", "year": "2022" }, { "authors": "C Shi; S Luo; M Xu; J Tang", "journal": "", "ref_id": "b52", "title": "Learning gradient fields for molecular conformation generation", "year": "2021" }, { "authors": "J Song; C Meng; S Ermon", "journal": "", "ref_id": "b53", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Y Song; J Sohl-Dickstein; D P Kingma; A Kumar; S Ermon; B Poole", "journal": "", "ref_id": "b54", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "P Virtanen; R Gommers; T E Oliphant; M Haberland; T Reddy; D Cournapeau; E Burovski; P Peterson; W Weckesser; J Bright", "journal": "Nature methods", "ref_id": "b55", "title": "Scipy 1.0: fundamental algorithms for scientific computing in python", "year": "2020" }, { "authors": "V Voleti; A Jolicoeur-Martineau; C Pal", "journal": "", "ref_id": "b56", "title": "MCVD -masked conditional video diffusion for prediction, generation, and interpolation", "year": "2022" }, { "authors": "P Wang; Q Chen; X He; J Cheng", "journal": "", "ref_id": "b57", "title": "Towards accurate post-training network quantization via bit-split and stitching", "year": "2020" }, { "authors": "Y Wang; Y Lu; T Blankevoort", "journal": "", "ref_id": "b58", "title": "Differentiable joint pruning and quantization for hardware efficiency", "year": "2020" }, { "authors": "D Watson; W Chan; J Ho; M Norouzi", "journal": "", "ref_id": "b59", "title": "Learning fast samplers for diffusion models by differentiating through sample quality", "year": "2022" }, { "authors": "D Watson; J Ho; M Norouzi; W Chan", "journal": "", "ref_id": "b60", "title": "Learning to efficiently sample from diffusion probabilistic models", "year": "2021" }, { "authors": "X Wei; R Gong; Y Li; X Liu; F Yu", "journal": "", "ref_id": "b61", "title": "Qdrop: Randomly dropping quantization for extremely low-bit post-training quantization", "year": "2022" }, { "authors": "W Wu; Y Zhao; M Z Shou; H Zhou; C Shen", "journal": "", "ref_id": "b62", "title": "Diffumask: Synthesizing images with pixel-level annotations for semantic segmentation using diffusion models", "year": "2023" }, { "authors": "H Yang; L Duan; Y Chen; H Li", "journal": "", "ref_id": "b63", "title": "BSQ: exploring bit-level sparsity for mixed-precision neural network quantization", "year": "2021" }, { "authors": "F Yu; A Seff; Y Zhang; S Song; T Funkhouser; J Xiao", "journal": "", "ref_id": "b64", "title": "Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop", "year": "2015" }, { "authors": "L Zhang; Y He; Z Lou; X Ye; Y Wang; H Zhou", "journal": "Applied Intelligence", "ref_id": "b65", "title": "Root quantization: a self-adaptive supplement ste", "year": "2023" }, { "authors": "Q Zhang; Y Chen", "journal": "", "ref_id": "b66", "title": "Fast sampling of diffusion models with exponential integrator", "year": "2022" }, { "authors": "Q Zhang; M Tao; Y Chen", "journal": "", "ref_id": "b67", "title": "gddim: Generalized denoising diffusion implicit models", "year": "2023" }, { "authors": "H Zheng; P He; W Chen; M Zhou", "journal": "stat", "ref_id": "b68", "title": "Truncated diffusion probabilistic models", "year": "2022" }, { "authors": "B Zhuang; C Shen; M Tan; L Liu; I Reid", "journal": "", "ref_id": "b69", "title": "Towards effective low-bitwidth convolutional neural networks", "year": "2018" } ]
[ { "formula_coordinates": [ 3, 162.73, 587.62, 341.94, 30.2 ], "formula_id": "formula_0", "formula_text": "q(x 1:T |x 0 ) = T t=1 q(x t |x t-1 ), q(x t |x t-1 ) = N (x t ; √ α t x t-1 , β t I)(1)" }, { "formula_coordinates": [ 3, 215.41, 696.49, 289.26, 23.23 ], "formula_id": "formula_1", "formula_text": "µ θ (x t , t) = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t)(2)" }, { "formula_coordinates": [ 4, 166.5, 100.14, 338.17, 23.22 ], "formula_id": "formula_2", "formula_text": "x t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + σ t z, where z ∼ N (0, I).(3)" }, { "formula_coordinates": [ 4, 220.16, 242.27, 284.51, 22.34 ], "formula_id": "formula_3", "formula_text": "x = ∆ • clip(⌊ x ∆ ⌉ + Z, 0, 2 b -1) -Z ,(4)" }, { "formula_coordinates": [ 4, 134.47, 267.19, 187.39, 14.51 ], "formula_id": "formula_4", "formula_text": "⌊•⌉ is the round operation, ∆ = max(x)-min(x) 2 b -1" }, { "formula_coordinates": [ 4, 179.71, 409.67, 324.96, 51.63 ], "formula_id": "formula_5", "formula_text": "x t-1 = 1 √ α t x t - β t √ 1 -ᾱt εθ (x t , t) + σ t z (5) = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + ∆ ϵ θ (xt,t) + σ t z." }, { "formula_coordinates": [ 4, 156.78, 693.61, 347.88, 26.06 ], "formula_id": "formula_6", "formula_text": "∆ Y = Ŷ -µ Ŷ σ Ŷ - Y -µ Y σ Y = σ Y ∆ Y -(σ Ŷ -σ Y )Y + σ Ŷ µ Y -σ Y µ Ŷ σ Ŷ σ Y .(6)" }, { "formula_coordinates": [ 5, 106.44, 67.84, 377.76, 142.04 ], "formula_id": "formula_7", "formula_text": "θ (x t , t) 0.5 0.0 0.5 ∆ θ (x t , t) t=199 R 2 =0.98 2.5 0.0 2.5 θ (x t , t) 0.5 0.0 0.5 ∆ θ (x t , t) t=100 R 2 =0.72 1 0 1 θ (x t , t) 1 0 1 ∆ θ (x t , t) t=0 R 2 =0.85 Figure 2:" }, { "formula_coordinates": [ 5, 235.48, 361.54, 269.19, 15.26 ], "formula_id": "formula_8", "formula_text": "∆ ϵ θ (xt,t) = kϵ θ (x t , t) + ∆ ′ ϵ θ (xt,t) .(7)" }, { "formula_coordinates": [ 5, 162.29, 471.2, 342.38, 51.62 ], "formula_id": "formula_9", "formula_text": "x t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ (x t , t) + ∆ ϵ θ (xt,t) + σ t z (8) = 1 √ α t x t - β t √ 1 -ᾱt (1 + k) ϵ θ (x t , t) + ∆ ′ ϵ θ (xt,t) + σ t z." }, { "formula_coordinates": [ 5, 144.77, 648.37, 359.9, 23.59 ], "formula_id": "formula_10", "formula_text": "x t-1 = 1 √ α t x t - β t √ 1 -ᾱt ϵ θ(xt,t) + σ t z - β t √ α t √ 1 -ᾱt (1 + k) ∆ ′ ϵ θ (xt,t) .(9)" }, { "formula_coordinates": [ 6, 258.65, 165.14, 246.02, 15.26 ], "formula_id": "formula_11", "formula_text": "∆ ′ ϵ θ (xt,t) ∼ N (µ q , σ q ).(10)" }, { "formula_coordinates": [ 6, 183.19, 343.24, 321.48, 61.91 ], "formula_id": "formula_12", "formula_text": "σ ′ 2 t + β 2 t α t (1 -ᾱt )(1 + k) 2 σ 2 q = σ 2 t ,(11) σ" }, { "formula_coordinates": [ 6, 188.88, 385.76, 315.79, 27.97 ], "formula_id": "formula_13", "formula_text": "′ 2 t = σ 2 t - β 2 t αt(1-ᾱt)(1+k) 2 σ 2 q , if σ 2 t ≥ β 2 t αt(1-ᾱt)(1+k) 2 σ 2 q 0, otherwise.(12)" }, { "formula_coordinates": [ 6, 443.61, 453.15, 54.11, 16.26 ], "formula_id": "formula_14", "formula_text": "β 2 t αt(1-ᾱt)(1+k)" }, { "formula_coordinates": [ 7, 430.26, 73.52, 66.29, 39.7 ], "formula_id": "formula_15", "formula_text": "SNR Q (W4A4, corrected) SNR Q (W4A4, uncorrected) SNR Q (W4A8, corrected) SNR Q (W4A8, uncorrected) SNR F" }, { "formula_coordinates": [ 7, 252.11, 343.39, 248.41, 24.2 ], "formula_id": "formula_16", "formula_text": "SNR Q (t) = ∥ϵ θ (x t , t)∥ 2 ∥∆ ϵ θ (xt,t) ∥ 2 . (13" }, { "formula_coordinates": [ 7, 500.52, 350.45, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 7, 266.12, 587.99, 238.55, 12.92 ], "formula_id": "formula_18", "formula_text": "SNR F (t) = α 2 t /σ 2 t .(14)" }, { "formula_coordinates": [ 7, 254.89, 665.54, 245.63, 13.91 ], "formula_id": "formula_19", "formula_text": "SNR Q bmin (t) > SNR F (t). (15" }, { "formula_coordinates": [ 7, 500.52, 668.6, 4.15, 8.64 ], "formula_id": "formula_20", "formula_text": ")" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b41", "b18", "b22", "b30", "b6", "b38", "b20", "b4", "b18", "b30", "b17", "b13", "b39", "b35", "b0", "b6", "b35", "b0", "b6", "b2", "b26", "b2", "b26", "b26", "b26", "b6", "b20", "b38", "b41", "b4", "b23", "b8", "b16", "b9", "b3" ], "table_ref": [], "text": "Paucity of labeled data limits the application of supervised learning to various visual learning tasks [36]. As a result, unsupervised [17,19,26] and self-supervised based learning methods [6,33,18,4] have garnered a lot of attention and popularity for their ability to learn from vast unlabeled data. Such methods can be broadly classified into two categories: generative methods and discriminative methods. Generative methods [17,26] train deep neural networks to generate in the input space i.e. the pixel space and hence, are computationally expensive and not necessary for representation learning. On the other hand, discriminative approaches [16,13,34,30,1,6] train deep neural networks to learn representations for pretext tasks using unlabeled data and an objective function. Out of these discriminative based approaches, contrastive learning based methods [30,1,6] have performed significantly well and are an active area of research.\nThe common principle of contrastive learning based methods in an unsupervised setting is to create semantic preserving transformations of the input which are called positives and treat transformations of other samples in a batch as negatives [2,23]. The contrastive loss objective considers every transformed sample as a reference sample, called an anchor, and is then used to train the network architecture to pull the positives (for that anchor) closer to the anchor and push the negatives away from the anchor in latent space [2,23]. The positives are often created using various data augmentation strategies. Supervised Contrastive Learning [23] extended contrastive learning to supervised setting by using the label information and treating the other samples in the batch having the same label as that of the anchor also as positives in addition to the ones produced through data augmentation [23]. For the SupCon loss per sample -L sup i (from equation 2) to decrease, the anchor z i will pull the positive z p but push away the other positives by some extent in the embedding space. TCL loss introduces parameters to reduce this effect and helps improve performance.\nstrategies. It presents a new loss called supervised contrastive loss (abbreviated as SupCon loss) that can be viewed as a loss generalizing to multiple available positives in a batch.\nIn this work, we propose a novel contrastive learning loss objective, which we call Tuned Contrastive Learning (TCL) Loss that can use multiple positives and multiple negatives present in a batch. We show how it can be used in supervised as well as self-supervised settings. TCL loss improves upon the limitations of the SupCon loss: 1. Implicit consideration of positives as negatives and, 2. No provision of regulating hard negative gradient response. TCL loss thus gives better gradient response to hard positives and hard negatives. This leads to small (< 1% in terms of classification accuracy) but consistent improvements in performance over SupCon loss and comprehensive outperformance over cross-entropy loss. Since TCL generalizes to multiple positives, we then present a novel idea of having and using positive triplets (and possibly more) instead of being limited to positive pairs for self-supervised learning. We evaluate our loss function in self-supervised settings without making use of any label information and show how TCL outperforms SimCLR [6] and performs on par with various SOTA self-supervised learning methods [18,33,36,4,20,8,15,9,3]. Our key contributions in the paper are as follows:\n1. We identify and analyse in detail two limitations of the supervised contrastive (SupCon) loss.\n2. We present a novel contrastive loss function called Tuned Contrastive Learning (TCL) loss that generalizes to multiple positives and multiple negatives in a batch, overcomes the described limitations of the SupCon loss and is applicable in both supervised and selfsupervised settings. We mathematically show with clear proofs how our loss's gradient response is better than that of SupCon loss.\n3. We compare TCL loss with SupCon loss (as well as cross-entropy loss) in supervised settings on various classification datasets and show that TCL loss gives consistent improvements in top-1 accuracy over SupCon loss. We empirically show the stability of TCL loss to a range of hyperparameters: network architecture, batch size, projector size and augmentation strategy.\n4. At last, we present a novel idea of having positive triplets (and possibly more) instead of positive pairs and show how TCL can be extended to self-supervised settings. We empirically show that TCL outperforms SimCLR, and performs on par with various SOTA self-supervised learning (SSL) methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b2", "b25", "b7", "b31", "b6", "b35", "b26", "b40", "b34", "b29", "b14", "b20", "b9", "b38", "b4" ], "table_ref": [], "text": "In this section, we cover various popular and recent works in brief involving contrastive learning.\nDeep Metric learning methods originated with the idea of contrastive losses and were introduced with the goal of learning a distance metric between samples in a high-dimensional space [2]. The goal in such methods is to learn a function that maps similar samples to nearby points in this space, and dissimilar samples to distant points. There is often a margin parameter, m, imposing the distance between examples from different classes to be larger than this value of m [2]. The triplet loss [22] and the proposed improvements [7,27] on it used this principle. These methods rely heavily on sophisticated sampling techniques for choosing samples in every batch for better training.\nSimCLR [6], an Info-NCE [30] loss based framework, learns visual representations by increasing the similarity between the embeddings of two augmented views of the input image. Augmented views generally come from a series of transformations like random resizing, cropping, color jittering, and random blurring. Although they make use of multiple negatives, only one positive is available per anchor. They require large batch sizes in order to have more hard negatives in the batch to learn from and boost the performance. SupCon loss [23] applies contrastive learning in supervised setting by basically extending the SimCLR loss to generalize to multiple positives available in a batch and improves upon the cross-entropy loss which lacks robustness to noisy labels [35,29] and has the possibility of poor margins [25,14].\nUnlike SimCLR or SupCon, many SOTA SSL approaches only work with positives (don't require negatives) or use different approach altogether. BYOL [18] uses asymmetric networks with one network using an additional predictor module while the other using exponential moving average (EMA) to update its weights, in order to learn using positive pairs only and prevent collapse. SimSiam [9] uses stop-gradient operation instead of EMA and asymmetric networks to achieve the same goal.\nBarlow Twins [33] objective function on the other hand computes the cross-correlation matrix between the embeddings of two identical networks fed with augmentations of a batch of samples, and tries to make this matrix close to identity. SwAV uses a clustering approach and enforces consistency between the cluster assignments of multiple positives produced through multi-crop strategy [4].\n3 Methodology" }, { "figure_ref": [], "heading": "Supervised Contrastive Learning & Its Issues", "publication_ref": [ "b26", "b26", "b26", "b26", "b26", "b26" ], "table_ref": [], "text": "The framework for Supervised Contrastive Learning consists of three components: a data augmentation module that produces two augmentations for each sample in the batch, an encoder network that maps the augmentations to their corresponding representation vectors and a projection network that produces normalized embeddings for the representation vectors to be fed to the loss function. The projection network is later discarded and the encoder network is used at inference time by training a linear classifier (attached to the frozen encoder) with cross-entropy loss. Section 3.1 of [23] contains more details on this. The SupCon loss is given by the following two equations (refers to L sup out in [23]):\nL sup = i∈I L sup i (1)\nwhere\nL sup i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) p ′ ∈P (i) exp(z i .z p ′ /τ ) + n∈N (i) exp(z i .z n /τ ) )(2)\nHere I denotes the batch of samples obtained after augmentation and so, will be twice the size of the original input batch. i ∈ I denotes a sample (anchor) within it. z i denotes the normalized projection network embedding for the sample i as given by the projector network. P (i) is the set of all positives for the anchor i (except the anchor i itself) i.e. positive from the augmentation module and positives with the same label as anchor i in the batch I. N (i) denotes the set of negatives in the batch such that N (i) ≡ I \\ (P (i) ∪ {i}). As shown in Section 2 of the supplementary material of [23], we have the following lemma:\nLemma 1. The gradient of the SupCon loss per sample -L sup i with respect to the normalized projection network embedding z i is given by:\n∂L sup i ∂z i = 1 τ ( p∈P (i) z p (P s ip -X ip )\nGradient response from positives + n∈N (i)\nz n P s in Gradient response from negatives )(3)\nwhere\nX ip = 1 |P (i)|(4)\nP s ip = exp(z i .z p /τ ) a∈A(i) exp(z i .z a /τ )(5)\nP s in = exp(z i .z n /τ ) a∈A(i) exp(z i .z a /τ )(6)\nNote that A(i) ≡ P (i) ∪ N (i) here. The authors further show in Section 3 of the supplementary [23] that the gradient from a positive while flowing back through the projector into the encoder reduces to almost zero for easy positives and |P s ip -X ip | for a hard positive because of the normalization consideration in the projection network. Similarly, the gradient from a negative reduces to almost zero for easy negatives and |P s in | for a hard negative. We now present and analyse the following two limitations of the SupCon loss:\n1. Implicit consideration of positives as negatives: Having a closer look at the L sup i (equation 2) loss term reveals that the numerator inside the log function considers similarity with one positive p at a time while the denominator consists of similarity terms of the anchor i with all the positives in the batch -the set P (i), thereby implicitly considering all the positives as negatives. A glance at the derivation of Lemma 1 in [23] clearly shows that this leads to the magnitude of the gradient response from a hard positive getting reduced to |X ip -P s ip | instead of simply |X ip |. The term P s ip consists of an exponential term in the numerator and thus can reduce the magnitude of |X ip -P s ip | considerably, especially because the temperature τ is generally chosen to be small. Note that the authors of [23] approximate the numerator of P s ip to 1 while considering the magnitude of |X ip -P s ip | in their supplementary by assuming z i .z p ≈ 0 for a hard positive which might not always be true. Another way to look at this limitation analytically is to observe the log part in the L sup i term. For the loss term to decrease and ideally converge to close to zero, the numerator term inside the log function will encourage the anchor z i to pull the positive z p towards it while the denominator term will encourage it to push away the other positives present in P (i) by some extent, thereby treating the other positives as negatives implicitly." }, { "figure_ref": [], "heading": "No possibility of regulating P s", "publication_ref": [ "b6", "b26" ], "table_ref": [], "text": "in : [6,23] mention that performance in contrastive learning benefits from hard negatives and gradient contribution from hard negatives should be higher. It is easy to observe from equation 6 that the magnitude of the gradient signal from a hard negative -|P s in | in the SupCon loss decreases with batch size and the number of positives in the batch, and can become considerably small, especially since the denominator consists of similarity terms between the anchor and all the positives in the batch which are temperature scaled and exponentiated. This can limit the gradient contribution from hard negatives." }, { "figure_ref": [], "heading": "Tuned Contrastive Learning", "publication_ref": [ "b26" ], "table_ref": [], "text": "In this section, we present our novel contrastive loss function -Tuned Contrastive Learning (TCL) Loss. Note that our representation learning framework remains the same as that of Supervised Contrastive Learning discussed above. The TCL loss is given by the following equations:\nL tcl = i∈I L tcl i (7) L tcl i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) D(z i ) )(8)\nwhere\nD(z i ) = p ′ ∈P (i) exp(z i .z p ′ /τ ) + k 1 ( p ′ ∈P (i) exp(-z i .z p ′ )) + k 2 ( n∈N (i) exp(z i .z n /τ )) (9) k 1 , k 2 ≥ 1(10)\nk 1 and k 2 are scalar parameters that are fixed before training. All other symbols have the same meaning as discussed in the previous section. We now present the following lemma: Lemma 2. The gradient of the TCL loss per sample -L tcl i with respect to the normalized projection network embedding z i is given by:\n∂L tcl i ∂z i = 1 τ ( p∈P (i) z p (P t ip -X ip -Y t ip )\nGradient response from positives\n+ n∈N (i) z n P t in Gradient response from negatives )(11)\nwhere\nX ip = 1 |P (i)|(12)\nP t ip = exp(z i .z p /τ ) D(z i )(13)\nY t ip = τ k 1 exp(-z i .z p ) D(z i )(14)\nP t in = k 2 exp(z i .z n /τ ) D(z i )(15)\nFrom Lemma 2, Theorem 1 and Theorem 2 follow in a straightforward fashion. The proofs for Lemma 2 and the two theorems are provided in our supplementary. Theorem 1. For k 1 , k 2 ≥ 1, the magnitude of the gradient from a hard positive for TCL is strictly greater than the magnitude of the gradient from a hard positive for SupCon and hence, the following result follows:\n|X ip -P t ip + Y t ip | (TCL's hard positive gradient) > |X ip -P s ip | (Supcon's hard positive gradient)(16)\nTheorem 2. For fixed k 1 , the magnitude of the gradient response from a hard negative for TCL loss -P t in strictly increases with k 2 .\nEffects of k 1 and k 2 The authors of SupCon show (in equation 18 in the supplementary of [23]) that the magnitude of gradient response from a hard positive |X ip -P s ip | increases with the number of positives and negatives in the batch. This is basically a result of reducing the value of P s ip , a term that results from having positive similarity terms in the denominator of L sup i . But they approximate the numerator of P s ip to 1 by assuming z i .z p ≈ 0 for a hard positive which might not always be true (especially since τ is typically chosen to be small like 0.1). As evident from the proof of Theorem 1 in our supplementary, we further push this idea and reduce the value of P s ip in SupCon loss to P t ip in TCL loss by having an extra term in the denominator involving\nk 1 -k 1 ( p ′ ∈P (i) exp(-z i .z p ′ ))\nand choosing a large enough value for k 1 . Hence, it reduces the effect of implicit consideration of positives as negatives, the first limitation of SupCon loss discussed in the previous section. Note that having the extra term to increase the gradient response from hard positive is not the same as increasing the gradient response by amplifying the learning rate. This is because for the same and fixed learning rate, TCL loss increases the magnitude of the gradient signal over SupCon loss by changing the coefficient of z p in equation 11 which in turn means changing the gradient direction as well. This leads to consistently better performance as shown in the numerous experiments that we perform. Also, it directly follows from Theorem 2 that k 2 allows to regulate (increase) the gradient signal from a hard negative and thus, overcomes the second limitation of the SupCon loss.\nAugmentation Strategy for Self-Supervised Setting Since TCL loss can use multiple positives, we consider working with positive triplets instead of positive pairs in self-supervised settings. Given a batch B with N samples, we produce augmented batch I of size 3N by producing three augmented views (positives) for each sample in B. This idea can further be extended in different ways to have more positives per anchor. For example, one can think of combining different augmentation strategies to produce multiple views per sample although we limit ourselves positive triplets in this work." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "We evaluate TCL in three stages: 1. Supervised setting, 2. Hyper-parameter stability and 3. Selfsupervised setting. We then present empirical analysis on TCL loss's parameters -k 1 and k 2 and show how we choose their values. All the relevant training details are mentioned in our supplementary." }, { "figure_ref": [], "heading": "Supervised Setting", "publication_ref": [ "b26", "b25", "b32", "b28", "b36", "b12", "b24", "b24", "b26", "b26", "b26" ], "table_ref": [ "tab_0" ], "text": "We start by evaluating TCL in supervised setting first. Since the authors of [23] mention that SupCon loss performs significantly better than triplet loss [22] and N-pair loss [28], we directly compare TCL loss with SupCon and cross-entropy losses on various classification benchmarks including CIFAR-10, CIFAR-100 [24], Fashion MNIST (FMNIST) [31] and ImageNet-100 [12]. The encoder network chosen is ResNet-50 [21] for CIFAR and FMNIST datasets while Resnet-18 [21] for ImageNet dataset (because of memory constraints). The representation vector is the activation of the final pooling layer of the encoder. ResNet-18 and ResNet-34 encoders give 512 dimensional representation vectors while ResNet-50 and above produce 2048 dimensional vectors. The projector network is a MLP with one hidden layer with size being 512 for ResNet-18 and Resnet-34, and 2048 for ResNet-50 and higher networks. The output layer of the projector MLP is 128 dimensional for all the networks. We use the same cross-entropy implementation as used by Supervised Contrastive Learning [23].\nNote that for fair comparison of TCL with Supervised Contrastive Learning, we keep the architecture and all other possible hyper-parameters except the learning rate exactly the same. We also do hyper-parameter tuning significantly more for Supervised Contrastive Learning than for TCL. As a result, we found that our re-implementation of Supervised Contrastive Learning gave better results than what is reported in the paper [23]. For example, on CIFAR-100 our significantly tuned version of SupCon achieves 79.1% top-1 classification accuracy, 2.6% more than what is reported in SupCon paper. As the authors of SupCon [23] mention that 200 epochs of contrastive training is sufficient for training a ResNet-50 on complete ImageNet dataset, our observations for the supervised setting case on relatively smaller datasets like CIFAR, FMNIST and ImageNet-100 are consistent with this finding. We train Resnet-50 (and ResNet-18) for a total of 150 epochs -100 epochs of contrastive training for the encoder and the projector followed by 50 epochs of cross-entropy training for the linear layer. Note that 150 epochs of total training was sufficient for our re-implementation of SupCon loss to achieve better results than reported in the paper (2.6% more on CIFAR-100 and 0.3% more on CIFAR-10). We anyways still provide results for 250 epochs of training in our supplementary. As Table 1 shows, TCL loss consistently performs better than SupCon loss and outperforms cross-entropy loss on all the datasets. " }, { "figure_ref": [], "heading": "Hyper-parameter Stability", "publication_ref": [], "table_ref": [], "text": "We now show the stability of TCL loss to a range of hyper-parameters. We compare TCL loss with SupCon loss on various hyper-parameters -encoder architectures, batch sizes, projection embedding sizes and different augmentations. For all the hyper-parameter experiments we choose CIFAR-100 as the common dataset (unless stated otherwise), set total training epochs to 150 (same as earlier section), temperature τ to 0.1 and use SGD optimizer with momentum=0.9 and weight decay=1e -4." }, { "figure_ref": [], "heading": "Encoder Architecture", "publication_ref": [], "table_ref": [], "text": "We choose 4 encoder architectures of varying sizes-ResNet- " }, { "figure_ref": [ "fig_1" ], "heading": "Batch Size", "publication_ref": [ "b10" ], "table_ref": [], "text": "For comparing TCL loss with SupCon loss on different batch sizes, we choose ResNet-50 as the encoder architecture and AutoAugment [10] data augmentation. As evident from Fig. 2-(a), we observe that TCL loss consistently performs better than SupCon loss on all batch sizes. All the batch sizes mentioned are after performing augmentation. Note that the authors of SupCon loss use an effective batch size of 256 (after augmentation) for CIFAR datasets in their released code 1 . We select batch sizes equal to, smaller and greater than this value for comparison to demonstrate the effectiveness of Tuned Contrastive Learning." }, { "figure_ref": [ "fig_1" ], "heading": "Projection Network Embedding (z i ) Size", "publication_ref": [ "b26", "b10" ], "table_ref": [], "text": "In this section we analyse empirically how SupCon and TCL losses perform on various projection network output embedding sizes. This particular experiment was not explored as stated by the authors of Supervised Contrastive Learning [23]. ResNet-50 is the common encoder used with Auto-Augment [10] data augmentation. As evident from Fig. 2-(c), we observe that TCL loss achieves consistent improvements in top-1 test classification accuracy over SupCon loss for various projector output sizes. We observe that 64 performs the worst while 128, 256, 512 and 1024 give similar results. 2048 performs the best for both with TCL loss achieving 1.2% higher accuracy than SupCon loss for this size." }, { "figure_ref": [ "fig_1" ], "heading": "Augmentations", "publication_ref": [ "b10", "b6" ], "table_ref": [], "text": "We choose two augmentation strategies -AutoAugment and SimAugment for comparisons. AutoAugment [10] is a two-stage augmentation policy trained with reinforcement learning and gives stronger (aggressive and diverse) augmentations. SimAugment [6] is relatively a weaker augmentation strategy used in SimCLR that applies simple transformations like random flips, rotations, color jitters and gaussian blurring. We don't use gaussian blur in our implementation of SimAugment and train for 100 extra epochs i.e. 250 epochs while using it. Fig. 2-(d) shows that TCL loss performs better than SupCon loss with both augmentations although, the gain is more with AutoAugment -the stronger augmentation strategy." }, { "figure_ref": [], "heading": "Self-Supervised Setting", "publication_ref": [ "b41", "b11", "b6", "b20", "b3", "b41", "b38", "b8", "b20", "b9", "b23", "b8", "b4" ], "table_ref": [ "tab_2", "tab_2" ], "text": "In this section we evaluate TCL without any labels in self-supervised setting by making use of positive triplets as described earlier. We compare TCL with various SOTA SSL methods as shown in Table 2.\nThe results for these methods are taken from the works of [36], [11]. The datasets used for comparison are CIFAR 10, CIFAR-100 and ImageNet-100. ResNet-18 is the common encoder used for every method. For CIFAR-10 and CIFAR-100 every method uses 1000 epochs of contrastive pre-training including TCL. For ImageNet-100, every method does 400 epochs of contrastive pre-training. Table 2 shows the top-1 accuracy achieved by various methods on the three datasets. TCL performs consistently better than SimCLR [6] and performs on par with various other methods. Note that methods like BYOL [18], VICReg [3], ARB [36] and Barlow-Twins [33] use much larger projector size for output embedding and extra hidden layers in the projector MLP to get better performance while MOCO V2 [8] uses a queue size of 32,768 to get better results. Few of the methods like BYOL [18], SimSiam [9], MOCO V2 [20,8] also maintain two networks and hence, effectively use double the number of parameters and are memory intensive. We also add the results of supervised TCL that can make use of labels as it is generalizable to any number of positives. Supervised TCL achieves significantly better results than all other SSL methods. SwAV does use a multi-crop strategy to create multiple augmentations but is not extended to supervised setting to use the labels [4]." }, { "figure_ref": [ "fig_2", "fig_2", "fig_2", "fig_2" ], "heading": "Analyzing and Choosing k 1 and k 2 for TCL", "publication_ref": [], "table_ref": [], "text": "As we discussed earlier in Section 3.2, k 1 helps in increasing the magnitude of positive gradient from positives while k 2 helps in regulating (increasing) the gradient from negatives. We verify our claims empirically and show how we go about choosing their values for training.\nAnalyzing effects of k 1 We calculate the mean gradient from all positives (expressions from equation 16) per anchor averaged across the batch and plot the values for SupCon loss and TCL loss over the course of training of ResNet-50 on CIFAR-100 for 100 epochs. As evident from Fig. 3-(a), increasing the value of k 1 increases the magnitude of gradient response from positives. We also analyze how this correlates with the top-1 accuracy in Fig. 3-(b). As we see for small values of k 1 , the top-1 accuracy remains more or less the same as that of SupCon loss. As we increase it further, the gradient from positives increase leading to gains in top-1 accuracy. The top-1 accuracy reaches a peak and then starts to drop with further increase in k 1 . We hypothesize that this drop is because very large values of k 1 start affecting the gradient response from negatives (equations 15 and 9). We verify this hypothesis while analyzing k 2 .\nAnalyzing effects of k 2 We calculate the mean gradient from all negatives (expressions from equations 6 and 15) per anchor averaged across the batch for the same setting as above and plot the values for SupCon loss and TCL loss. As we see in Fig. 3-(c), TCL loss's gradient lags behind SupCon loss's gradient by some margin for k 1 = 50000 and k 2 = 1. This value of k 1 actually leads to a top-1 accuracy of 71.8%, a drop in performance. When we start increasing the value of k 2 , the gradient response from negatives increase for TCL loss. Fig. 3-(d) shows that by increasing k 2 to 3 while k 1 = 50000, the gap between gradient (from negatives) curves of TCL loss and SupCon loss vanishes. We also observe that the top-1 accuracy increases back to 76.2%, the best possible accuracy that we got for this setting.\nChoosing k 1 and k 2 We observe that a value of k 1 in the range of 10 3 to 10 4 works the best with k 1 = 4 × 10 3 or 5 × 10 3 almost always working on all datasets and configurations we experimented with. We generally start with these two values or otherwise with 2 × 10 3 and increase it in steps of 2000 till 8 × 10 3 . We also observed during our experiments that choosing any value less than 5 × 10 3 always gave improvements in performance over SupCon loss. For most of our experiments we set k 1 to 4 × 10 3 or 5 × 10 3 and get the desired performance boost in a single run. We found k 2 to be useful to compensate for the reduction in the value of P t in caused by increasing k 1 and especially in self-supervised settings where hard negative gradient contribution is important. For setting k 2 , we fix k 1 (which itself gives boost in performance) and increase k 2 in steps of 0.1 or 0.2 to see if we can get further improvement. We provide values for k 1 and k 2 for all our experiments in the supplementary. As we see, we generally keep k 2 = 1 for supervised settings but we do sometimes set it to a value slightly bigger than 1. We set k 2 to a higher value in self-supervised settings as compared to supervised settings to get higher gradient contribution from hard negatives. Increasing k 1 didn't help much in boosting the performance in self-supervised setting (as we only had two positives per anchor) and so we set it to 1. Increasing k 2 also increases the gradient response from positives to some extent by decreasing P t ip (equation 13) and so, we found it sufficient to increase only k 2 and set k 1 to 1 in self-supervised setting." }, { "figure_ref": [], "heading": "Conclusion & Limitations", "publication_ref": [], "table_ref": [], "text": "In this work, we have presented a novel contrastive loss function called Tuned Contrastive Learning (TCL) loss that generalizes to multiple positives and multiple negatives present in a batch and is applicable to both supervised and self-supervised settings. We showed mathematically how its gradient response to hard negatives and hard-positives is better than that of SupCon loss. We evaluated TCL loss in supervised and self-supervised settings and showed that it performs on par with existing state-of-the-art supervised and self-supervised learning methods. We also showed empirically the stability of TCL loss to a range of hyper-parameter settings.\nA limitation of our work is that the proposed loss objective introduces two extra parameters k 1 and k 2 , for which the values are chosen heuristically. Future direction can include works that come up with loss objectives that provide the properties of TCL loss out of the box without introducing any extra parameters." }, { "figure_ref": [], "heading": "A Proofs for Theoretical Results", "publication_ref": [ "b26", "b26", "b26", "b26" ], "table_ref": [], "text": "Proof for Lemma 1: Section 2 of the supplementary material of SupCon [23] gives a clear proof for Lemma 1 (refer to the derivation of L sup out in that section). Lemma 2. The gradient of the TCL loss per sample -L tcl i with respect to the normalized projection network embedding z i is given by:\n∂L tcl i ∂z i = 1 τ ( p∈P (i) z p (P t ip -X ip -Y t ip )\nGradient response from positives + n∈N (i)\nz n P t in Gradient response from negatives )(17)\nwhere\nX ip = 1 |P (i)|(18)\nP t ip = exp(z i .z p /τ ) D(z i )(19)\nY t ip = τ k 1 exp(-z i .z p ) D(z i )(20)\nP t in = k 2 exp(z i .z n /τ ) D(z i )(21)\nProof.\nL tcl i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) D(z i ) )(22)\n=⇒ L tcl i = -1 |P (i)| p∈P (i) ( z i .z p τ -log(D(z i ))(23)\n=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - ( p ′ ∈P (i) z p ′ exp(z i .z p ′ /τ ) D(z i ) + τ k 1 ( p ′ ∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(24)\n=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p∈P (i) ( p ′ ∈P (i) z p ′ exp(z i .z p ′ /τ )) D(z i ) + p∈P (i) τ k 1 ( p ′ ∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - p∈P (i) k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(25)\n=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p ′ ∈P (i) ( p∈P (i) z p ′ exp(z i .z p ′ /τ )) D(z i ) + p ′ ∈P (i) τ k 1 ( p∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - p∈P (i) k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(26)\n=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p ′ ∈P (i) (|P (i)|z p ′ exp(z i .z p ′ /τ )) D(z i ) + p ′ ∈P (i) τ k 1 (|P (i)|z p ′ exp(-z i .z p ′ )) D(z i ) - |P (i)|k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i ) (27) =⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p∈P (i) (|P (i)|z p exp(z i .z p /τ )) D(z i ) + p∈P (i) τ k 1 (|P (i)|z p exp(-z i .z p )) D(z i ) - |P (i)|k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(28)\n=⇒ ∂L tcl i ∂z i = -1 τ p∈P (i) z p |P (i)| - p∈P (i) (z p exp(z i .z p /τ )) D(z i ) + p∈P (i) τ k 1 (z p exp(-z i .z p )) D(z i ) - k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(29)\n=⇒ ∂L tcl i ∂z i = 1 τ p∈P (i) z p exp(z i .z p /τ ) D(z i ) - 1 |P (i)| - τ k 1 exp(-z i .z p ) D(z i ) + n∈N (i) z n k 2 exp(z i .z n /τ ) D(z i )(30)\nThis completes the proof.\nTheorem 1. For k 1 , k 2 ≥ 1, the magnitude of the gradient from a hard positive for TCL is strictly greater than the magnitude of the gradient from a hard positive for SupCon and hence, the following result follows: \n|X ip -P t ip + Y t ip |(\nProof. As the authors of [23] show in Section 3 of their supplementary (we also mention the same in our main paper in Section 3.1) that the gradient from a positive while flowing back through the projector into the encoder reduces to almost zero for easy positives and |P s ip -X ip | for a hard positive because of the normalization consideration in the projection network combined with the assumption that z i .z p ≈ 1 for easy positives and z i .z p ≈ 0 for hard positives. Proceeding in a similar manner, it is straightforward to see that the gradient response from a hard positive in case of TCL is |P t ip -X ip -Y t ip |. We don't prove this explicitly again since the derivation will be identical to what authors [23] have already shown. One can refer section 3 of the supplementary of [23] for details. Now, because k 1 , k 2 ≥ 1, it is easy to observe from equations 5 and 13 of our main paper that,\nP t ip < P s ip(32)\nAnd from equation 14 of our main paper:\nY t ip > 0(33)\nHence, the result follows. This completes the proof.\nTheorem 2. For fixed k 1 , the magnitude of the gradient response from a hard negative for TCL -P t in increases strictly with k 2 .\nProof. The starting learning rate for contrastive training is 1e -1 for all the encoders except ResNet-101 for which we used a value of 9e -2. k 1 = 5000 and k 2 = 1 are the common values used for all the encoders.\nP t in = k 2 exp(z i .z n /τ ) D(z i )(34)" }, { "figure_ref": [], "heading": "B.2.2 Batch Size", "publication_ref": [], "table_ref": [], "text": "For batch sizes=32, 64, 128, 256, 512 and 1024 we set the starting learning rates for contrastive training to 8e -3, 9e -3, 1e -1, 2e -1, 5e -1 and 1 respectively. For batch size of 32 we used k 1 = 5000 and k 2 = 1. For batch size of 64 we used k 1 = 7500 and k 2 = 1. For batch size of 128 we used k 1 = 5000 and k 2 = 1. For batch sizes of 256, 512 and 1024 we used k 1 = 4000 and k 2 = 1." }, { "figure_ref": [], "heading": "B.2.3 Projection Network Embedding (z i ) Size", "publication_ref": [], "table_ref": [], "text": "We used a common starting learning rate of 1e -1 with k 1 = 5000 and k 2 = 1 for all the projector output sizes." }, { "figure_ref": [], "heading": "B.2.4 Augmentations", "publication_ref": [ "b10", "b6" ], "table_ref": [], "text": "For AutoAugment [10] method, we use a learning rate of 1e -1 with k 1 = 5000 and k 2 = 1. For SimAugment [6], we use a learning rate of 1e -1 with k 1 = 5000 and k 2 = 1.2." }, { "figure_ref": [], "heading": "B.3 Self-Supervised Setting", "publication_ref": [ "b11", "b37", "b6", "b11" ], "table_ref": [], "text": "For the self-supervised setting, we reuse the code provided by [11] and we are thankful to them for providing all the required details. The projector used for TCL is exactly the same as SimCLR for fair comparison and consists of one hidden layer of size 2048 and output size of 256. ResNet-18 is the common encoder used for all the methods. We use SGD optimizer with momentum=0.9 wrapped with LARS optimizer [32] and weight deacy of 1e -4. Augmentation used is SimAugment [6] and is done in the same manner as [11]. Gaussian blur is used for self-supervised setting. We use NVIDIA-GeForce-RTX-2080-Ti, NVIDIA-TITAN-RTX and NVIDIA-A100-SXM4-80GB GPUs for our experiments." }, { "figure_ref": [], "heading": "CIFAR-10 [24]", "publication_ref": [ "b28", "b12" ], "table_ref": [], "text": "All methods do 1000 epochs of contrastive pre-training on CIFAR-10 and images are reshaped to 32 × 32 in the data augmentation pipeline. We use batch size=256, same as SimCLR.\nFor TCL, we use a starting learning rate of 4e -1 for contrastive pre-training with k 1 = 1 and k 2 = 1.5.\nCIFAR-100 [24] All methods do 1000 epochs of contrastive pre-training on CIFAR-100 and images are reshaped to 32 × 32 in the data augmentation pipeline. We use batch size=256, same as SimCLR.\nFor TCL, we use a starting learning rate of 4e -1 for contrastive pre-training with k 1 = 1 and k 2 = 1.5.\nImageNet-100 [12] All methods do 400 epochs of contrastive pre-training on ImageNet-100 and images are rescaled to a size of 224 × 224. We use batch size=256, same as used by SimCLR. For TCL, we use a starting learning rate of 4e -1 for contrastive pre-training with k 1 = 1 and k 2 = 1.5." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "It is now easy to observe that for a fixed k 1 , P t in increases strictly with k 2 . This completes the proof." }, { "figure_ref": [], "heading": "B Training Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "B.1 Supervised Setting", "publication_ref": [ "b26", "b10", "b36", "b24" ], "table_ref": [], "text": "We first present the common training details used for each dataset experiment in the supervised setting for SupCon [23] and TCL. Except for the contrastive training learning rate, every other detail presented is common for SupCon and TCL. As mentioned in our main paper, we train for a total of 150 epochs which involves 100 epochs of contrastive training for the encoder and the projector, and 50 epochs of cross-entropy training for the linear layer for both the losses. AutoAugment [10] is the common data augmentation method used except for FMNIST [31] for which we used a simple augmentation strategy consisting of random cropping and horizontal flip. We use cosine annealing based learning rate scheduler and SGD optimizer with momentum=0.9 and weight decay=1e -4 for both contrastive and linear layer training. Temperature τ is set to 0.1. For linear layer training, the starting learning rate is 5e -1. ResNet-50 [21] is the common encoder architecture used. We use NVIDIA-GeForce-RTX-2080-Ti, NVIDIA-TITAN-RTX and NVIDIA-A100-SXM4-80GB GPUs for our experiments." }, { "figure_ref": [], "heading": "CIFAR-10 [24]", "publication_ref": [ "b28", "b36", "b12", "b26" ], "table_ref": [], "text": "Image size is resized to 32 × 32 in the data augmentation pipeline. We use a batch size of 128. For both SupCon and TCL we use a starting learning rate of 1e -1 for contrastive training. We set k 1 = 5000 and k 2 = 1 for TCL. CIFAR-100 [24] Image size is resized to 32 × 32 in the data augmentation pipeline. We use a batch size of 256. For both SupCon and TCL we use a starting learning rate of 2e -1 for contrastive training. We set k 1 = 4000 and k 2 = 1 for TCL.\nFMNIST [31] Image size is resized to 28 × 28 in the data augmentation pipeline. We use a batch size of 128. For both SupCon and TCL we use a starting learning rate of 9e -2 for contrastive training. We set k 1 = 5000 and k 2 = 1 for TCL.\nImageNet-100 [12] Images are resized to 224 × 224 in the data-augmentation pipeline and batch size of 256 is used. For SupCon we use a starting learning rate of 2e -1 for contrastive training while 3e -1 for TCL. We set k 1 = 4000 and k 2 = 1 for TCL.\nFor CIFAR-100 dataset and batch size of 128, we also ran the experiment 30 times to get 95% confidence intervals for top-1 accuracies of SupCon and TCL. For SupCon we got 74.79 ± 0.23 while for TCL we got 75.72 ± 0.16 as the confidence intervals. We also present results for 250 epochs of training constituted by 200 epochs of contrastive training and 50 epochs of linear layer training in Table 3. As we see, TCL performs consistently better than SupCon [23]. Note that we didn't see any performance improvement for FMNIST dataset for either SupCon or TCL by running them for 250 epochs." }, { "figure_ref": [], "heading": "B.2 Hyper-parameter Stability", "publication_ref": [], "table_ref": [], "text": "For the hyper-parameter stability experiments we have presented most of the details in the main paper. We present the learning rates and values of k 1 and k 2 used for TCL. Remaining details are the same as the supervised setting experiments." } ]
In recent times, contrastive learning based loss functions have become increasingly popular for visual self-supervised representation learning owing to their state-ofthe-art (SOTA) performance. Most of the modern contrastive learning methods generalize only to one positive and multiple negatives per anchor. A recent state-ofthe-art, supervised contrastive (SupCon) loss, extends self-supervised contrastive learning to supervised setting by generalizing to multiple positives and negatives in a batch and improves upon the cross-entropy loss. In this paper, we propose a novel contrastive loss function -Tuned Contrastive Learning (TCL) loss, that generalizes to multiple positives and negatives in a batch and offers parameters to tune and improve the gradient responses from hard positives and hard negatives. We provide theoretical analysis of our loss function's gradient response and show mathematically how it is better than that of SupCon loss. We empirically compare our loss function with SupCon loss and cross-entropy loss in supervised setting on multiple classification-task datasets to show its effectiveness. We also show the stability of our loss function to a range of hyper-parameter settings. Unlike SupCon loss which is only applied to supervised setting, we show how to extend TCL to self-supervised setting and empirically compare it with various SOTA selfsupervised learning methods. Hence, we show that TCL loss achieves performance on par with SOTA methods in both supervised and self-supervised settings.
Tuned Contrastive Learning
[ { "figure_caption": "Figure 1 :1Figure 1: Figure illustrates intuitively how TCL loss differs from SupCon loss [23]. For the SupCon loss per sample -L sup", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: SupCon vs TCL losses on a range of hyper-parameters. (a). batch size (top left) (b). encoder architecture (top right) (c). projector output dimensions/size (bottom left) (d). augmentation method (bottom right)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Analysis of k 1 and k 2 (a). plot of mean gradient from positives for SupCon and TCL (at various values of k 1 ) (top left) (b). top-1 accuracy vs k 1 on CIFAR-100 (top right) (c). plot of mean gradient from negatives for SupCon and TCL (k 1 = 50000 and k 2 = 1) (bottom left) (d). plot of mean gradient from negatives for SupCon and TCL (k 1 = 50000 and k 2 = 3) (bottom right)", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "TCL's hard positive gradient) > |X ip -P s ip | (Supcon's hard positive gradient)", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Comparisons of top-1 accuracies of TCL with SupCon and cross-entropy loss in supervised settings. The values in parenthesis for SupCon denote the values presented in their paper.", "figure_data": "DatasetCross-Entropy SupConTCLCIFAR-1095.096.3 (96.0) 96.4CIFAR-10075.379.1 (76.5) 79.8FashionMNIST 94.595.595.7ImageNet-10084.285.986.7", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of top-1 accuracy of TCL with various SSL methods. Values in bold show the best performing method.", "figure_data": "MethodProjector Size CIFAR-10 CIFAR-100 ImageNet-100BYOL[18]409692.670.280.1DINO[5]25689.266.474.8SimSiam[9]204890.565.977.0MOCO V2[20, 8]25692.969.578.2ReSSL[37]25690.665.876.6VICReg[3]204890.168.579.2SwAV[4]25689.264.774.3W-MSE[15]25688.261.369.1ARB[36]25691.868.274.9ARB[36]204892.269.679.5Barlow-Twins[33]25687.457.967.2Barlow-Twins[33]204889.669.278.6SimCLR[6]25690.765.577.5TCL (Self-Supervised) 25691.666.777.9TCL (Supervised)12895.877.586.7", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparisons of top-1 accuracies of TCL with SupCon in supervised setting for 250 epochs of training.", "figure_data": "DatasetSupCon TCLCIFAR-1096.796.8CIFAR-10081.081.6FashionMNIST 95.595.7ImageNet-10086.587.1B.2.1 Encoder Architecture", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Chaitanya Animesh; Manmohan Chandraker
[ { "authors": "Philip Bachman; Devon Hjelm; William Buchwalter", "journal": "", "ref_id": "b0", "title": "Learning representations by maximizing mutual information across views", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b1", "title": "", "year": "2019" }, { "authors": "Randall Balestriero; Mark Ibrahim; Vlad Sobal; Ari Morcos; Shashank Shekhar; Tom Goldstein; Florian Bordes; Adrien Bardes; Gregoire Mialon; Yuandong Tian; Avi Schwarzschild; Andrew Gordon Wilson; Jonas Geiping; Quentin Garrido; Pierre Fernandez; Amir Bar; Hamed Pirsiavash; Yann Lecun; Micah Goldblum", "journal": "", "ref_id": "b2", "title": "A cookbook of self-supervised learning", "year": "2023" }, { "authors": "Adrien Bardes; Jean Ponce; Yann Lecun", "journal": "", "ref_id": "b3", "title": "VICReg: Variance-invariance-covariance regularization for self-supervised learning", "year": "2022" }, { "authors": "Mathilde Caron; Ishan Misra; Julien Mairal; Priya Goyal; Piotr Bojanowski; Armand Joulin", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "2020" }, { "authors": "Mathilde Caron; Hugo Touvron; Ishan Misra; Hervé Jégou; Julien Mairal; Piotr Bojanowski; Armand Joulin", "journal": "", "ref_id": "b5", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "Ting Chen; Simon Kornblith; Mohammad Norouzi; Geoffrey Hinton", "journal": "PMLR", "ref_id": "b6", "title": "A simple framework for contrastive learning of visual representations", "year": "2020-07" }, { "authors": "Weihua Chen; Xiaotang Chen; Jianguo Zhang; Kaiqi Huang", "journal": "", "ref_id": "b7", "title": "Beyond triplet loss: a deep quadruplet network for person re-identification", "year": "2017" }, { "authors": "Xinlei Chen; Haoqi Fan; Ross Girshick; Kaiming He", "journal": "", "ref_id": "b8", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "Xinlei Chen; Kaiming He", "journal": "", "ref_id": "b9", "title": "Exploring simple siamese representation learning", "year": "2021-06" }, { "authors": "Barret Ekin D Cubuk; Dandelion Zoph; Vijay Mane; Quoc V Vasudevan; Le", "journal": "", "ref_id": "b10", "title": "Autoaugment: Learning augmentation policies from data", "year": "2018" }, { "authors": "Guilherme Turrisi Da Victor; Enrico Costa; Moin Fini; Nicu Nabi; Elisa Sebe; Ricci", "journal": "Journal of Machine Learning Research", "ref_id": "b11", "title": "sololearn: A library of self-supervised methods for visual representation learning", "year": "2022" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b12", "title": "Imagenet: A largescale hierarchical image database", "year": "2009" }, { "authors": "Carl Doersch; Abhinav Gupta; Alexei A Efros", "journal": "", "ref_id": "b13", "title": "Unsupervised visual representation learning by context prediction", "year": "2015" }, { "authors": "Gamaleldin Elsayed; Dilip Krishnan; Hossein Mobahi; Kevin Regan; Samy Bengio", "journal": "", "ref_id": "b14", "title": "Large margin deep networks for classification", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b15", "title": "", "year": "2018" }, { "authors": "Aleksandr Ermolov; Aliaksandr Siarohin; Enver Sangineto; Nicu Sebe", "journal": "", "ref_id": "b16", "title": "Whitening for self-supervised representation learning", "year": "2020" }, { "authors": "Spyros Gidaris; Praveer Singh; Nikos Komodakis", "journal": "", "ref_id": "b17", "title": "Unsupervised representation learning by predicting image rotations", "year": "2018" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b18", "title": "Generative adversarial nets", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b19", "title": "", "year": "2014" }, { "authors": "Jean-Bastien Grill; Florian Strub; Florent Altché; Corentin Tallec; Pierre Richemond; Elena Buchatskaya; Carl Doersch; Bernardo Avila Pires; Zhaohan Guo; Mohammad Gheshlaghi Azar; Bilal Piot; Remi Munos; Michal Valko", "journal": "", "ref_id": "b20", "title": "Bootstrap your own latenta new approach to self-supervised learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b21", "title": "", "year": "2020" }, { "authors": "R Hadsell; S Chopra; Y Lecun", "journal": "", "ref_id": "b22", "title": "Dimensionality reduction by learning an invariant mapping", "year": "2006" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b23", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b24", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Elad Hoffer; Nir Ailon", "journal": "Springer", "ref_id": "b25", "title": "Deep metric learning using triplet network", "year": "2015-10-12" }, { "authors": "Prannay Khosla; Piotr Teterwak; Chen Wang; Aaron Sarna; Yonglong Tian; Phillip Isola; Aaron Maschinot; Ce Liu; Dilip Krishnan", "journal": "", "ref_id": "b26", "title": "Supervised contrastive learning", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b27", "title": "", "year": "2020" }, { "authors": "A Krizhevsky; Hinton", "journal": "", "ref_id": "b28", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Weiyang Liu; Yandong Wen; Zhiding Yu; Meng Yang", "journal": "", "ref_id": "b29", "title": "Large-margin softmax loss for convolutional neural networks", "year": "2017" }, { "authors": "Subhransu Maji; Esa Rahtu; Juho Kannala; Matthew Blaschko; Andrea Vedaldi", "journal": "", "ref_id": "b30", "title": "Finegrained visual classification of aircraft", "year": "2013" }, { "authors": "Hyun Oh Song; Yu Xiang; Stefanie Jegelka; Silvio Savarese", "journal": "", "ref_id": "b31", "title": "Deep metric learning via lifted structured feature embedding", "year": "2016" }, { "authors": "Kihyuk Sohn", "journal": "", "ref_id": "b32", "title": "Improved deep metric learning with multi-class n-pair loss objective", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b33", "title": "", "year": "2016" }, { "authors": "Sainbayar Sukhbaatar; Joan Bruna; Manohar Paluri; Lubomir Bourdev; Rob Fergus", "journal": "", "ref_id": "b34", "title": "Training convolutional networks with noisy labels", "year": "2015" }, { "authors": "Aäron Van Den Oord; Yazhe Li; Oriol Vinyals", "journal": "", "ref_id": "b35", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "Han Xiao; Kashif Rasul; Roland Vollgraf", "journal": "", "ref_id": "b36", "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "year": "2017" }, { "authors": "Yang You; Igor Gitman; Boris Ginsburg", "journal": "", "ref_id": "b37", "title": "Large batch training of convolutional networks", "year": "2017" }, { "authors": "Jure Zbontar; Li Jing; Ishan Misra; Yann Lecun; Stéphane Deny", "journal": "PMLR", "ref_id": "b38", "title": "Barlow twins: Selfsupervised learning via redundancy reduction", "year": "2021" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros", "journal": "Springer", "ref_id": "b39", "title": "Colorful image colorization", "year": "2016" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros", "journal": "", "ref_id": "b40", "title": "Split-brain autoencoders: Unsupervised learning by cross-channel prediction", "year": "2017" }, { "authors": "Shaofeng Zhang; Lyn Qiu; Feng Zhu; Junchi Yan; Hengrui Zhang; Rui Zhao; Hongyang Li; Xiaokang Yang", "journal": "", "ref_id": "b41", "title": "Align representations with base: A new approach to self-supervised learning", "year": "2022-06" }, { "authors": "Mingkai Zheng; Shan You; Fei Wang; Chen Qian; Changshui Zhang; Xiaogang Wang; Chang Xu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b42", "title": "Ressl: Relational self-supervised learning with weak augmentation", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 271.48, 561.14, 233.19, 22.81 ], "formula_id": "formula_0", "formula_text": "L sup = i∈I L sup i (1)" }, { "formula_coordinates": [ 3, 150.75, 605.34, 353.91, 27.27 ], "formula_id": "formula_1", "formula_text": "L sup i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) p ′ ∈P (i) exp(z i .z p ′ /τ ) + n∈N (i) exp(z i .z n /τ ) )(2)" }, { "formula_coordinates": [ 4, 174.25, 109.6, 140.47, 30.01 ], "formula_id": "formula_2", "formula_text": "∂L sup i ∂z i = 1 τ ( p∈P (i) z p (P s ip -X ip )" }, { "formula_coordinates": [ 4, 341.31, 117.02, 163.36, 38.89 ], "formula_id": "formula_3", "formula_text": "z n P s in Gradient response from negatives )(3)" }, { "formula_coordinates": [ 4, 278.08, 193.42, 226.59, 22.31 ], "formula_id": "formula_4", "formula_text": "X ip = 1 |P (i)|(4)" }, { "formula_coordinates": [ 4, 246.55, 224.16, 258.12, 24.72 ], "formula_id": "formula_5", "formula_text": "P s ip = exp(z i .z p /τ ) a∈A(i) exp(z i .z a /τ )(5)" }, { "formula_coordinates": [ 4, 246.14, 257.14, 258.53, 24.72 ], "formula_id": "formula_6", "formula_text": "P s in = exp(z i .z n /τ ) a∈A(i) exp(z i .z a /τ )(6)" }, { "formula_coordinates": [ 4, 274.93, 702.67, 229.74, 22.13 ], "formula_id": "formula_7", "formula_text": "L tcl = i∈I L tcl i (7) L tcl i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) D(z i ) )(8)" }, { "formula_coordinates": [ 5, 134.26, 118.88, 370.41, 36.37 ], "formula_id": "formula_8", "formula_text": "D(z i ) = p ′ ∈P (i) exp(z i .z p ′ /τ ) + k 1 ( p ′ ∈P (i) exp(-z i .z p ′ )) + k 2 ( n∈N (i) exp(z i .z n /τ )) (9) k 1 , k 2 ≥ 1(10)" }, { "formula_coordinates": [ 5, 172.82, 223.67, 158.85, 28.84 ], "formula_id": "formula_9", "formula_text": "∂L tcl i ∂z i = 1 τ ( p∈P (i) z p (P t ip -X ip -Y t ip )" }, { "formula_coordinates": [ 5, 333.33, 229.91, 171.34, 38.9 ], "formula_id": "formula_10", "formula_text": "+ n∈N (i) z n P t in Gradient response from negatives )(11)" }, { "formula_coordinates": [ 5, 278.08, 285.86, 226.59, 22.31 ], "formula_id": "formula_11", "formula_text": "X ip = 1 |P (i)|(12)" }, { "formula_coordinates": [ 5, 265.35, 312.4, 239.31, 23.23 ], "formula_id": "formula_12", "formula_text": "P t ip = exp(z i .z p /τ ) D(z i )(13)" }, { "formula_coordinates": [ 5, 259.45, 338.95, 245.22, 23.22 ], "formula_id": "formula_13", "formula_text": "Y t ip = τ k 1 exp(-z i .z p ) D(z i )(14)" }, { "formula_coordinates": [ 5, 259.46, 365.5, 245.21, 23.23 ], "formula_id": "formula_14", "formula_text": "P t in = k 2 exp(z i .z n /τ ) D(z i )(15)" }, { "formula_coordinates": [ 5, 210.25, 468.34, 294.42, 27.6 ], "formula_id": "formula_15", "formula_text": "|X ip -P t ip + Y t ip | (TCL's hard positive gradient) > |X ip -P s ip | (Supcon's hard positive gradient)(16)" }, { "formula_coordinates": [ 5, 370.79, 613.28, 134.37, 11.15 ], "formula_id": "formula_16", "formula_text": "k 1 -k 1 ( p ′ ∈P (i) exp(-z i .z p ′ ))" }, { "formula_coordinates": [ 13, 172.82, 159.54, 158.85, 28.84 ], "formula_id": "formula_17", "formula_text": "∂L tcl i ∂z i = 1 τ ( p∈P (i) z p (P t ip -X ip -Y t ip )" }, { "formula_coordinates": [ 13, 342.74, 165.78, 161.93, 38.89 ], "formula_id": "formula_18", "formula_text": "z n P t in Gradient response from negatives )(17)" }, { "formula_coordinates": [ 13, 278.08, 222.96, 226.59, 22.31 ], "formula_id": "formula_19", "formula_text": "X ip = 1 |P (i)|(18)" }, { "formula_coordinates": [ 13, 265.35, 250.75, 239.31, 23.22 ], "formula_id": "formula_20", "formula_text": "P t ip = exp(z i .z p /τ ) D(z i )(19)" }, { "formula_coordinates": [ 13, 259.45, 278.53, 245.22, 23.23 ], "formula_id": "formula_21", "formula_text": "Y t ip = τ k 1 exp(-z i .z p ) D(z i )(20)" }, { "formula_coordinates": [ 13, 259.46, 306.31, 245.21, 23.22 ], "formula_id": "formula_22", "formula_text": "P t in = k 2 exp(z i .z n /τ ) D(z i )(21)" }, { "formula_coordinates": [ 13, 225.08, 352.96, 279.58, 27.27 ], "formula_id": "formula_23", "formula_text": "L tcl i = -1 |P (i)| p∈P (i) log( exp(z i .z p /τ ) D(z i ) )(22)" }, { "formula_coordinates": [ 13, 213.59, 385.48, 291.08, 27.27 ], "formula_id": "formula_24", "formula_text": "=⇒ L tcl i = -1 |P (i)| p∈P (i) ( z i .z p τ -log(D(z i ))(23)" }, { "formula_coordinates": [ 13, 166.48, 425.52, 338.19, 60.29 ], "formula_id": "formula_25", "formula_text": "=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - ( p ′ ∈P (i) z p ′ exp(z i .z p ′ /τ ) D(z i ) + τ k 1 ( p ′ ∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(24)" }, { "formula_coordinates": [ 13, 140.09, 498.33, 364.58, 65.67 ], "formula_id": "formula_26", "formula_text": "=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p∈P (i) ( p ′ ∈P (i) z p ′ exp(z i .z p ′ /τ )) D(z i ) + p∈P (i) τ k 1 ( p ′ ∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - p∈P (i) k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(25)" }, { "formula_coordinates": [ 13, 140.09, 578.12, 364.58, 65.68 ], "formula_id": "formula_27", "formula_text": "=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p ′ ∈P (i) ( p∈P (i) z p ′ exp(z i .z p ′ /τ )) D(z i ) + p ′ ∈P (i) τ k 1 ( p∈P (i) z p ′ exp(-z i .z p ′ )) D(z i ) - p∈P (i) k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(26)" }, { "formula_coordinates": [ 13, 147.42, 658.22, 357.24, 65.37 ], "formula_id": "formula_28", "formula_text": "=⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p ′ ∈P (i) (|P (i)|z p ′ exp(z i .z p ′ /τ )) D(z i ) + p ′ ∈P (i) τ k 1 (|P (i)|z p ′ exp(-z i .z p ′ )) D(z i ) - |P (i)|k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i ) (27) =⇒ ∂L tcl i ∂z i = -1 τ |P (i)| p∈P (i) z p - p∈P (i) (|P (i)|z p exp(z i .z p /τ )) D(z i ) + p∈P (i) τ k 1 (|P (i)|z p exp(-z i .z p )) D(z i ) - |P (i)|k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(28)" }, { "formula_coordinates": [ 14, 175.96, 169.9, 328.71, 65.37 ], "formula_id": "formula_29", "formula_text": "=⇒ ∂L tcl i ∂z i = -1 τ p∈P (i) z p |P (i)| - p∈P (i) (z p exp(z i .z p /τ )) D(z i ) + p∈P (i) τ k 1 (z p exp(-z i .z p )) D(z i ) - k 2 ( n∈N (i) z n exp(z i .z n /τ )) D(z i )(29)" }, { "formula_coordinates": [ 14, 158.21, 253.17, 346.46, 65.37 ], "formula_id": "formula_30", "formula_text": "=⇒ ∂L tcl i ∂z i = 1 τ p∈P (i) z p exp(z i .z p /τ ) D(z i ) - 1 |P (i)| - τ k 1 exp(-z i .z p ) D(z i ) + n∈N (i) z n k 2 exp(z i .z n /τ ) D(z i )(30)" }, { "formula_coordinates": [ 14, 210.25, 395.48, 78.76, 27.6 ], "formula_id": "formula_31", "formula_text": "|X ip -P t ip + Y t ip |(" }, { "formula_coordinates": [ 14, 285.54, 570.77, 219.13, 12.69 ], "formula_id": "formula_33", "formula_text": "P t ip < P s ip(32)" }, { "formula_coordinates": [ 14, 290.26, 605.82, 214.4, 12.69 ], "formula_id": "formula_34", "formula_text": "Y t ip > 0(33)" }, { "formula_coordinates": [ 14, 259.16, 702.11, 245.51, 23.23 ], "formula_id": "formula_35", "formula_text": "P t in = k 2 exp(z i .z n /τ ) D(z i )(34)" } ]
10.1021/acs.jcim.1c00600
2023-05-26
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b4", "b37", "b34", "b0", "b13", "b39", "b0", "b38", "b14", "b2", "b12", "b18", "b23", "b3", "b28", "b37", "b4", "b19", "b40", "b9", "b11", "b9" ], "table_ref": [], "text": "Generative pre-trained Transformer (GPT), like GPT-3 (Brown et al., 2020) and ChatGPT (Ope-nAI, 2022), have obtained great success in natural language processing. They usually have billions of parameters and are trained on large corpus (Taylor et al., 2022;Singhal et al., 2022). By witnessing their great power, people start transferring language models to chemical (Bagal et al., 2022) and biological domains (Ferruz et al., 2022). For example, a small molecule (e.g., an oral drug) can be represented using simplified molecular-input lineentry system (SMILES) (Weininger, 1988), which is a sequence obtained by traversing the molecular graph using depth-first-search and several rules for branching, aromaticity, etc. After serializing molecules, people pre-train language models on SMILES (Bagal et al., 2022;Tong et al., 2021;Frey et al., 2022) and obtain promising results for molecular generation.\nText is the most important record for molecular science and more generally, scientific discovery (Beltagy et al., 2019). It describes detailed properties of molecules, like how to synthesize the molecule (Feng et al., 2016), whether the molecule is toxic (Juurlink et al., 2003), etc. BioGPT (Luo et al., 2022) and PubMedGPT (Bolton et al., 2022) are two language models trained on biomedical literature. Recently, a new trend is to jointly model SMILES and scientific text so as to obtain shared representations across the two modalities. MolT5 is a T5-like (Raffel et al., 2020) model, where several spans of the text/SMILES are masked in the encoder and they should be reconstructed in the decoder. Galactica (Taylor et al., 2022) is a GPTlike (Brown et al., 2020) model pre-trained on various types of inputs, like text, SMILES, protein sequences, etc. Although those models demonstrate progress in prediction and generation tasks, they do not explicitly leverage the relation between molecules and text. An intuition is that, in scientific literature, when a molecule name appears in a sentence, the surrounding context could be a description of the molecule. This should be useful information for joint training but is ignored in those models.\nTo leverage such relations, in this work, we propose a novel molecule-text language model (MolXPT), which is trained on \"wrapped\" sequences: Given a sentence, we detect the molecular names with named entity recognition tools, and if any, replace them to the corresponding SMILES and obtain the \"wrapped\" sequence between SMILES and text. We pre-train a 24-layer MolXPT (with 350M parameters) on 8M wrapped sequences, as well as 30M SMILES from PubChem ( Kim et al., 2022) and 30M titles and abstracts from PubMed (a popular biomedical literature search engine).\nAfter pre-training, we finetune MolXPT on MoleculeNet (a benchmark about molecular property prediction) (Wu et al., 2018) and molecule-text translation (Edwards et al., 2022) using promptbased finetuning. On MoleculeNet, MolXPT outperforms strong baselines with sophisticated design like GEM (Fang et al., 2022). On text-molecule translation, MolXPT performs comparably with the state-of-the-art model, MolT5-large (Edwards et al., 2022). MolT5-large has 800M parameters while MolXPT only uses 44% of its parameters. We also verify that MolXPT has the zero-shot ability on text-to-molecule generation." }, { "figure_ref": [ "fig_0" ], "heading": "Our Method", "publication_ref": [], "table_ref": [], "text": "MolXPT is a language model pre-trained on heterogeneous data including scientific text, SMILES sequences, and \"wrapped\" sequences between SMILES and text. Due to the flexible input, we can finetune it for various text and molecular tasks. The framework of MolXPT is in Figure 1." }, { "figure_ref": [ "fig_0" ], "heading": "Pre-training corpus", "publication_ref": [ "b19", "b36", "b17", "b33", "b32" ], "table_ref": [], "text": "For scientific text, we use the titles and abstracts of 30M papers from PubMed 1 . For molecular SMILES, we randomly choose 30M molecules from PubChem 2 (Kim et al., 2022).\nThe wrapped sequences are constructed via a \"detect and replace\" pipeline. We first use BERN2 (Sung et al., 2022), a widely used named entity recognition (NER) tool for biomedical purpose, to detect all mentions of molecules and link them to the entities in public knowledge bases like ChEBI 1 https://ftp.ncbi.nlm.nih.gov/pubmed/ 2 https://pubchem.ncbi.nlm.nih.gov/ (Hastings et al., 2016). After that, we can retrieve the molecular SMILES of the matched entities. Finally, we replace the molecular mentions to their corresponding SMILES. An example is shown in the left panel of Figure 1. The wrapped sequences must contain at least one molecular SMILES. We eventually obtain 8M wrapped sequences in total.\nText and SMILES are tokenized separately. For text, we use byte-pair encoding (BPE) (Sennrich et al., 2016) to split the words into subwords. The number of BPE merge operation is 40k. For SMILES sequences (including those in wrapped sequences), we tokenize them with the regular expression from (Schwaller et al., 2018). For each SMILES sequence S, we add a start-of-molecule token ⟨som⟩ at the beginning of S and append an end-of-molecule token ⟨eom⟩ at the end of S." }, { "figure_ref": [], "heading": "Model and training", "publication_ref": [ "b27", "b4", "b6", "b16", "b15" ], "table_ref": [], "text": "Model architecture: MolXPT has the same architecture as the GPT models (Radford et al., 2019). Due to computational resource limitation, in this paper, we follow the GPT-2 medium configuration with 24 layers, 1024 hidden size and 16 attention heads. The maximum length of input we can process is 2048 and the vocabulary size is 44536. In total, our model has 350M parameters. Pre-training: The pre-training objective function of MolXPT is the negative log-likelihood. Mathematically, let D = {x i } i denote the collection of sequences of the three types of the data, and Prompt-based finetuning: MolXPT can be finetuned for downstream tasks about molecules and text. Adding classification or regression heads to pre-trained backbone models introduces the gap between pre-training and finetuning (Brown et al., 2020;Chen et al., 2022;Gu et al., 2022). Therefore, we adopt prompt-based finetuning (Gao et al., 2021) to unify different tasks into a sequence generation task, which is consistent with the pre-training objective. Briefly, given a task, we convert the input and output into text and/or SMILES sequences, equip the sequences with task-specific prompts and finetune using language modeling loss. Prompts for MoleculeNet and text-molecule translation are introduced in the Section 3.1 and 3.2 respectively. Discussion: Some works also try to jointly model text and molecules. \nx i = (s i,1 , s i,2 , • • • , s i,n i ) is the i-th sequence with n i tokens. The training objective function is: min - 1 |D| |D| i=1 n i j=1 log P (s i,j |s i,j-1 , s i,j-2 , • • • , s 1 )." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b40", "b9" ], "table_ref": [], "text": "We evaluated MolXPT on two downstream tasks:\n(1) molecular property prediction on MoleculeNet (Wu et al., 2018), which is to predict whether the given molecule has specific properties; (2) the generation between text descriptions and molecules (Edwards et al., 2022), where both molecules and text should be considered. In this section, we focus on introducing task definition, prompt design and results while leaving the detailed finetuning hyper-parameters in Appendix C." }, { "figure_ref": [], "heading": "Results on MoleculeNet", "publication_ref": [ "b40", "b11", "b42", "b37", "b35", "b30", "b30", "b30", "b30", "b22", "b43", "b11" ], "table_ref": [ "tab_0" ], "text": "MoleculeNet (Wu et al., 2018) is a widely-used benchmark for molecular modeling, which has more than 700k compounds for various different properties. We choose six molecular classification tasks for evaluation, which are BBBP, Tox21, Clin-Tox, HIV, BACE and SIDER. Details are left in Appendix A. We follow GEM (Fang et al., 2022) to split the data into training/validation/test sets based on the scaffold. For these tasks, the input is a SMILES and the output is a binary label.\nFinetuning strategy: Previous molecular property prediction models mainly use SMILES sequences or molecular graphs as input, while we can use the \"wrapped\" sequences. For example, one task is to predict the blood-brain barrier penetration (BBBP) of a molecule. Therefore, the prompt is \"We can conclude that the BBB penetration of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is ⟨tag⟩\", where ⟨SMILES⟩ denotes the molecular SMILES, and ⟨tag⟩ denotes the classification result. For the BBBP task, we design ⟨tag⟩ as \"true\" or \"false\", indicating whether the compound can or cannot cross BBB. Different tasks have different prompts (see Appendix C.1), but we put the tags to the last token of the prompt for all tasks. Let (s i,1 , s i,2 , • • • , s i,T i ) denote the i-th wrapped sequence for the downstream task with T i tokens, where s i,T i is the tag of the sequence. Denote that there are N samples for finetuning. The finetuning strategy could be either\nmin - 1 N N i=1 log P (s i,T i |s i,<T i ),(1)\nindicating that we finetune the tags only, or\nmin - 1 N N i=1 1 T i T i j=1 log P (s i,j |s i,<j ), (2)\nindicating that we finetune the full prompts. According to our exploration, Eqn.(1) achieves slightly better results and we use it for all tasks (see Appendix C.4 for the results). Let p true and p false denote the probabilities of tags \"true\" and \"false\" after encoding the prefix \"We can conclude that the BBB penetration of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is\". The probabilities that ⟨SMILES⟩ can and cannot cross blood-brain barrier are normalized as p true /(p true + p false ) and p false /(p true + p false ) respectively. The finetuning hyper-parameters are in Appendix C.2.\nWe compare MolXPT with two types of baselines: (1) pre-trained language model baselines including KV-PLM (Zeng et al., 2022), Galactica (Taylor et al., 2022) and MoMu (Su et al., 2022).\n(2) pre-trained Graph Neural Network (GNN) baselines including G-Contextual (Rong et al., 2020), G-Motif (Rong et al., 2020), GROVER base (Rong et al., 2020), GROVER large (Rong et al., 2020), GraphMVP (Liu et al., 2022), MGSSL (Zhang et al., 2021) and GEM (Fang et al., 2022). The evaluation metric is the ROC-AUC score. The results are in Table 1.\nMolXPT outperforms the GNN baselines pretrained on pure molecular data, indicating the effectiveness of pre-training with scientific text corpus. Compared with Galactica which also uses both SMILES and text for pre-training GPT-like model, MolXPT obtains better performance. Note that Galactica does not purposely build and train on the \"wrapped\" sequences, whose importance is demonstrated via our empirical results. A possible explanation of the superior performance is that the SMILES describes the component and structural information of molecules, while the text describes the general properties. They are complementary to each other, and joint training on them brings more effective representations." }, { "figure_ref": [], "heading": "Results on text-molecule translation", "publication_ref": [ "b10", "b9", "b9", "b25", "b21", "b1", "b9", "b8", "b31", "b29", "b26" ], "table_ref": [ "tab_2", "tab_3" ], "text": "We evaluated the performance of MolXPT on CheBI-20 (Edwards et al., 2021), a bidirectional text-molecule translation dataset. It consists of 33,010 molecule-description pairs. We use the data split provided by MolT5 (Edwards et al., 2022), where the training, validation and test sets account 80%, 10% and 10% of total data. For molecule-totext generation, given a molecular SMILES S, the prompt is: \"The description of ⟨som⟩ S ⟨eom⟩ is: The molecule is\", followed by the text description of S. For text-to-molecule generation, given a text description T , the prompt is: \"T . The compound is ⟨som⟩\", and the model will generate the molecular SMILES ended with ⟨eom⟩. We compare our method with MolT5 (Edwards et al., 2022).\nFor molecule-to-text generation, the results are evaluated by NLP metrics including BLEU (Papineni et al., 2002), Rouge (Lin, 2004) and ME-TEOR (Banerjee and Lavie, 2005). \"Text2mol\" is a deep learning based metric proposed by Edwards et al. (2022) to measure the similarity of the text-molecule pairs. For text-to-molecule generation, we evaluate the following metrics: the proportion of the generated SMILES that exactly match the reference SMILES (denoted as \"Exact\"); the Tanimoto similarity of three types of fingerprints: MACCS (Durant et al., 2002), RDK (Schneider et al., 2015) and Morgan (Rogers and Hahn, 2010); the FCD score (Preuer et al., 2018), which measures the molecule distances by a pretrained model; the percentage of the valid generated SMILES. The results are reported in Table 2.\nWe observe that MolXPT achieves significantly better performance than MolT5-small and MolT5-base, and has comparable performance with MolT5-large. Note that MolT5-large has 800M parameters while MolXPT only uses 44% of its parameters. For both tasks, our model performs the best on Text2Mol metric, indicating that MolXPT captures the alignment between text and molecule better. We attribute it to the wrapped sequences, by which the model can learn the relation between molecule and text explicitly.\nWe further verify the zero-shot text-to-molecule generation ability of MolXPT. The pre-trained MolXPT takes the text as input and directly generates molecules without finetuning. The top-1 and top-5 fingerprint similarity is in Table 3. Indeed, compared with the full data setting, the performance drops, but still reasonable numbers. In addition, the zero-shot MolXPT successfully recovers 33 molecules based on the text (see Appendix D)." }, { "figure_ref": [], "heading": "Conclusions and Future Work", "publication_ref": [], "table_ref": [], "text": "We propose MolXPT, a generative model pretrained on scientific text, molecular SMILES and For future work, first, we will train larger MolXPT to further verify the performances across different tasks and the zero-shot/in-context (Xie et al., 2022) learning ability. Second, how to further enhance the interaction between molecules and text (e.g., using contrastive learning to enhance consistency) should be studied. Third, how to effectively adapt MolXPT into other molecule and text tasks such as text-guided molecule optimization is another direction to explore.\nThe molecule is a sesquiterpene lactone and active principle of Feverfew (Tanacetum parthenium). It has a role as a nonsteroidal anti-inflammatory drug, a non-narcotic analgesic, a peripheral nervous system drug, an inhibitor and a drug allergen.\nThe molecule is the (R)-enantiomer of mevalonic acid. It is a conjugate acid of a (R)-mevalonate. It is an enantiomer of a (S)mevalonic acid.\nThe molecule is a bile acid taurine conjugate of ursocholic acid. It has a role as a human metabolite and a rat metabolite. It derives from an ursocholic acid. It is a conjugate acid of a tauroursocholate. " }, { "figure_ref": [], "heading": "Input text Generated molecule", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A Datasets and Baselines of MoleculeNet", "publication_ref": [], "table_ref": [], "text": "We choose the following tasks of MoleculeNet for evaluation:\n(1) BBBP contains compounds with binary labels on blood-brain barrier penetration.\n(2) Tox21 is a dataset for predicting the human toxicity of compounds on 12 different targets.\n(3) ClinTox contains drugs approved by the FDA and those that have failed clinical trials for toxicity reasons.\n(4) HIV aims to predict whether a drug can inhibit HIV replication.\n(5) BACE describes binding results for a set of inhibitors of human β-secretase 1. ( 6) SIDER has compounds used in marketed medicines with 27 categories of side effects. We compare MolXPT with the following baselines:\n(1) GROVER is a self-supervised pre-trained graph Transformer model. G-Contextual and G-Motif are two variants of it pre-trained with contextual property prediction task and motif prediction task.\n(2) GraphMVP is a self-supervised pre-trained GNN model using both 2D topological structures and 3D geometric views of molecules.\n(3) MGSSL leverages a retrosynthesis-based algorithm BRICS and additional rules to find the motifs and combines motif layers with atom layers. (4) GEM is a geometry-enhanced pre-trained GNN model.\n(5) Galactica is a GPT-like model trained on a large scientific corpus and many natural sequences like SMILES. We report the result of Galactica-120B. ( 6) KV-PLM is a BERT-like model where SMILES sequences are appended after molecule names for pre-training. (7) MoMu uses contrastive learning to jointly pretrain a BERT model for text and a GNN model for molecules." }, { "figure_ref": [], "heading": "B Pre-training hyper-parameters", "publication_ref": [], "table_ref": [], "text": "MolXPT is pre-trained for 200k steps on eight A100 GPUs. The batchsize is 2048 tokens per GPU. The gradients are accumulated for 16 steps before updating. We use Adam (Kingma and Ba, 2015) optimizer for optimization. The peak learning rate is 0.0005 and the warm-up steps are 20000. The learning rate scheduler is inverse square root decay scheduler. The dropout is 0.1." }, { "figure_ref": [], "heading": "C Finetuning details of downstream tasks", "publication_ref": [], "table_ref": [], "text": "C.1 Prompts for finetuning MoleculeNet (1) BBBP: \"We can conclude that the BBB penetration of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false.\"\n(2) Tox21: \"We can conclude that the ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ activity outcome on ⟨target⟩ is active/inactive. \" where ⟨target⟩ refers to corresponding receptor or enzyme for each subtask, e.g. the ⟨target⟩ of subtask \"AR\" is \"Androgen Receptor\".\n(3) ClinTox:\"We can conclude that the clinical trial toxicity of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false.\" for subtask CT_TOX and \"We can conclude that the FDA approval status of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false.\" for subtask FDA_APPROVED. (4) HIV: \"We can conclude that the screening result of ability to inhibit HIV replication of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is active/inactive.\" (5) BACE: \"We can conclude that the binding result on beta-secretase 1 of ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ is true/false.\" (6) SIDER:\"We can conclude that the ⟨som⟩ ⟨SMILES⟩ ⟨eom⟩ can bring about the side effect of ⟨side-effect⟩ is true/false.\" where ⟨side-effect⟩ refers to corresponding side-effect for each subtask." }, { "figure_ref": [], "heading": "C.2 Details of finetuning MoleculeNet", "publication_ref": [], "table_ref": [], "text": "We grid search the following hyper-parameters: learning rate in {3 × 10 -5 , 5 × 10 -5 }; dropout in {0.1, 0.3}; total epochs from {30, 50}. The model is selected according to validation performance." }, { "figure_ref": [], "heading": "C.3 Details of finetuning text-molecule generation", "publication_ref": [], "table_ref": [], "text": "For similarity(m, mi ).\n(3)\nMolXPT generates 33 molecules that can exactly match the reference molecules without finetuning.\nFigure 2 shows three of the cases." } ]
Generative pre-trained Transformer (GPT) has demonstrates its great success in natural language processing and related techniques have been adapted into molecular modeling. Considering that text is the most important record for scientific discovery, in this paper, we propose MolXPT, a unified language model of text and molecules pre-trained on SMILES (a sequence representation of molecules) wrapped by text. Briefly, we detect the molecule names in each sequence and replace them to the corresponding SMILES. In this way, the SMILES could leverage the information from surrounding text, and vice versa. The above wrapped sequences, text sequences from PubMed and SMILES sequences from PubChem are all fed into a language model for pre-training. Experimental results demonstrate that MolXPT outperforms strong baselines of molecular property prediction on MoleculeNet, performs comparably to the best model in text-molecule translation while using less than half of its parameters, and enables zero-shot molecular generation without finetuning.
MolXPT: Wrapping Molecules with Text for Generative Pre-training
[ { "figure_caption": "Figure 1 :1Figure 1: Framework of MolXPT. MolXPT is pretrained on text from PubMed, SMILES from PubChem and wrapped sequences between SMILES and text. The wrapped sequences are obtained by applying NER and entity linking to text and then replacing matched molecular mentions with SMILES. MolXPT can be finetuned for various text and molecular downstream tasks, like molecular property prediction and molecule-text translation.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Examples for zero-shot text-to-molecule generation. We randomly pick up three cases that MolXPT can successfully generate the reference molecules without finetuning.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "± 3.4 73.2 ± 0.8 77.8 ± 2.0 73.8 ± 1.4 73.4 ± 4.0 60.6 ± 1.1 70.9 GROVER base 70.0 ± 0.1 74.3 ± 0.1 81.2 ± 3.0 62.5 ± 0.9 82.6 ± 0.7 64.8 ± 0.6 72.6 Results on MoleculeNet. The evaluation metric is ROC-AUC. Bold fonts indicate the best results.", "figure_data": "DatasetBBBPTox21ClinToxHIVBACESIDERAvg#Molecules2039783114784112715131478G-Contextual 70.3 ± 1.6 75.2 ± 0.3 59.9 ± 8.2 75.9 ± 0.9 79.2 ± 0.3 58.4 ± 0.6 69.8G-Motif 66.4 GROVER large 69.5 ± 0.1 73.5 ± 0.1 76.2 ± 3.7 68.2 ± 1.1 81.0 ± 1.4 65.4 ± 0.1 72.3GraphMVP72.4 ± 1.6 75.9 ± 0.5 79.1 ± 2.8 77.0 ± 1.2 81.2 ± 0.9 63.9 ± 1.2 74.9MGSSL70.5 ± 1.1 76.5 ± 0.3 80.7 ± 2.1 79.5 ± 1.1 79.7 ± 0.8 61.8 ± 0.8 74.8GEM72.4 ± 0.4 78.1 ± 0.1 90.1 ± 1.3 80.6 ± 0.9 85.6 ± 1.1 67.2 ± 0.4 79.0KV-PLM74.6 ± 0.9 72.7 ± 0.6-74.0 ± 1.2-61.5 ± 1.5-Galactica66.168.982.674.561.763.269.5MoMu70.5 ± 2.0 75.6 ± 0.3 79.9 ± 4.1 76.2 ± 0.9 77.1 ± 1.4 60.5 ± 0.9 73.3MolXPT80.0 ± 0.5 77.1 ± 0.2 95.3 ± 0.2 78.1 ± 0.4 88.4 ± 1.0 71.7 ± 0.2 81.9", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Zeng et al. (2022) propose KV-PLM, where SMILES sequences are appended after molecule names for pre-training.Su et al. (2022) use contrastive learning between text and molecular graphs. Our MolXPT is a generative model while the above two models are not. Both of them are built upon SciBERT(Beltagy et al., 2019), a BERT model(Devlin et al., 2019) for scientific literature. MolXPT is complementary to them.", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Results of molecule-to-text (top) and text-to-molecule generation (bottom). For FCD, the smaller, the better. For the remaining metrics, the larger, the better. MolT5 results are from Table1 and 2 of(Edwards et al., 2022). MolT5 parameters are from https://github.com/blender-nlp/MolT5. Bold fonts indicate the best results.", "figure_data": "Molecule-to-textBLEU-2 BLEU-4 Rouge-1 Rouge-2 Rouge-L METEOR Text2MolMolT5-small (77M)0.5190.4360.6200.4690.5630.5510.540MolT5-base (250M)0.5400.4570.6340.4850.5780.5690.547MolT5-Large (800M)0.5940.5080.6540.5100.5940.6140.582MolXPT (350M)0.5940.5050.6600.5110.5970.6260.594Text-to-moleculeExact↑ MACCS↑RDK↑Morgan↑FCD↓Text2mol↑ Validity↑MolT5-small0.0790.7030.5680.5172.490.4820.721MolT5-medium0.0810.7210.5880.5292.180.4960.772MolT5-large0.3110.8340.7460.6841.200.5540.905MolXPT0.2150.8590.7570.6670.450.5780.983MACCS RDK MorganZero-shot (Top-1)0.5400.3830.228Zero-shot (Top-5)0.5800.4230.423Full data (Top-1)0.8410.7460.660", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Zero-shot text-to-molecule generation.", "figure_data": "their wrapped sequences. We train a 24-layerMolXPT with 350M parameters. By prompt-basedfinetuning, it improves strong baselines on Molecu-leNet and achieves comparable results with the bestmodel on molecule-text translation but using muchfewer parameters.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Dev full prompt 98.8 ± 0.2 78.8 ± 0.1 98.8 ± 0.1 82.9 ± 1.0 78.4 ± 0.3 67.7 ± 0.7 84.2 Dev tags only 98.9 ± 0.3 78.8 ± 0.2 97.7 ± 0.1 85.3 ± 0.2 75.8 ± 0.8 69.4 ± 0.6 84.3 Test full prompt 78.1 ± 0.4 77.2 ± 0.1 93.4 ± 0.1 78.1 ± 0.9 87.9 ± 0.3 70.0 ± 0.2 80.8 Test tags only 80.0 ± 0.5 77.1 ± 0.2 95.3 ± 0.2 78.1 ± 0.4 88.4 ± 1.0 71.7 ± 0.2 81.9 Comparison of different finetuning strategies on MoleculeNet. \"Dev\" and \"Test\" denote validation set and test set respectively. Subscripts represent finetuning full prompts (Eqn.(2)) or tags only respectively (Eqn.(1)). The evaluation metric is ROC-AUC.", "figure_data": "DatasetBBBPTox21ClinToxHIVBACESIDERAvg", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Zequn Liu; Wei Zhang; Yingce Xia; Lijun Wu; Shufang Xie; Tao Qin; Ming Zhang; Yan Liu
[ { "authors": "Viraj Bagal; P K Aggarwal; U Deva Vinod; Priyakumar", "journal": "Journal of Chemical Information and Modeling", "ref_id": "b0", "title": "Molgpt: Molecular generation using a transformer-decoder model", "year": "2022" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b1", "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Iz Beltagy; Kyle Lo; Arman Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "SciB-ERT: A pretrained language model for scientific text", "year": "2019" }, { "authors": "Elliot Bolton; David Hall; Michihiro Yasunaga; Tony Lee; Chris Manning; Percy Liang", "journal": "Pub-MedGPT", "ref_id": "b3", "title": "B", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell; Sandhini Agarwal; Ariel Herbert-Voss; Gretchen Krueger; Tom Henighan; Rewon Child; Aditya Ramesh; Daniel Ziegler; Jeffrey Wu; Clemens Winter; Chris Hesse; Mark Chen; Eric Sigler; Mateusz Litwin; Scott Gray; Benjamin Chess; Jack Clark; Christopher Berner; Sam Mccandlish; Alec Radford; Ilya Sutskever; Dario Amodei", "journal": "", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b5", "title": "", "year": "" }, { "authors": "Yulong Chen; Yang Liu; Li Dong; Shuohang Wang; Chenguang Zhu; Michael Zeng; Yue Zhang", "journal": "", "ref_id": "b6", "title": "Adaprompt: Adaptive model training for prompt-based nlp", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Burton A Joseph L Durant; Douglas R Leland; James G Henry; Nourse", "journal": "Journal of chemical information and computer sciences", "ref_id": "b8", "title": "Reoptimization of mdl keys for use in drug discovery", "year": "2002" }, { "authors": "Carl Edwards; Tuan Lai; Kevin Ros; Garrett Honke; Heng Ji", "journal": "", "ref_id": "b9", "title": "Translation between molecules and natural language", "year": "2022" }, { "authors": "Carl Edwards; Chengxiang Zhai; Heng Ji", "journal": "", "ref_id": "b10", "title": "Text2mol: Cross-modal molecule retrieval with natural language queries", "year": "2021" }, { "authors": "Xiaomin Fang; Lihang Liu; Jieqiong Lei; Donglong He; Shanzhuo Zhang; Jingbo Zhou; Fan Wang; Hua Wu; Haifeng Wang", "journal": "Nature Machine Intelligence", "ref_id": "b11", "title": "Geometry-enhanced molecular representation learning for property prediction", "year": "2022" }, { "authors": "Minghao Feng; Bingqing Tang; Steven H Liang; Xuefeng Jiang", "journal": "Current topics in medicinal", "ref_id": "b12", "title": "Sulfur containing scaffolds in drugs: synthesis and application in medicinal chemistry", "year": "2016" }, { "authors": "Noelia Ferruz; Steffen Schmidt; Birte Höcker", "journal": "Nature Communications", "ref_id": "b13", "title": "Protgpt2 is a deep unsupervised language model for protein design", "year": "2022" }, { "authors": "Nathan Frey; Ryan Soklaski; Simon Axelrod; Siddharth Samsi; Rafael Gomez-Bombarelli; Connor Coley; Vijay Gadepally", "journal": "ChemRxiv", "ref_id": "b14", "title": "Neural scaling of deep chemical models", "year": "2022" }, { "authors": "Tianyu Gao; Adam Fisch; Danqi Chen", "journal": "", "ref_id": "b15", "title": "Making pre-trained language models better few-shot learners", "year": "2021" }, { "authors": "Yuxian Gu; Xu Han; Zhiyuan Liu; Minlie Huang", "journal": "", "ref_id": "b16", "title": "Ppt: Pre-trained prompt tuning for few-shot learning", "year": "2022" }, { "authors": "Janna Hastings; Gareth Owen; Adriano Dekker; Marcus Ennis; Namrata Kale; Steve Venkatesh Muthukrishnan; Neil Turner; Pedro Swainston; Christoph Mendes; Steinbeck", "journal": "Nucleic acids research", "ref_id": "b17", "title": "Chebi in 2016: Improved services and an expanding collection of metabolites", "year": "2016" }, { "authors": "Muhammad David N Juurlink; Alexander Mamdani; Andreas Kopp; Donald A Laupacis; Redelmeier", "journal": "Jama", "ref_id": "b18", "title": "Drug-drug interactions among elderly patients hospitalized for drug toxicity", "year": "2003" }, { "authors": "Sunghwan Kim; Jie Chen; Tiejun Cheng; Asta Gindulyte; Jia He; Siqian He; Qingliang Li; Benjamin A Shoemaker; Paul A Thiessen; Bo Yu; Leonid Zaslavsky; Jian Zhang; Evan E Bolton", "journal": "Nucleic Acids Research", "ref_id": "b19", "title": "PubChem 2023 update", "year": "2022" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b20", "title": "Adam: A method for stochastic optimization", "year": "2015" }, { "authors": "Chin-Yew Lin", "journal": "", "ref_id": "b21", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Shengchao Liu; Hanchen Wang; Weiyang Liu; Joan Lasenby; Hongyu Guo; Jian Tang", "journal": "", "ref_id": "b22", "title": "Pretraining molecular graph representation with 3d geometry", "year": "2022" }, { "authors": "Renqian Luo; Liai Sun; Yingce Xia; Tao Qin; Sheng Zhang; Hoifung Poon; Tie-Yan Liu", "journal": "Briefings in Bioinformatics", "ref_id": "b23", "title": "BioGPT: generative pre-trained transformer for biomedical text generation and mining", "year": "2022" }, { "authors": " Openai", "journal": "", "ref_id": "b24", "title": "Chatgpt: Optimizing language models for dialogue", "year": "2022" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b25", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Kristina Preuer; Philipp Renz; Thomas Unterthiner; Sepp Hochreiter; Gunter Klambauer", "journal": "Journal of chemical information and modeling", "ref_id": "b26", "title": "Frechet chemnet distance: a metric for generative models for molecules in drug discovery", "year": "2018" }, { "authors": "Alec Radford; Jeffrey Wu; Rewon Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "OpenAI blog", "ref_id": "b27", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b28", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "David Rogers; Mathew Hahn", "journal": "Journal of chemical information and modeling", "ref_id": "b29", "title": "Extendedconnectivity fingerprints", "year": "2010" }, { "authors": "Yu Rong; Yatao Bian; Tingyang Xu; Weiyang Xie; Ying Wei; Wenbing Huang; Junzhou Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b30", "title": "Self-supervised graph transformer on largescale molecular data", "year": "2020" }, { "authors": "Nadine Schneider; Roger A Sayle; Gregory A Landrum", "journal": "Journal of chemical information and modeling", "ref_id": "b31", "title": "Get your atoms in order: An opensource implementation of a novel and robust molecular canonicalization algorithm", "year": "2015" }, { "authors": "Philippe Schwaller; Theophile Gaudin; David Lanyi; Costas Bekas; Teodoro Laino", "journal": "Chemical science", "ref_id": "b32", "title": "found in translation\": predicting outcomes of complex organic chemistry reactions using neural sequence-tosequence models", "year": "2018" }, { "authors": "Rico Sennrich; Barry Haddow; Alexandra Birch", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Neural machine translation of rare words with subword units", "year": "2016" }, { "authors": "Karan Singhal; Shekoofeh Azizi; Tao Tu; Sara Mahdavi; Jason Wei; Hyung Won Chung; Nathan Scales; Ajay Tanwani; Heather Cole-Lewis; Stephen Pfohl", "journal": "", "ref_id": "b34", "title": "Large language models encode clinical knowledge", "year": "2022" }, { "authors": "Bing Su; Dazhao Du; Zhao Yang; Yujie Zhou; Jiangmeng Li; Anyi Rao; Hao Sun; Zhiwu Lu; Ji-Rong Wen", "journal": "", "ref_id": "b35", "title": "A molecular multimodal foundation model associating molecule graphs with natural language", "year": "2022" }, { "authors": "Mujeen Sung; Minbyul Jeong; Yonghwa Choi; Donghyeon Kim; Jinhyuk Lee; Jaewoo Kang", "journal": "", "ref_id": "b36", "title": "Bern2: an advanced neural biomedical named entity recognition and normalization tool", "year": "2022" }, { "authors": "Ross Taylor; Marcin Kardas; Guillem Cucurull; Thomas Scialom; Anthony Hartshorn; Elvis Saravia; Andrew Poulton; Viktor Kerkez; Robert Stojnic", "journal": "", "ref_id": "b37", "title": "Galactica: A large language model for science", "year": "2022" }, { "authors": "Xiaochu Tong; Xiaohong Liu; Xiaoqin Tan; Xutong Li; Jiaxin Jiang; Zhaoping Xiong; Tingyang Xu; Hualiang Jiang; Nan Qiao; Mingyue Zheng", "journal": "Journal of Medicinal Chemistry", "ref_id": "b38", "title": "Generative models for de novo drug design", "year": "2021" }, { "authors": "David Weininger", "journal": "Journal of chemical information and computer sciences", "ref_id": "b39", "title": "Smiles, a chemical language and information system. 1. introduction to methodology and encoding rules", "year": "1988" }, { "authors": "Zhenqin Wu; Bharath Ramsundar; Evan N Feinberg; Joseph Gomes; Caleb Geniesse; S Aneesh; Karl Pappu; Vijay Leswing; Pande", "journal": "Chemical science", "ref_id": "b40", "title": "Moleculenet: a benchmark for molecular machine learning", "year": "2018" }, { "authors": "Sang Michael Xie; Aditi Raghunathan; Percy Liang; Tengyu Ma", "journal": "", "ref_id": "b41", "title": "An explanation of in-context learning as implicit bayesian inference", "year": "2022" }, { "authors": "Zheni Zeng; Yuan Yao; Zhiyuan Liu; Maosong Sun", "journal": "Nature Communications", "ref_id": "b42", "title": "A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals", "year": "2022" }, { "authors": "Zaixi Zhang; Qi Liu; Hao Wang; Chengqiang Lu; Chee-Kong Lee", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Motif-based graph selfsupervised learning for molecular property prediction", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 306.14, 673.62, 228.19, 74.32 ], "formula_id": "formula_0", "formula_text": "x i = (s i,1 , s i,2 , • • • , s i,n i ) is the i-th sequence with n i tokens. The training objective function is: min - 1 |D| |D| i=1 n i j=1 log P (s i,j |s i,j-1 , s i,j-2 , • • • , s 1 )." }, { "formula_coordinates": [ 3, 343.44, 743.25, 181.7, 33.71 ], "formula_id": "formula_1", "formula_text": "min - 1 N N i=1 log P (s i,T i |s i,<T i ),(1)" }, { "formula_coordinates": [ 4, 96.61, 95.81, 193.26, 33.96 ], "formula_id": "formula_2", "formula_text": "min - 1 N N i=1 1 T i T i j=1 log P (s i,j |s i,<j ), (2)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction 1.Sampling", "publication_ref": [], "table_ref": [], "text": "We would like to generate a sample from a probability distribution µ in n dimensions:\nx * ∼ µ(dx) for µ ∈ P(R n ) ,\n(1.1) (Here and below, P(Ω) denotes the set of probability distributions on Ω.) We have in mind two types of settings. In the first setting, the probability distribution µ is 'explicitly given.' Often, this means that we are able to compute efficiently probability ratios µ(x 1 )/µ(x 2 ) or the score function ∇ log µ(x). (By an abuse of notation, we are using µ(x) to denote the density of µ with respect to some reference measure.) This is oftern the case in statistical physics and in Bayesian statistics.\nIn a second setting, there is no explicit form for µ, but we have access to a collection of i.i.d. samples x 1 , . . . , x M ∼ iid µ. This is the case in 'generative modeling' in machine learning.\nMonte Carlo Markov Chain (MCMC) attempts to solve the problem in the first case by constructing a Markov Chain whose stationary distribution coincides with µ, and sampling from the Markov Chain starting from a fixed initialization. If the probability measure µ is supported on Ω ⊆ R n , the Markov chain can be thought as a random walk on the set Ω.\nHere, we will consider a class of sampling algorithms that generate a stochastic process m t ∈ R n , t ∈ [0, T ] (possibly T = ∞) such that m 0 is deterministic and, as t → T m t -→ x = m T ∼ µ .\n(1.2)\nIn contrast with the MCMC setting, this process will be -in general-non-reversible and timeinhomogeneous. Further, the distribution of m t for t < T is different from µ and -in generalm t does not takes values in the support of µ.\nIn the rest of this introduction, we will present two approaches towards constructing such a process m t , respectively based on time-reversal of diffusion processes and on stochastic localization. We then give a brief overview of the literature in Section 2, generalize the stochastic localization approach in Section 3, and describe several specific instantiations in Section 4. While earlier sections are mainly expository, in Sections 5, 6, 7 we illustrate how the present point of view can be exploited to address problems arising with standard denoising diffusions. Namely, we show how the choice of the stochastic localization process can simplify the learning task. The appendices contain omitted technical details. In particular, in Appendix A we spell out the form taken by a natural loss functions (Kullback-Leibler divergence) in various examples." }, { "figure_ref": [], "heading": "Diffusions", "publication_ref": [ "b13" ], "table_ref": [], "text": "Sampling algorithms based on diffusions were introduced in [SDWMG15, SE19, HJA20, SSDK + 21] and were originally motivated by the idea of time-reversal. Fixing S ∈ (0, ∞], the construction starts with a Itô diffusion process (Z s ) s∈[0,S] initialized at Z 0 = x ∼ µ: dZ s = F (s, Z s ) ds + g(s) dB s , Z 0 = x ∼ µ( • ) .\n(1.3) Here (B s ) s≥0 is a standard n-dimensional Brownian motion F : [0, S] × R n → R n is a drift term, and g : [0, S] → R ≥0 is a diffusion coefficient1 .\nWe will denote by µ Z s the marginal distribution of Z s under the above process, so that, by construction µ Z 0 = µ. The drift and diffusion coefficients in Eq. (1.3) can be constructed so that the final distribution µ Z S =: ν is easy to sample from. Next, a sampling process is obtained by time-reversing the process (1.3). Namely, we let t : [0, S] → [0, T ] be a continuously differentiable time change with first derivative t (s) < 0 for all s ∈ [0, S], t(0) = T , t(S) = 0, and, let s : [0, T ] → [0, S] denote its inverse. We then define the process (Y t ) t∈[0,T ] via\ndY t = F (t, Y t ) dt + g(t) dB t , Y 0 ∼ ν( • ) ,(1.4)\nwhere (B t ) t≥0 is a standard Brownian motion, and the drift and diffusion coefficients are given by: F (t, y) = -F (s(t), y) + g(s(t))∇ z log µ Z s(t) (y) |s (t)| , (1.5)\ng(t) = g(s(t)) |s (t)| . (1.6)\nIt is a well known result [HP86] that (Y t ) t∈[0,T ] so defined is distributed as (Z s(t) ) t∈[0,T ] , and in particular Y T ∼ µ( • ). In particular this implies that for each t ∈ [0, T ]\nµ Y t = µ Z s(t) .\n(1.7)\nHence the stochastic differential equation (SDE) (1.4) can be used (after suitable discretization) to sample from µ. Of course, in order for this to be a viable strategy, we need to be able to: (i) sample from ν = µ Z S , and (ii) compute the drift F . A specific construction that facilitates this goal was put forward in [SE19, HJA20, SSDK + 21], which suggested to use the Ornstein-Uhlenbeck process2 \ndZ s = -Z s ds + √ 2 dB s . (1.8)\nIn other words, we set F (s, z) = -z and g(s) = 2 in Eq. (1.3). By integrating this equation with initial condition Z 0 = x, we\nZ s d = e -s x + 1 -e -2s G G ∼ N(0, I d ) ⊥ ⊥ x .\n(1.9) thus recovering the well known fact the distribution µ Z s converges exponentially fast to µ Z ∞ = N(0, I n ) (e.g. in chi-squared or Wasserstein-2 distance). Hence, if we choose S large, we can approximately sample from µ Z S . In order to evaluate formula (1.5), we note that µ Z s is the distribution of a scaling of x corrupted by Gaussian noise with variance 1 -e -2s . Tweedie's formula3 [Rob56] states that\n∇ z log µ Z s (z) = 1 1 -e -2s E[e -s x|Z s = z] -z (1.10)\nIt is convenient to introduce a notation for the posterior expectation of x ∼ µ given a Gaussian observation. We define\nm(y; t) := E[x|t x + √ tG = y] , (x, G) ∼ µ ⊗ N(0 I n ) .\n(1.11)\nWe then apply the general formula (1.5) using F (s, z) = -z, g(s) = 2, and setting t(s) = 1/(e 2s -1) with T = S = ∞, we obtain\nF (t, y) = - 1 + t t(1 + t) y + 1 t(1 + t) m t(1 + t)y; t , (1.12) g(t) = 1 t(1 + t)\n.\n(1.13)\nUsing this drift and diffusion coefficients, the process (1.4) initialized at\nY 0 ∼ N(0, I d ) is such that Y ∞ ∼ µ. Explicitly dY t = - 1 + t t(1 + t) Y t + 1 t(1 + t) m t(1 + t)Y t ; t dt + 1 t(1 + t) dB t . (1.14)\nHence, stopping at an earlier time T yields an approximate sample from µ." }, { "figure_ref": [], "heading": "A special stochastic localization process", "publication_ref": [], "table_ref": [], "text": "General stochastic localization is defined in [Eld13, Eld20, Eld22, CE22] as a stochastic process taking values in the space of probability measures in R n . At each time t ∈ [0, ∞), we are given a random probability measure µ t . This process must satisfy two properties First, as t → ∞, µ t 'localizes', i.e. µ t ⇒ δ x * for a random x * . Second, it must be a martingale.\nIf random probability measures sound unfamiliar to the reader, there is a potentially simpler and equivalent4 way to think about stochastic localization processes. Sample x * ∼ µ and, at each time t let Y t be a noisy observation of x * (a random vector), with Y t becoming 'more informative' as t increases. We then set µ t to be the conditional distribution of x * given Y t : µ t (x ∈ • ) = P(x ∈ • |Y t ). 'More informative' can be formalized in many ways, but one possibility is to require that, for any t 1 ≤ t 2 , x * -Y t 2 -Y t 1 forms a Markov chain (in general, an inhomogeneous one).\nWe will refer to such a process (Y t ) t≥0 as to the observation process. A crucial observation (formalized in Section 3 and illustrated in Section 4) is that Y t does not need to take values in the same space as x * .\nFor instance, Y t does not need to have the same dimensions as x * .\nTo begin with the simplest example, consider the case in which Y t is Gaussian with\nY t = t x * + W t ,(1.15)\nwhere (W t ) t≥0 a standard Brownian motion. It is intuitively clear and easy to check that Y t becomes 'more informative' about x * in the technical sense given above. Roughly speaking, despite we are adding noise via the Brownian motion W t , we are also increasing the signal-to-noise ratio.\nThe most straightforward way to check this formally is to write the joint distribution of x * , Y t 1 , Y t 2 . A more elegant approach is to define X σ := σ 2 Y 1/σ 2 and noting that, by invariance properties of the Brownian motion X σ = x * + W σ 2 where ( W σ 2 ) σ 2 ≥0 is a Brownian motion. The Markov property follows from the Markov property of the Brownian motion.\nWe can now write explicitly µ t using Bayes rule:\nµ t (dx) = 1 Z µ(dx) exp - 1 2t Y t -tx 2 2 = 1 Z µ(dx) e Y t,x -1 2t x 2 , (1.16)\nwhere Z and Z are normalizations, depending on Y t . In other words, µ t is a random tilt of µ.\nNotice that µ t=0 = µ is the original distribution, and µ t ⇒ δ x * as t → ∞. The tilting factor completely localizes the measure on x * . Hence if we can generate the stochastic process (µ t ) t≥0 , we can also sample from µ. For instance, we can compute the baricenter M t := x µ t (dx) and note that M t → x * as t → ∞, with x * ∼ µ.\nAt first sight, this strategy appears problematic for two reasons. First, the definition of µ t in Eq. (1.16) depends on Y t , which is itself defined in terms of x * ∼ µ. It might seem that we need a sampling algorithm to begin with. Second, we (µ t ) t≥0 is -in general-a stochastic process taking values in an infinite-dimensional space. Even if we consider a discrete setting, e.g. x ∈ {+1, -1} n , (µ t ) t≥0 takes place in exponentially many dimensions.\nBoth of these issues are taken care of by the following classical fact in the theory of stochastic processes [LS77, Section 7.4].\nProposition 1.1. Assume µ has finite second moment. Then, (Y t ) t≥0 is the unique solution of the following stochastic differential equation (with initial condition Y 0 = 0)\ndY t = m(Y t ; t) dt + dB t .\n(1.17)\nHere (B t ) t≥0 is a standard Brownian motion and m(y; t) is the conditional expectation defined in Eq. (1.11), i.e.\nm(y; t) := E[x|t x + √ tG = y] , (x, G) ∼ µ ⊗ N(0 I n ) . (1.18)\nWe found therefore a different way to construct a diffusions-based algorithm to sample from µ. In a nutshell, we discretize the SDE of Eq. (1.17), for t ∈ [0, T ] for some large T , and then use Y T /T as an approximate sample from µ. Alternatively, we can output m(Y T ; T ).\nThe SDE of Eq. (1.17) looks tantalizingly similar to the one of Eq. (1.14), despite the fact that we derived them by quite different arguments (time-reversal in the first case and stochastic localization in the second). A simple calculation shows that they in fact coincide after the change of variables\nY t = t(1 + t) Y t .\n(1.19)\nThe rest of this paper is devoted to generalizing this construction. In particular, in Section 3 we describe the natural generalization of this construction to general stochastic localization processes." }, { "figure_ref": [], "heading": "Comparison with the literature", "publication_ref": [ "b6", "b6" ], "table_ref": [], "text": "As already mentioned, the present paper is closely related to [EAMS22], whose approach was further developed in [MW23], and aims at exploring the connection between this line of work and parallel developments in the deep learning literature.\nA number of generalizations and variations on the original construction of [SE19, HJA20, SSDK + 21] have been investigated in the last three years. An incomplete summary includes:\n• The discrete-time version of the Ornstein-Uhlenbeck process (1.8) is a linear autoregressive process that is driven by Gaussian noise. The idea of adding some form of non-Gaussian noise was proposed in [NRW21]. (See also [DSL21] for a related idea).\n• Discrete analogous of diffusion processes (for vectors x with values in {1, . . . , k}) were introduced in [SDWMG15, HNJ + 21]. In the forward process, each coordinate of x is flipped to a uniformly random one independently at a certain rate. The reverse process requires to estimate the conditional expectation of the vector x given the current state.\n• More general constructions of discrete diffusions were introduced in [AJH + 21] which considered independent coordinate flipping with arbitrary (time-dependent) transition rates. The use of absorbing states allows to incorporate examples in which some coordinates are masked.\n• An even more general framework for discrete diffusions was proposed in [CBDB + 22]. The authors consider a forward process that is a general continuous-time Markov process with discrete state space and construct the sampling process, again by time reversal. A variant of this approach is developed in [SYD + 22].\n• A different approach towards treating discrete variables was advocated in [CZH22]. In a first step, [CZH22] reduces the problem of sampling discrete vectors to the one of sampling binary vectors (representing discrete variables as binary strings). Unlike earlier works, these authors do not modify the forward process in the diffusion (and hence do not modify the reverse process either). They obtain discrete samples by rounding.\n• Further recent work on diffusions in discrete or constrained domains includes [MCSE22, VKS + 22, YWL22, AVE22, LWY + 23].\nIn summary, the time-reversal approach has been extended to encompass a broad array of sampling schemes. This generality is not surprising, since any sampling procedure is the time reversal of a process that is can be viewed as a 'noising' of the original distribution.\nAs we will see, the stochastic localization approach is equally general, but also naturally suggests a different palette of sampling schemes." }, { "figure_ref": [], "heading": "General stochastic localization sampling", "publication_ref": [ "b5" ], "table_ref": [], "text": "Given x ∼ µ a random variable in R n , we construct a sequence of random vectors5 (Y t ) t∈I indexed by I ⊆ [0, ∞] (typically I will be an interval, but this does not need to be the case).\nWe assume that Y t is increasingly more informative about X as θ increases, as formalized by the following definition. Definition 3.1. We say that the process (Y t ) t∈I is an observation process with respect to x if for each integer k, and for each\nt 1 < t 2 < • • • < t k ∈ I, sequence of random variables x, Y t k , Y t k-1 ,. . . ,Y t 1 forms a Markov chain. Namely the conditional distribution P(Y t i-1 ∈ • |x, Y t i , . . . Y t k ) coincides with the probability distribution P(Y t i-1 ∈ • |Y t i ).\nGiven such a process Y • := (Y t ) t∈I , we define the stochastic localization process (or scheme) to be the sequence of posteriors (µ t ) t∈I :\nµ t ( • ) := P(x ∈ • |Y t ) . (3.1)\nWe can interpret Y t as noisy observations or measurements of the underlying random variable x, which become less noisy as t increases. Indeed the notion of ordering among random variables Y t k , Y t k-1 ,. . . ,Y t 1 introduced above is common in the information theory literature, and referred to as 'ordering by physical degradation' [Ber73]. We can therefore interpret t either as 'time' or as 'signal strength.' Remark 3.1 (Relation to earlier definitions, I). As we will see in detail below, Definition 3.1 encompasses various proposals in the machine learning literature, e.g. [SE19, HJA20, SSDK + 21, HNJ + 21, AJH + 21, CBDB + 22, SYD + 22]. It is worth repeating that, according to Definition 3.1 observations Y t can take values in a space different from the one of x * , and in different spaces for different t, while this possibility was not exploited in earlier works.\nBecause of this feature, we can in fact construct a scheme satisfying Definition 3.1 given any sequence of random variables (Z t ) t∈I : it is sufficient to define Y t = (Z s : s ≤ t).\nRemark 3.2 (Relation to earlier definitions, II). The stochastic process (3.1) is a Doob martingale [Wil91]. Chen and Eldan [CE22] define a 'Doob localization scheme' in terms of a filtration (F t ) t by letting µ t ( • ) := P(x ∈ • |F t ). Of course Eq. (3.1) fits into this definition by letting F t := σ({Y t : t ≤ t}) (the σ-algebra generated by observations up to time t). Viceversa, for standard Borel spaces, any Doob localization scheme can be written in the form (3.1) (just take Y t that generates F t ). More generally, any stochastic localization scheme can be written in the form6 (3.1). We refer also to [EAM22] where this connection was explored in a special case.\nWe will always assume that the observation process is complete in the sense that observing the whole path (Y t ) t∈I gives complete information about x. In other words, any (measurable) set\nA ⊆ R n : µ ∞ (A) := P(x ∈ A|Y t , t ∈ I) ∈ {0, 1} . (3.2) The notation µ ∞ is justified since lim t→∞ µ t (A) = µ ∞ (A) almost surely (by Levy's martingale convergence theorem). Since µ ∞ (A) ∈ {0, 1} for all A, it follows that µ ∞ (A) = 1 x∈A .\nFinally, without loss of generality, we can assume 0 ∈ I with Y 0 independent of all the other random variables.\nWe now make a trivial, yet important remark:\nRemark 3.3. Since x, Y t k , Y t k-1 ,. . . ,Y t 1 , Y 0 forms a Markov Chain, so is the reverse sequence Y 0 , Y t 1 , Y t 2 ,. . . ,Y t k , x.\nThere exists therefore a transition probability P t,t (y|A) = P(Y t ∈ A|Y t = y) indexed by t, t ∈ I ∪ ∞ (with Y ∞ := x). This provides the blueprint for constructing a general sampling scheme:\n1. Discretize (if necessary) the time index set to I m := (t 0 = 0, t 1 , . . . , t m )." }, { "figure_ref": [], "heading": "Construct approximate probability kernels Pt", "publication_ref": [], "table_ref": [], "text": "k ,t k+1 (y k | • ) ≈ P t k ,t k+1 (y k | • ). 3. For each k ∈ {0 . . . , m}, sample y k+1 ∼ Pt k ,t k+1 (y k | • ) . (3.3)\nOf course this procedure yields an algorithm only if the transition probability P t,t (y| • ) can be approximated efficiently for t close to t. In the next sections we will discuss a few special cases.\n4 Ten examples of sampling schemes" }, { "figure_ref": [], "heading": "The isotropic Gaussian process", "publication_ref": [], "table_ref": [], "text": "This is simply the construction of Section 1.3, whose definition we copy here for the readers' convenience (recall that W is a standard Brownian motion)\nY t = tx * + W t , t ∈ I = [0, ∞) (4.1)\nAs we discussed there, this process (Y t ) t∈I satisfies the conditions of an observation process. The SDE (1.17), namely\ndY t = m(Y t ; t) dt + dB t ,(4.2)\nconfirms that it is a Markov process, as anticipated by Remark 3.3. An approximate transition probability can be constructed by a Euler discretization of this SDE. Namely, given a mesh I m := (t 0 = 0, t 1 , . . . , t m ), we compute\nŶ k+1 = Ŷ k+1 + m( Ŷ k ; t k ) δ k + G k δ k , δ k := t k+1 -t k , (4.3) where (G k ) k≥1 ∼ i.i.d. N(0, I n ). The corresponding approximate transition probability is Pt k ,t k+1 (y k |dy k+1 ) = 1 (2πδ k ) n/2 exp - 1 2δ k y k+1 -y k -δ k m(y k ; t k ) 2 2 dy k+1 . (4.4) Improved discretizations are given in [SME20, KAAL22]." }, { "figure_ref": [], "heading": "The anisotropic Gaussian process", "publication_ref": [], "table_ref": [], "text": "An obvious generalization is to allow for non-identity covariance. Namely, for Q : [0, ∞) → S + (n) (the cone of n × n positive semidefinite matrices) we define\nY t = t 0 Q(s)x * ds + t 0 Q(s) 1/2 dW s , t ∈ I = [0, ∞) . (4.5)\nThis satisfies the SDE\ndY t = Q(t)m(Y t ; Ω(t)) dt + Q(t) 1/2 dB t , Ω(t) := t 0 Q(s) ds . (4.6)\nwhere, for\nΩ ∈ S + (n), we let m(y; Ω) := E[x|Ωx + Ω 1/2 G = y] . (4.7)\nThe following Euler discretization can be used to sample:\nŶ k+1 = Ŷ k+1 + Q(t k )m( Ŷ k ; Ω(t k )) δ k + Q(t k ) 1/2 G k δ k , δ k := t k+1 -t k , (4.8)" }, { "figure_ref": [], "heading": "The erasure process", "publication_ref": [], "table_ref": [], "text": "For each i ∈ [n], we let T i be an independent random variable T i ∼ Unif([0, 1]) and set\nY t,i = x i if t ≥ T i , * if t < T i . (4.9)\nIn this case I = [0, 1] and Y t ∈ (R ∪ { * }) n is obtained by 'erasing' independently each coordinate of x with probability 1 -t.\nThe associated sampling algorithm is the standard sequential sampling procedure. Namely, for t ∈ {1, . . . , n}, we sample\nx i(t) ∼ µ x i(t) ∈ • x i(1) , . . . , x i(t-1) .\n(4.10)\nOf course this process can be modified by choosing the revealing times T i to be a deterministic sequence. In that way we obtain sequential sampling with i(1), . . . i(n) any predefined order." }, { "figure_ref": [], "heading": "The binary symmetric process", "publication_ref": [], "table_ref": [], "text": "We next give a (continuous-time) reformulation of the binary sampling scheme of [SDWMG15, HNJ + 21].\nWe assume x ∈ {+1, -1} n , x ∼ µ and set (with the Hadamard product)\nY t = x Z t ,(4.11)\nwhere (Z t ) t∈I , I = [0, 1] is a suitable noise process taking values in {+1, -1} n . Before defining the process, we highlight that Z t,i ∈ {+1, -1} with EZ t,i = P(T i < t) = t. Equivalently\nP(Z t,i = +1) = 1 -P(Z t,i = -1) = 1 + t 2 .\nIn particular Y 0 is uniformly random in {+1, -1} n , and Y 1 = x. In other words, the signal-to-noise ratio becomes larger as t grows from 0 to 1.\nInformally, the process (Z t ) t∈I is defined by the fact that its coordinates are independent and identically distributed with each coordinate defined as follows. Start with Z i,1 = +1, and generate Z i,t proceeding backward in time. For each interval (t -δ, t] replace Z t,i with a fresh random variable independent of the (Z s,i ) s≥t with probability δ/t + o(δ).\nIt is clear from this definition that, for any\nt 1 < t 2 < • • • < t k , Z t k , Z t k-1 . . . , Z t 1 forms a Markov chain, and hence so does x, Y t k , Y t k-1 . . . , Y t 1 .\nFurther, calling T i,1 the first time (proceeding backward from 1) at which Z i,t is resampled, if follows from the definition that P(T i,1 < t) = exp(-[t,1] s -1 ds) = t. In other words, T i,1 is uniformly random in [0, 1], whence Eq. (4.12) immediately follows.\nRemark 4.1. This remark provides a more rigorous definition of the process (Z t ) t∈[0,1] . Let, independently for each i ≤ n, T i,1 > T i,2 > • • • be the arrival times of a Poisson process with density ν(dt) = 1 [0,1] (t)t -1 dt, and let {R i, } ≥1 , R i, ∼ Unif({+1, -1}). Further, define R i,0 = +1, T i,0 = 1. We then set\nZ t,i = R i, ⇔ T i, ≥ t > T i, +1 .\n(4.12)\nRemark 4.2. An equivalent definition is as follows. Let (X s ) s≥0 be continuous random walk in the hypercube started at X 0 = x. Namely, within any interval [s, s + δ), with probability δ + o(δ), coordinate i is replaced independently from the others by a uniformly random variable in {+1, -1}.\nWe then set Y t = X log(1/t) , for t ∈ (0, 1].\nIn agreement with our general Remark 3.3, the process (Y t ) t∈[0,1] is also Markov forward in time. Indeed it is a continuous-time Markov chain initialized at Y 0 ∼ Unif({+1, -1} n ). In the interval [t, t + δ), coordinate i of Y t flips, independently of the others with probability (here y (i) is defined by y\n(i) i = -y i and y (i) j = y j for j ∈ [n] \\ i): P(Y t+δ = y (i) |Y t = y) = p i (y; t) δ + o(δ) .\n(4.13)\nThe transition rates are given by\np i (y; t) = 1 + t 2 2t(1 -t 2 ) - 1 1 -t 2 y i m i (t; y) , (4.14) m i (t; y) := E[x i |Y t = y] . (4.15)" }, { "figure_ref": [], "heading": "The symmetric process", "publication_ref": [], "table_ref": [], "text": "We can generalize the previous process to the case of a q-ary alphabet x i ∈ [q] = {1, . . . , q}, the result being equivalent to the process introduced in [HNJ + 21]. As before, I = [0, 1] and, for each i ∈ [n], we let {T i, } ≥1 be an independent Poisson point process with rate ν(dt) = 1 [0,1] (t)t -1 dt, and {R i, } ≥1 an independent sequence of random variables R i, ∼ Unif([q]). We then set\nY t,i = x i if T i,1 < t i ≤ 1, R i, if T i, +1 < t ≤ T i, .(4.16)\nAs noted before T i,1 ∼ Unif([0, 1]) and therefore (Y t,i ) i≤n are conditionally independent given x, with\nP(Y t,i = y|x) = (1 + (q -1)t)/q if y = x i , (1 -t)/q if y = x i . (4.17)\nAgain, by the general Remark 3.3, the process andy (i,z) ) i = z. Then the transition rates are given by\n(Y t ) t∈[0,1] is a Markov forward in time. For y ∈ [q] n , z ∈ [q] \\ {y i }, let y (i,z) ) j = y j if j = i,\nP(Y t+δ = y (i,z) |Y t = y) = p i (y, z; t) δ + o(δ) ,(4.18)\nP(Y t+δ = y|Y t = y) = 1 - z∈[q]\\{y i } p i (y, z; t) δ + o(δ) ,(4.19)\nwhere By discretizing Eq. (4.23) as in Section 4.1, we can sample Y ∞ , an approximation of Y ∞ = Ax. Note that, unlike for the original construction, once the diffusion process is terminated, we still need to generate x from the Y ∞ = Ax + error, in a way that is robust to sampling errors. Two examples in which this can be done easily:\np i (y, z; t) = 1 qt + 1 1 -t b i (y, z; t) - 1 1 + (q -1)t b i (\n• A has full column rank (in particular, m ≥ n). Then we can output x := A † Y ∞ (with A † := (A T A) -1 A T Y ∞ the pseudoinverse of A).\n• A does not have full column rank (for instance, m ≥ n), but x is structured, for instance is sparse. In this case, we can find x by using compressed sensing techniques, e.g. by solving minimize x 1 , (4.25)\nsubj. to Ax = Y ∞ . (4.26)\nAn alternative, construction would be instead to add noisy linear measurements. In this case time t ∈ N is discrete, Y t ∈ R t and\nY t = (Y 1 , . . . , Y t ) ,(4.27)\nY t = a t , θ + ε t , ε t ∼ N(0, σ 2 ) . (4.28)\nHere a 1 , a 2 , • • • ∈ R n is a sequence of vectors generated with a predefined process (either deterministic or random). The transition probability of this Markov chain is given by\nP(Y t+1 ≤ a|Y t = y) = Φ a -s σ ν t (ds|Y t = y) (4.29)\nwhere Φ(u\n) := u -∞ exp(-v 2 /2)/ √ 2π dv is the standard Gaussian distribution and ν t ( • |Y t = y) is the conditional law of a t+1 , x given Y t = y." }, { "figure_ref": [], "heading": "The information percolation process", "publication_ref": [], "table_ref": [], "text": "Let x ∈ Z m×n be a grayscale image and G m,n = (V m,n , E m,n ) be the two-dimensional grid with vertex set {0, . . . , m}×{0, . . . , n}. For each edge e ∈ E m,n , choose a direction arbitrarily: e = (o, t), and further order the edge set arbitrarily: E m,n = (e(1), . . . , e(N )), e = (o( ), t( )), N = 2mn + m + n. Let Y ( ) = x t(1) -x o(1) , . . . , x t( ) -x o( ) , * , . . . , * (4.30)\nIn words, at time , we revealed the difference of values along the first edges. It is easy to check that this satisfies the conditions of our general construction, indeed it is a simple change of variables of the erasure process of Section 4.3.\nThe transition probabilities are easy to compute\nP Y ( ) +1 = y Y ( ) = P x t( +1) -x o( +1) = y Y ( ) . (4.31)\nIn other words, at each step, one needs to compute the conditional distribution of (x t( +1) -x o( +1) ) given the information graph revealed thus far." }, { "figure_ref": [], "heading": "The Poisson observation process", "publication_ref": [], "table_ref": [], "text": "In this case, we assume that x ∈ R n ≥0 is non-negative, and let Y t ∈ N n , with coordinates conditionally independent given x, and (Y t,k ) t≥0 for each k a Poisson Point Process (PPP) of rate\nx k (Y t,k ) t≥0 x ∼ PPP(x k dt) . (4.32)\nInformally, Y 0,k = 0 and Y t,k is incremented by one in the interval [t, t + dt) independently with probability x k dt. In particular, for each k, t, Y t,k ∼ Poisson(tx k )\nThe transition probabilities are given by \nP Y t+δ = y Y t = y = 1 -δ" }, { "figure_ref": [], "heading": "The half-space process", "publication_ref": [], "table_ref": [], "text": "Let x ∈ R n and {H } ≥1 be a sequence of half spaces in R n . Namely,\nH k := {z ∈ R n : a k , z ≥ b k }, for some a k ∈ R n , b k ∈ R. For ≥ 0, we let Y = 1 x∈H 1 , . . . , 1 x∈H , .(4.35)\nNote that at step , Y is a binary vector of length ." }, { "figure_ref": [ "fig_2", "fig_2", "fig_4" ], "heading": "All of the above", "publication_ref": [], "table_ref": [], "text": "One useful property of the present approach is that it provides a natural way to combine two observation processes in a new one. Namely, given observation processes (Y\n(1)\nt ) t∈I 1 , (Y(2)\nt ) t∈I 2 , with I 1 , I 2 ⊆ R ≥0 , we can combine them by defining\nY t = Y (1) s , Y (2) s : s ≤ t . (4.36)\nFor instance, combining the isotropic diffusion process of Section 4.1 and the erasure process of Section 4.3, we obtain a new observation process in which, at time t a fraction of the coordinates of x is observed without nose, while the others are observed as corrupted by Gaussian noise.\n5 The role of the sampling scheme: Generating from a mixture Consider a mixture of two well-separated Gaussians in n dimensions with centers a 1 , a 2 ∈ R n , and weights p 1 = p, p 2 = 1 -p. For simplicity, we will assume the two Gaussians to have common (known) covariance that therefore we can assume to be equal to identity, and that the overall mean pa 1 + (1 -p)a 2 is known. Therefore the mean can be removed from the data and we are left with the simple model\nµ = p • N((1 -p)a; I n ) + (1 -p) • N(-pa; I n ) . (5.1)\nwhere a := a 1 -a 2 . We will further assume that p ∈ (0, 1) is independent of n, and that the radius of each of these Gaussians (which is of order √ n) is of the same order as the norm a 2 . (These assumptions are mainly introduced for convenience of presentation.)\nIn Figure 1, we display attempts to sample from µ using isotropic diffusions, i.e. the process of Eq. (1.17). We use a = 1, p = 0.7, n = 128. Each row is obtained using a different model for the posterior expectation m(y; t), and reports the histogram of X t , a / a 2 2 obtained by 1000 independent runs of the generation process. Here X t = m(Y t ; t) is the sample generated at time t. These empirical results are compared with the correct distribution X, a / a 2 2 under X ∼ µ.\nAlgorithm 1: Forward function; 2-Layer fully connected denoiser (for Gaussian mixture)\nFunction Forward(x ∈ R n , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 20) s ← Lin 0 (φ) x 1 ← ReLU • Lin 1 (x)) x 2 ← Flatten(s ⊗ x 1 ) x out ← cos(α)x + Lin 2 (x 2 ) ∈ R n return x out\nThe four models used in to generate data in Figure 1 have the same architecture, namely a two-layer fully connected ReLU network with m hidden nodes, three L × 20 linear layers encode time dependence, and a skip connection. Pseudocode for this architecture is given as Algorithm 1, whereby:\n• We encode t in terms of the angle variable α = arctan(1/ √ t).\n• Lin i is a fully connected linear map, with Lin 0 : R 40 → R L , Lin 1 : R n → R m , Lin 2 : R mn → R n .\n• ⊗ denotes tensor (outer) product.\nWe trained on N samples from µ. Parameters where chosen as follows (from top to bottom): (i) N = 5, 000, 500 epochs, L = 3, m = 256; (ii) N = 20, 000, 500 epochs, L = 3, m = 256; (We refer to Appendix B for further details.) It is quite clear that the distribution generated is significantly different from the target one along this one-dimensional projection.\nX t , 1 /n 0 1 2 3 t = 0.21 2 1 0 1 2 X t , 1 /n t = 1.16 2 1 0 1 2 X t , 1 /n t = 644.50\nIn Figure 2 we repeat the experiment keeping the same data generation process and same network architecture, but introducing a small change in the generation process. Given data x 1 , x 2 , . . . , x N , we compute the principal eigenvector v of the empirical covariance Σ := n i=1 x i x T i /n. and the empirical fraction q of samples that have a positive projection onto this vector. Namely, q := #{i ≤ N : x i , v ≥ 0}/N . Further, two distinct denoisers m + (y; t), m -(y; t) are learnt respectively from the samples x i such that x i , v 1 ≥ 0 and from those such that x i , v 1 < 0. The estimated probability q is stored along side the neural networks for m + , m -.\nThe generation process then proceeds as follows:\n1. Sample S ∈ {+1, -1} with probabilities P(S = +1) = q = 1 -P(S = -1).\n2. Generate Y t , t ≥ 0 running the isotropic diffusion process with denoiser m S ( • ; t). Alongside the original data perturbed by Gaussian noise, we reveal v, x for a fixed vector v." }, { "figure_ref": [ "fig_4", "fig_5", "fig_6" ], "heading": "Return", "publication_ref": [], "table_ref": [], "text": "t = 0.21 t = 1.16 t = 644.50 2 1 0 1 2 X t , 1 /n 0 1 2 3 4 t = 0.21 2 1 0 1 2 X t , 1 /n t = 1.16 2 1 0 1 2 X t , 1 /n t = 644.50\nIt is straightforward to see that this is a combination (in the technical sense of Section 4.10) of the isotropic process of Section 4.1 and the half-space process of Section 4.9. A related idea was developed in [MW23]\nFigure 2 demonstrates that the modified process produces a distribution that matches better the target along direction a. While it is likely that similar results could have been obtained without changing the sampling scheme, using a more complex architecture to approximate m(y; t), the new sampling scheme simplifies this task and offers a convenient alternative.\nWhat is the origin of the difficulty in sampling via isotropic diffusions? A simple calculation shows that the posterior expectation takes the form m(y; t) = y 1 + t + a ϕ a, y a 2 ; t , (5.2)\nϕ(s; t) := p(1 -p) 1 + t e (1-p)s 1+t - t(1-p) 2 2(1+t) a 2 -e -ps 1+t + tp 2 2(1+t) a 2 pe (1-p)s 1+t - t(1-p) 2 2(1+t) a 2 + (1 -p)e -ps 1+t + tp 2 2(1+t) a 2 . (5.3)\nIt is easy to see that such a function can be accurately approximated by a ReLU network with one hidden layer. \n; t) =        y + (1 -p)a 1 + t + O(1/n) if a, y a 2 ≥ (1 -2p)t + ∆ , y -pa 1 + t + O(1/n) if a, y a 2 ≤ (1 -2p)t -∆ .\n(5.4)\nFurther, the two behaviors are matched on a window of size Θ(1/ a 2 ) = Θ(1/n) around a, y / a 2 = (1 -2p)t, which corresponds to the midpoint between the two cluster centers, scaled by t. The derivative of ϕ with respect to its first argument in this window is positive and of order n: as a consequence, the evolution along the direction a is highly sensitive to correctly estimating ϕ.\n6 The role of architecture: Images with long range correlations\nIn this section, we illustrate the interplay between sampling process and the network architecture by considering a simple numerical example.\nWe generate synthetic RGB images x i ∈ R 3×w×h , i ∈ {1, . . . , n} according to a simple distribution that is specified in Appendix C with w = h = 512. This distribution results in images that are either mostly blue (with probability 1/2) or mostly red (with probability 1/2), with some smooth variations. Samples generated according to this distribution are shown in Figure 3.\nWe try to learn a generative diffusion model for these images using two slightly different methods: (1) An isotropic diffusion as defined in Section 4.1; (2) A linear observation process as defined in Section 4.6. In both cases, we use a simple 2-layer convolutional network as denoiser.\nBefore providing further details, we point to Figure 4, which presents sample trajectories from the two generating processes7 . Despite the common denoiser architecture, the two generating processes behave very differently. Indeed, the isotropic process is locally correlated but misses global correlations. The linear observation process instead captures these correlations while keeping the same short-ranged denoiser architecture. " }, { "figure_ref": [ "fig_6" ], "heading": "Isotropic diffusion", "publication_ref": [], "table_ref": [], "text": "In our first approach, we train a simple two-layer convolutional neural network to denoise these images, i.e. we attempt to minimize the empirical risk\nRn (θ) = 1 n n i=1 E x i -mθ (tx i + √ tG; t) 2 . (6.1)\nHere expectation is taken with respect to t = tan(α) -2 , α ∼ Unif([0, π/2]), and G ∼ N(0, I 3×w×h ).\nAlgorithm 2: Forward function; 2-Layer CNN denoiser (standard)\nFunction Forward(x ∈ R 3×w×h , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 10) for i ← 0 to 4 do s i ← Lin i (φ) x 1 ← s 0 + s 1 Relu(Conv 1 (x)) ∈ R k×w×h x out ← cos(α) • s 2 x + s 3 x + s 4 Conv 2 (x 1 ) ∈ R 3×w×h return x out\nWe use a simple convolutional network with one hidden layer. Pseudocode for this network is given as Algorithm 2. A few details:\n• We encode t in terms of the angle variable α = arctan(1/ √ t).\n• Lin i is a fully connected linear map.\n• Conv i is a two-dimensional convolution with window size 5. Conv 1 has k = 12 channels and Conv 2 has 3 channels.\n• denotes entrywise product with the PyTorch broadcasting convention.\nSince the convolutional layers have window size 5×5, each output pixel i in mθ (y; t) is a function of a 9 × 9 patch around i in the input y. This appears to result in the short range correlations in the images of Fig. 4." }, { "figure_ref": [ "fig_6" ], "heading": "Linear observation process", "publication_ref": [], "table_ref": [], "text": "In our second approach, we use the linear observation process of Section 4.6, with the simple linear operator L : R 3×w×h → R 3×w×h × R 3 defined as follows. Writing\nx = (x ijl : i ≤ 3, j ≤ w, l ≤ h) for the entries of x Lx = x bx av , x av i := 1 wh j≤w,l≤h x ijl , i ∈ {1, 2, 3} . (6.2)\nIn words, the operator L appends to x the averages of the values in each of the three channels, scaled by a factor b. In our simulations we use b = 2. Roughly speaking, the factor b implies that information about the average x av is revealed a factor b faster than about other coordinates in x.\nAlgorithm 3: Forward function; 2-Layer CNN denoiser (linear obs.)\nFunction Forward(z ∈ R 6×w×h , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 10) for i ← 0 to 4 do s i ← Lin i (φ) x 1 ← s 0 + s 1 Relu(Conv 1 (x)) ∈ R k×w×h x 2 ← cos(α) • s 2 x + s 3 x + s 4 Conv 2 (x 1 ) ∈ R k×w×h z out ← Lx 2 return z out\nThe corresponding generative process (4.23) can be implemented with minimal modifications with respect to the previous section. Namely, we encode Lx as a tensor z ∈ R 6×w×h with 6 channels, whereby channels 4, 5, 6 are constant in space and contain the averages bx av 1 ,bx av 2 , bx av 3 . In the generative process, we add the same noise to all entries in each of these channels.\nThe denoiser in this case is detailed in Algorithm 3 and presents minimal modifications with respect to the one in the previous section:\n• The input has 6 channels instead of 3.\n• Correspondingly, the middle layer has 15 channels instead of 12.\n• At the output we enforce that channels 4, 5, 6 contain the mean of previous ones by applying the operator L.\nAs shown in Figure 4, these minimal modification produce a significantly different behavior. The images generated show stronger long range correlations." }, { "figure_ref": [], "heading": "The role of architecture: Shift-invariant Gaussians", "publication_ref": [], "table_ref": [], "text": "In order to gain more understanding of the example in the previous section, we consider the toy problem of samplig from a centered Gaussian µ = N(0, Σ). We will focus on the case in which Σ is a symmetric circulant matrix. Namely there exists c :\nZ → R, such that Σ i,j = c(i -j) and c(k + n) = c(k), c(k) = c(-k) for all k.\nEquivalently, µ is a centered Gaussian process invariant under shifts.\nAs a running example, we will consider the case Σ = I + α11 T . In other words, for g 0 ∼ N(0, 1) independent of g ∼ N(0, I n ), we have\nx = √ αg 0 1 + g . (7.1)\nThe condition number is κ(Σ) = (1 + nα).\nIt is worth reminding that a standard sampling procedure would be to generate g ∼ N(0, I), and then let x = Σ -1/2 g. We will make no attempt to beat this simple method, and will instead study the behavior of other approaches on this example." }, { "figure_ref": [], "heading": "Sampling via isotropic Gaussian diffusions", "publication_ref": [], "table_ref": [], "text": "The posterior expectation defined in Eq. (1.11) is a linear function of y. A simple calculation yields m(y; t) = A t y , A t := (I + tΣ) -1 Σ , (\nand therefore Y t satisfies the simple SDE\ndY t = A t Y t dt + dG t .(7.3)\nOf course, Y t is normal with mean zero and covariance t 2 Σ + tI. Letting X t := m(Y t ; t), we have X t ∼ N(0, Σ t ) where\nΣ t = (1 + tΣ) -1 tΣ 2 . (7.4)\nwhence an accurate approximation8 of µ is achieved for t 1/λ min (Σ). Note that accurate discretization requires stepsize δλ max (Σ) 1, and therefore the total number of iterations will scale as the condition number κ(Σ) := λ max (Σ)/λ min (Σ).\nIn general, the denoiser m(y; t) will be replaced by an approximation. How does architecture of the denoise impact the generated distribution? Since the distribution µ is Gaussian and shiftinvariant, it is natural to use a convolutional linear denoiser. However, we will constrain the convolution window size to be 2r + 1 n. Namely we use a matrix in L(r, n), where\nL(r, n) := M ∈ R n×n : M i,j = M (i -j) , M (k) = M (-k) = M (n + k)∀k , M (k) = 0∀|k| > r .(7.5)\nWe learn such a denoiser by minimizing the mean square error:\nA (r) t := arg min E x -A(tx + √ tg) 2 2 : A ∈ L(r, n) . (7.6)\nA simple calculation reveals (A (r) t ) i,j = t (|i -j|) where ( t (u)) -r≤u≤r solves\nt (u) + t r v=-r c(u -v) t (v) = c(u) ,(7.7)\nwith t (-u) = t (u). Given a solution of this equation we can determine the distribution of Y t (by integrating Eq. (7.3) whereby A t is replaced by A (r) t ) and hence the distribution of X t = L (r) t Y t . It follows from the symmetries of the problem that X t ∼ N(0, Σ gen t ) where Σ gen t is a symmetric circulant matrix. We limit ourselves to giving the results of this calculation when the correlation structure is given by (7.1). We get lim t→∞ Σ gen t = Σ gen where Σ gen ij = c gen (i -j) and\nc gen ( ) = 1 n q∈Bn ĉgen (q) e iq , B n := q = 2πk n : -(n/2) + 1 ≤ k ≤ (n/2) ,(7.8)\nĉgen (q) = F (ν(q), c 0 ) , (7.9)\nc 0 := 1 1 + (2r + 1)α\n, ν(q) := sin(q(r + 1/2)) (2r + 1) sin(q/2) , (7.10)\nwhere F : R × R → R is defined in Appendix D. The only fact that we will use is that x → F (x; c 0 ) is differentiable at x = 1, with F (1; c 0 ) = 1/c 0 , F (1; c 0 ) > 0.\nWe claim that the the generated distribution µ gen = N(0, Σ gen ) is very far from the target one µ = N(0, Σ). The fundamental reason for this is that -as in the numerical example of the last section-the measure µ has long range correlations (indeed E µ (x i x j ) = α > 0 for any i = j) while the finite width convolutional denoiser cannot produce such long-range correlations.\nThese remarks are formalized by the statement below.\nProposition 7.1. For any fixed r ∈ N, α > 0, let µ n = N(0, Σ n ) be the Gaussian measure, with covariance Σ n = I n + α1 n 1 T n , and denote by µ gen n,r be the generative distribution produced by the diffusion sampler with convolutional denoiser of window size 2r + 1.\nThen we have, for all (2r + 1) ≤ n/8 and nα ≥ 4\nW 2 (µ n , µ gen n,r ) ≥ 1 2 √ nα ,(7.11)\nlim n→∞ µ n -µ gen n,r TV = 1 .\n(7.12)\nProof. For any coupling γ of µ n ,µ gen n,r , letting (x, x gen ) ∼ γ, we have\nE x -x gen 2 2 1/2 ≥ 1 √ n E x -x gen , 1 2 1/2 ≥ 1 √ n E x, 1 2 1/2 - 1 √ n E x gen , 1 2 1/2 = 1 n 1, Σ n 1 - 1 n 1, Σ gen n,r 1 .\nHere the first inequality follows from Cauchy-Schwarz and the second is triangular inequality. On the other hand, using the above formulas To prove Eq. (7.12), define the random variables Z := x, 1 /n, Z gen := x gen , 1 /n. By the above calculation we have Z ∼ N(0, α + n -1 ), Z gen ∼ N(0, (1 + (2r + 1)α)n) and therefore µ n -µ gen n,r TV ≥ P Z -P Z gen TV → 1 .\n1, Σ n 1 = αn 2 + n ,(7.13\n(7.16)\nThe notion that the measure µ gen has only short range correlations can be easily made more precise. A simple calculation shows that correlations only extend to distances of order r, see Appendix D." }, { "figure_ref": [], "heading": "Sampling via the linear observation process", "publication_ref": [], "table_ref": [], "text": "We consider the same linear observation process as in our numerical experiments of Section 6.2 (with obvious adaptations). Namely, the observation process is defined by\nY t = Y 0,t Y * ,t , Y 0,t = bt n x, 1 + B 0,t , Y * ,t = t x + B * ,t ,(7.17)\nwhere {(B 0,t , B * ,t )} t≥0 is an (n + 1)-dimensional Brownian motion. This corresponds to the general construction of Section 4.6, whereby the matrix L is given by\nL =         b/n b/n b/n • • • b/n 1 0 0 • • • 0 0 1 0 • • • 0 • • • • • • • • • • • • • • 0 0 0 • • • 1         . (7.18)\nWe follow the general approach of Section 4.6 for sampling, namely\ndY t = m L (Y t ; t)dt + dG t ,(7.19)\nwhere now m L ( • ; t) : R n+1 → R n , and (G t ) g≥0 is an (n + 1)-dimensional Brownian motion, and m L (y; t) = E[Lx|Y t = y]. In fact we know that the correct m is linear (since the distribution of x is Gaussian) and shift invariant (since the distribution of x shift-invariant). It is understood that -th shift acts on\nR n+1 = R × R n via S        z 0 z 1 z 2 . . . z n        =        z 0 z 1+ z 2+ . . . z        . (7.20)\nIn other words, we can always consider (writing y = (y 0 , y * ))\nm L (y 0 , y * ; t) =: m 0 (y 0 , y * ; t) m * (y 0 , y * ; t) = d t c t 1 T a t 1 A t • y 0 y * ,(7.21)\nwhere A t is a circulant matrix and a t , c t , d t are scalars.\nWith the objective of understanding the experiments of Section 6.2, we attempt to approximate the optimal A t using finite-window covolutions:\na (r) t , c (r) t , d (r) t , A (r) t := arg min E Lx - d c1 T a1 A • Y 0,t Y * ,t2\n2 : A ∈ L(r, n) . (7.22)\nwhere Y 0,t , Y * ,t are distributed as specified above.\nTo simplify calculations, rather than explicitly solving the above quadratic optimization problem, we will guess a good feasible solution and check whether it gives the desired probability approximation. Our guess will be m0 (y 0 , y * ; t) = αb 2 y 0 1 + αb 2 t , (7.23) m * (y 0 , y *\n; t) = 1 1 + t y * - αby 0 1 + αb 2 t 1 + αby 0 1 + αb 2 t 1 . (7.24)\nThe rationale for this choice is as follows. Recall that x = √ αg 0 1 + g and therefore letting z = Lx, we have z 0 = √ αbg 0 + (G 1 / √ n) with (g 0 , G 1 ) independent standard normals. Therefore, the proposed m0 (y 0 , y * ; t) is the Bayes optimal estimator for z 0 given Y 0,t up to terms O(1/n). As for the m * (y 0 , y * ; t), notice that (for i ≥ 1) z i -(z 0 /b) ∼ N(0, 1). Hence, if z 0 was known, the optimal estimator for z i would be (z 0 /b) + (y i -z 0 /b)/(1 + t). The estimator (7.24) replaces z 0 by its estimate given by Eq. (7.23).\nProposition 7.2. Let µ n = N(0, Σ n ) be the Gaussian measure, with covariance Σ n = I n + α1 n 1 T n , and denote by μgen,t n the distribution generated by the (continuous time) linear observation process (with estimators of Eq. (7.23), (7.24)) at time t.\nThen:\nlim t→∞ W 2 (µ n , μgen,t n ) = lim t→∞ µ n -μgen,t n TV = 0 . (7.25)\nSubstituting in the above, we have:\nD(P P ) = T 0 n i=1 E ∆ p i (Y t ; t) pi (Y t ; t) dt . (A.16)\nIt is immediate to generalize this formula for the symmetric process of Section 4.5.\nAnother special case is given by the Poisson process of Section 4.8. In this case, the KL divergence is \nD(P P ) = T 0 n i=1 E ∆ m i (Y t ; t)" }, { "figure_ref": [], "heading": "C Sampling images: Omitted technical details C.1 The distribution over images", "publication_ref": [], "table_ref": [], "text": "Here we define the distribution over images that was used in the experiments of Section 6. It is convenient to recast an image x ∈ R 3×w×h as x = (x(i 1 , i 2 )) i 1 ≤w-1,i 2 ≤h-1 where x(i 1 , i 2 ) ∈ R 3 . In other words, x(i 1 , i 2 ) is the RGB encoding of pixel i 1 , i 2 . We then set\nx(i 1 , i 2 ) = tanh(ψ(i 1 , i 2 )) , ψ(i 1 , i 2 ) = ψ 0 + ψ 1 cos(q 1 i 1 + q 2 i 2 ) , (C.1)\nand generate a random image by drawing (ψ 0 , ψ 1 , q) randomly (here q = (q 1 , q 2 )).\nMore specifically, in our experiments we took these three vectors to be independent with ψ 0 = (1.95, 0, 0.05) with probability 1/2, (0.05, 0, 1.95) with probability 1/2, (C.2)\nψ 1 ∼ N 0, (1/16)I 3 , (C.3) q = 4π w U 1 , 4π h U 2 , U 1 , , U 2 ∼ Unif([0, 1]) . (C.4)" }, { "figure_ref": [], "heading": "C.2 Some details of the training", "publication_ref": [], "table_ref": [], "text": "We used stochastic gradient descent with batch size 4 over n = 300 samples (images generated according to the model in the previous section), for 100 epochs. While these samples are kept fixed, the noise vector G and signal-to-noise ratio t (cf. Eq. (6.1)) are resampled independently at each SGD sample." }, { "figure_ref": [], "heading": "D Shift-invariant Gaussians: Omitted derivations D.1 General formulas for isotropic diffusions", "publication_ref": [], "table_ref": [], "text": "The optimal convolution with window size 2r + 1 is obtained by solving Eq. (7.7). This results in t (u) = Again, at any t, we get X t ∼ N(0, Σ X t ) where Σ X t is a circulant matrix with (by construction), with eigenvalue decomposition Σ X t = q∈Bn σ X t (q)φ q φ * q , (φ q ) = 1 √ n e iq (D.5)\nB n := q = 2πk n : -(n/2) + 1 ≤ k ≤ (n/2) , .\n(D.6)\nAs t → ∞, we get σ X t (q) → σ X (q), where σ X (q) = F (ν(q); c 0 ) , (D.7) c 0 := 1 1 + (2r + 1)α , ν(q) := sin(q(r + 1/2)) (2r + 1) sin(q/2) , (D. For q → 0 we have the Taylor expansion σ X (q) = F (1; c 0 ) -1 6 F (1; c 0 ) r(r + 1) q 2 + O(q 4 ) . (D.12)\nDefine the second moment correlation length for the generated process The generated process Y 0,t satisfies dY We note that the right-hand side has the target distribution µ n , while the left-hand side is Gaussian for every t (because Y t is Gaussian as pointed out above). Therefore the claim follows from the following standard fact. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I would like to thank Marc Laugharn for a stimulating collaboration that motivated the present report." }, { "figure_ref": [], "heading": "A Loss functions", "publication_ref": [], "table_ref": [], "text": "Sampling schemes based on diffusions or stochastic localization reduce the problem of sampling from the target distribution µ to the one of estimating the transition probabilities P t,t+δ (y, A) = P Y t+δ ∈ A Y t = y (A.1)\nIf Pt,t+δ are estimated transition probabilities, then it is useful to measure the error due to inaccurate estimate via the Kullback-Leibler (KL) divergence between the two processes (a.k.a. relative entropy):\nHere, with an abuse of notation, we denote by P , P the probability measures over the paths Y T 0 induced by the above transition probabilities. Throughout we will assume Y 0 to have the same distribution under measures P and P as to simplify our formulas.\nIt is useful to keep in mind two uses the KL metric. From a theoretical point of view, it can be used to bound the distance between the target distribution µ, and the one produced by the algorithm that uses the estimated transition probabilities, call it μ. Indeed, by the data processing inequality, we have\nThe second use is as a loss function for estimating the transition rates P . Given data (x i ) i≤N ∼ iid µ, we generate realizations (Y i,t ) i≤n,t≥0 of the observation process. We can then estimate a parametric model Pt,t+δ (y, θ; • ) by minimizing the empirical risk\nThe purpose of this appendix is to collect known explicit formulas for the general KL divergence (A.3). In particular, we will cover all examples detailed in the previous sections." }, { "figure_ref": [], "heading": "A.1 Gaussian observation processe", "publication_ref": [], "table_ref": [], "text": "Consider the Gaussian process of Section 4.2, which we stop at time T . We use the estimated drift m(y; Ω) to generate\nAn immediate application of Girsanov's theorem yileds\nThis of course includes the standard diffusion of Section 4.1 as a special case. Also, the linear information process of Section 4.6 also fits this framework if we reinterpret m(Y t ; t) as m A (Y t ; t)." }, { "figure_ref": [], "heading": "A.2 Discrete time Markov chains", "publication_ref": [], "table_ref": [], "text": "Assume t ∈ I := {0, 1, . . . , T }. We denote the transition probabilities by P t (y; A) := P(Y t+1 ∈ A|Y t = y) and Pt (y; A) := P(Y t+1 ∈ A|Y t = y).\nIf Pt (y; • ) has a density with respect to P t (y; • ), then we have:\nFor instance this is the case for the erasure process of Section 4.3. In this case, the transition probabilities are estimates of the conditional laws µ x i(t) ∈ • x i(1) , . . . , x i(t-1)\nSimilarly, for the information percolation process of Section 4.7, we have As a special case, we have the symmetric process of Section 4.4 to generate x ∈ {+1, -1} n . Recall that in this case, we need an estimate mi (t; y) of the conditional expectation m i (t; y) = E[x i |x ⊗ Z t = y] (where Z t,i ∈ {+1, -1}, EZ t,i = t). We use it to form the probabilities pi (y; t) = 1 + t 2 2t(1 -t 2 ) -1 1 -t 2 y i mi (t; y) .\n(A.15)" } ]
Diffusions are a successful technique to sample from high-dimensional distributions can be either explicitly given or learnt from a collection of samples. They implement a diffusion process whose endpoint is a sample from the target distribution and whose drift is typically represented as a neural network. Stochastic localization is a successful technique to prove mixing of Markov Chains and other functional inequalities in high dimension. An algorithmic version of stochastic localization was introduced in [EAMS22], to obtain an algorithm that samples from certain statistical mechanics models. This notes have three objectives: (i) Generalize the construction [EAMS22] to other stochastic localization processes; (ii) Clarify the connection between diffusions and stochastic localization. In particular we show that standard denoising diffusions are stochastic localizations but other examples that are naturally suggested by the proposed viewpoint; (iii) Describe some insights that follow from this viewpoint.
Sampling, Diffusions, and Stochastic Localization
[ { "figure_caption": "mk (t; y) + o(δ) , (4.33) P Y t+δ,k = y k + 1 Y t = y = δ m k (t; y) + o(δ) . (4.34) wehre, as before, m k (t; y) := E[X k |Y t = y].", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: Generating from a mixture of two Gaussians in n = 128 dimensions using isotropic diffusions We compare the empirical distribution of the projection along the direction of the means difference, with the correct distribution. Each row corresponds to a different model for the posterior mean, and each column to a different time in the generation process.", "figure_data": "", "figure_id": "fig_2", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "X T = m S (Y T ; T ) for some large T .", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure2: Generating from a mixture of two Gaussians in n = 128 dimensions. The setting and network architecture are the same as in Fig.1, although the generating process is different. Alongside the original data perturbed by Gaussian noise, we reveal v, x for a fixed vector v.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Sample images from the synthetic distribution used in for experiments in Section 6. See Appendix C.1 for a full definition.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Learning the distribution of Figure 3 and sampling via stochastic localization. Upper block: Standard isotropic diffusion. Lower block: A linear observation process. Each row corresponds to an independent realization of the generating process, with time progressing from left to right.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": ")1, Σ gen n,r 1 = ĉgen (0)n = F (1, c 0 )n = (1 + (2r + 1)α)n . (7.14)Substituting in the above, and using the definition of Wasserstein distance, we haveW 2 (µ n , µ gen n,r ) ≥ √ nα + 1 -(2r + 1)α + 1 , (7.15)and the claim (7.11) follows by a simple calculation.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "example Σ = I + α11 T , we obtain, for b r,α := 1 + (2r + 1)α, t)(1 + b α,r t) , (D.3) t (j) = α (1 + t)(1 + b α,r t)for 1 ≤ |j| ≤ r. (D.4)", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fs) 2(1-ν) (c + s) 2ν ds . (D.9) For any c ∈ (0, 1) (which is the case when c = c 0 defined above), the function ν → F (ν; c) is strictly positive, continuously differentiable and convex. Further F (1; c) = 1/c, F (0; c) = 1 and, writing F (ν; c) for the derivative of F with respect to ν s ≥ 0, c ∈ (0, 1), 1 ≤ log((1 + s)/(c + s)) ≤ log(1/c), whence 2 c ≤ F (1; c) ≤ 2 c log(1/c) . (D.11)", "figure_data": "", "figure_id": "fig_9", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "ξ 22(r) := lim n→∞ 1≤i,j≤n Σ X i,j d n (i, j) 2 (r) ≤ r(r + 1) log(1 + (2r + 1)α) 3 . (D.15)D.2 Proof of Proposition 7.2", "figure_data": "", "figure_id": "fig_10", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Lemma D. 1 .1Let (Z k ) k≥1 be a sequence of Gaussian vectors and assume Z k → Z ∞ almost surely, wehre Z ∞ is a non-degenerate Gaussian vector. Then, denoting by ν k the law of Z k , we havelim k→∞ W 2 (ν k , ν ∞ ) = 0, , lim k→∞ ν k -ν ∞ TV = 0 . (D.26)", "figure_data": "", "figure_id": "fig_11", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "For a fixed matrix A ∈ R m×n and an m-dimensional standard Brownian motion (B t ) t≥0 , we observe [MW23]Y t = t Ax + B t ,(4.22)where x ∼ µ. By the same argument in Section 4.1, Y t satisfies the SDEdY t = m A (Y t ; t) dt + dB t ,(4.23)where m A (Y t ; t) is the minimum mean square error estimator of Ax", "figure_data": "4.6 The linear observation processm A (y; t) = E Ax tAx +√tG = y .(4.24)(4.21)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "mi (Y t ; t) dt .We used stochastic gradient descent with batch size 50 over a number of epochs and samples that changes depending on the row of Figures1, 2. A fixed number of samples is generated, while the noise G and signal-to-noise ratio t are resampled independently at each SGD sample. At training time we sample t = tan(α) -2 with α ∼ Unif([0, π/2]). At generation time we use a Euler discretization with K equi-spaced values of α, K ∈ {200, 400} and check that results are insensitive to the value of K.", "figure_data": "(A.17)B Sampling Mixtures: Omitted technical details", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "In particular, since (Y 0,t ) t≥0 is Gaussian and independent of (B i,t ) i≥1,t≥0 , for any fixed t, Y * ,t = (Y i,t ) 1≤i≤n is centered Gaussian. Indeed its covariance takes the form Σ Y ,t = c 1 (t)I + c 2 (t)11 T for suitable constants c 1 (t), c 2 (t).Further letting G 0 be defined as per Eq. (D.18), we have the almost sure limit -1 dB i,s . We note that the (G i ) 0≤i≤n is a collection of i.i.d. standard normal random variables and, using the representation (D.20), almost surely", "figure_data": "lim t→∞0t1 (1 + s) 2 •αbs 1 + αb 2 sY 0,s ds = =∞ α G 0 . 0 (1 + s) 2 • 1 √√α G 0 ds(D.21) (D.22)Define G i :=∞ 0 (1 + s) lim t→∞1 1 + tY i,t =√α G 0 + G i .(D.23)Finally, definingX t = m * (Y 0,t , Y * ,t ; t) =1 1 + tY * ,t +αbt (1 + αb 2 t)(1 + t)Y 0,t 1 .(D.24)and using again Eqs. (D.18) and D.23, we obtainlim t→∞X t =√α G 0 1 + G * .(D.25)0,t =αb 2 Y 0,t 1 + αb 2 tdt + dB 0,t ,(D.16)which is easily integrated to yieldY 0,t =0t1 + αb 2 t 1 + αb 2 sdB 0,t ,(D.17)In particular, there exists a standard normal random variable G 0 such that the following limit holdsalmost surelylim n→∞αb 2 Y 0,t 1 + αb 2 t=√αb 2 G 0 .(D.18)Next consider the generated process Y i,t for any i ≥ 1:dY i,t =1 1 + tY i,t dt +αbt (1 + αb 2 t)(1 + t)Y 0,t dt + dB i,t ,(D.19)which yieldsY i,t =0t1 + t (1 + s) 2 •αbs 1 + αb 2 sY 0,s ds +0t1 + t 1 + sdB i,s ,(D.20)", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Andrea Montanari
[ { "authors": "Jacob Austin; Jonathan Daniel D Johnson; Daniel Ho; Rianne Tarlow; Van Den; Berg", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Structured denoising diffusion models in discrete state-spaces", "year": "2021" }, { "authors": "S Michael; Eric Albergo; Vanden-Eijnden", "journal": "", "ref_id": "b1", "title": "Building normalizing flows with stochastic interpolants", "year": "2022" }, { "authors": " Bergmans", "journal": "IEEE Transactions on Information Theory", "ref_id": "b2", "title": "Random coding theorem for broadcast channels with degraded components", "year": "1973" }, { "authors": " ", "journal": "", "ref_id": "b3", "title": "", "year": "" }, { "authors": "Andrew Campbell; Joe Benton; Valentin De Bortoli; Thomas Rainforth; George Deligiannidis; Arnaud Doucet", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b4", "title": "A continuous time framework for discrete denoising models", "year": "2022" }, { "authors": "Yuansi Chen; Ronen Eldan", "journal": "IEEE", "ref_id": "b5", "title": "Localization schemes: A framework for proving mixing bounds for Markov Chains", "year": "2022" }, { "authors": "Ting Chen; Ruixiang Zhang; Geoffrey Hinton", "journal": "", "ref_id": "b6", "title": "Analog bits: Generating discrete data using diffusion models with self-conditioning", "year": "2022" }, { "authors": "Jacob Deasy; Nikola Simidjievski; Pietro Liò", "journal": "", "ref_id": "b7", "title": "Heavy-tailed denoising score matching", "year": "2021" }, { "authors": "Ahmed El ; Alaoui ; Andrea Montanari", "journal": "IEEE Transactions on Information Theory", "ref_id": "b8", "title": "An information-theoretic view of stochastic localization", "year": "2022" }, { "authors": "Ahmed El Alaoui; Andrea Montanari; Mark Sellke", "journal": "IEEE", "ref_id": "b9", "title": "Sampling from the sherrington-kirkpatrick gibbs measure via algorithmic stochastic localization", "year": "2022" }, { "authors": "Ronen Eldan", "journal": "", "ref_id": "b10", "title": "Taming correlations through entropy-efficient measure decompositions with applications to mean-field approximation", "year": "2013" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b11", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Emiel Hoogeboom; Didrik Nielsen; Priyank Jaini; Patrick Forré; Max Welling", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b12", "title": "Argmax flows and multinomial diffusion: Learning categorical distributions", "year": "2021" }, { "authors": "G Ulrich; Etienne Haussmann; Pardoux", "journal": "The Annals of Probability", "ref_id": "b13", "title": "Time reversal of diffusions", "year": "1986" }, { "authors": "Tero Karras; Miika Aittala; Timo Aila; Samuli Laine", "journal": "Springer", "ref_id": "b14", "title": "Elucidating the design space of diffusion-based generative models", "year": "1977" }, { "authors": " Lwy +", "journal": "", "ref_id": "b15", "title": "", "year": "" }, { "authors": "Xingchao Liu; Lemeng Wu; Mao Ye", "journal": "", "ref_id": "b16", "title": "Learning diffusion bridges on constrained domains", "year": "2023" }, { "authors": "Chenlin Meng; Kristy Choi; Jiaming Song; Stefano Ermon; Andrea Montanari; Yuchen Wu", "journal": "", "ref_id": "b17", "title": "Concrete score matching: Generalized score matching for discrete data", "year": "2022" }, { "authors": "Eliya Nachmani; Robin San Roman; Lior Wolf; ; Herbert Robbins", "journal": "", "ref_id": "b18", "title": "An empirical bayes approach to statistics", "year": "1956" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "PMLR", "ref_id": "b19", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Yang Song; Stefano Ermon", "journal": "Advances in neural information processing systems", "ref_id": "b20", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b21", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": " Ssdk +", "journal": "", "ref_id": "b22", "title": "", "year": "" }, { "authors": "Yang Song; Jascha Sohl-Dickstein; P Diederik; Abhishek Kingma; Stefano Kumar; Ben Ermon; Poole", "journal": "", "ref_id": "b23", "title": "Score-based generative modeling through stochastic differential equations", "year": "2021" }, { "authors": "", "journal": "", "ref_id": "b24", "title": "SYD +", "year": "" }, { "authors": "Haoran Sun; Lijun Yu; Bo Dai; Dale Schuurmans; Hanjun Dai", "journal": "", "ref_id": "b25", "title": "Score-based continuous-time discrete diffusion models", "year": "2022" }, { "authors": "", "journal": "VKS +", "ref_id": "b26", "title": "", "year": "" }, { "authors": "Clement Vignac; Igor Krawczuk; Antoine Siraudin; Bohan Wang; Volkan Cevher; Pascal Frossard", "journal": "Cambridge University Press", "ref_id": "b27", "title": "Digress: Discrete denoising diffusion for graph generation", "year": "1991" }, { "authors": "Mao Ye; Lemeng Wu; Qiang Liu", "journal": "", "ref_id": "b28", "title": "First hitting diffusion models for generating manifold, graph and categorical data", "year": "2022" } ]
[ { "formula_coordinates": [ 2, 193.83, 574.17, 346.17, 10.67 ], "formula_id": "formula_0", "formula_text": "dY t = F (t, Y t ) dt + g(t) dB t , Y 0 ∼ ν( • ) ,(1.4)" }, { "formula_coordinates": [ 2, 190.16, 643.08, 349.84, 9.57 ], "formula_id": "formula_1", "formula_text": "g(t) = g(s(t)) |s (t)| . (1.6)" }, { "formula_coordinates": [ 3, 278.71, 110.74, 54.57, 14.7 ], "formula_id": "formula_2", "formula_text": "µ Y t = µ Z s(t) ." }, { "formula_coordinates": [ 3, 242.04, 204.47, 297.96, 20.19 ], "formula_id": "formula_3", "formula_text": "dZ s = -Z s ds + √ 2 dB s . (1.8)" }, { "formula_coordinates": [ 3, 184.23, 267.65, 243.55, 14.95 ], "formula_id": "formula_4", "formula_text": "Z s d = e -s x + 1 -e -2s G G ∼ N(0, I d ) ⊥ ⊥ x ." }, { "formula_coordinates": [ 3, 192.89, 369.8, 347.11, 24.43 ], "formula_id": "formula_5", "formula_text": "∇ z log µ Z s (z) = 1 1 -e -2s E[e -s x|Z s = z] -z (1.10)" }, { "formula_coordinates": [ 3, 169.13, 426.62, 273.74, 20.03 ], "formula_id": "formula_6", "formula_text": "m(y; t) := E[x|t x + √ tG = y] , (x, G) ∼ µ ⊗ N(0 I n ) ." }, { "formula_coordinates": [ 3, 178.08, 493.34, 361.92, 55.15 ], "formula_id": "formula_7", "formula_text": "F (t, y) = - 1 + t t(1 + t) y + 1 t(1 + t) m t(1 + t)y; t , (1.12) g(t) = 1 t(1 + t)" }, { "formula_coordinates": [ 3, 72, 559.59, 468, 57.15 ], "formula_id": "formula_8", "formula_text": "Y 0 ∼ N(0, I d ) is such that Y ∞ ∼ µ. Explicitly dY t = - 1 + t t(1 + t) Y t + 1 t(1 + t) m t(1 + t)Y t ; t dt + 1 t(1 + t) dB t . (1.14)" }, { "formula_coordinates": [ 4, 265.33, 364.98, 274.67, 10.67 ], "formula_id": "formula_9", "formula_text": "Y t = t x * + W t ,(1.15)" }, { "formula_coordinates": [ 4, 204.14, 513.93, 335.86, 50.31 ], "formula_id": "formula_10", "formula_text": "µ t (dx) = 1 Z µ(dx) exp - 1 2t Y t -tx 2 2 = 1 Z µ(dx) e Y t,x -1 2t x 2 , (1.16)" }, { "formula_coordinates": [ 5, 242.93, 194.94, 126.14, 10.67 ], "formula_id": "formula_11", "formula_text": "dY t = m(Y t ; t) dt + dB t ." }, { "formula_coordinates": [ 5, 169.13, 248.14, 370.87, 20.03 ], "formula_id": "formula_12", "formula_text": "m(y; t) := E[x|t x + √ tG = y] , (x, G) ∼ µ ⊗ N(0 I n ) . (1.18)" }, { "formula_coordinates": [ 5, 258.66, 397.21, 94.68, 10.67 ], "formula_id": "formula_13", "formula_text": "Y t = t(1 + t) Y t ." }, { "formula_coordinates": [ 6, 72, 504.79, 468, 38.75 ], "formula_id": "formula_14", "formula_text": "t 1 < t 2 < • • • < t k ∈ I, sequence of random variables x, Y t k , Y t k-1 ,. . . ,Y t 1 forms a Markov chain. Namely the conditional distribution P(Y t i-1 ∈ • |x, Y t i , . . . Y t k ) coincides with the probability distribution P(Y t i-1 ∈ • |Y t i )." }, { "formula_coordinates": [ 6, 252.61, 584.87, 287.39, 10.63 ], "formula_id": "formula_15", "formula_text": "µ t ( • ) := P(x ∈ • |Y t ) . (3.1)" }, { "formula_coordinates": [ 7, 72, 316.9, 468, 74.28 ], "formula_id": "formula_16", "formula_text": "A ⊆ R n : µ ∞ (A) := P(x ∈ A|Y t , t ∈ I) ∈ {0, 1} . (3.2) The notation µ ∞ is justified since lim t→∞ µ t (A) = µ ∞ (A) almost surely (by Levy's martingale convergence theorem). Since µ ∞ (A) ∈ {0, 1} for all A, it follows that µ ∞ (A) = 1 x∈A ." }, { "formula_coordinates": [ 7, 72, 447.93, 468, 25.23 ], "formula_id": "formula_17", "formula_text": "Remark 3.3. Since x, Y t k , Y t k-1 ,. . . ,Y t 1 , Y 0 forms a Markov Chain, so is the reverse sequence Y 0 , Y t 1 , Y t 2 ,. . . ,Y t k , x." }, { "formula_coordinates": [ 7, 85.33, 564.51, 454.67, 58.18 ], "formula_id": "formula_18", "formula_text": "k ,t k+1 (y k | • ) ≈ P t k ,t k+1 (y k | • ). 3. For each k ∈ {0 . . . , m}, sample y k+1 ∼ Pt k ,t k+1 (y k | • ) . (3.3)" }, { "formula_coordinates": [ 8, 225.39, 168.27, 314.62, 10.67 ], "formula_id": "formula_19", "formula_text": "Y t = tx * + W t , t ∈ I = [0, ∞) (4.1)" }, { "formula_coordinates": [ 8, 242.93, 230.84, 297.07, 10.67 ], "formula_id": "formula_20", "formula_text": "dY t = m(Y t ; t) dt + dB t ,(4.2)" }, { "formula_coordinates": [ 8, 72, 305.39, 468, 97.04 ], "formula_id": "formula_21", "formula_text": "Ŷ k+1 = Ŷ k+1 + m( Ŷ k ; t k ) δ k + G k δ k , δ k := t k+1 -t k , (4.3) where (G k ) k≥1 ∼ i.i.d. N(0, I n ). The corresponding approximate transition probability is Pt k ,t k+1 (y k |dy k+1 ) = 1 (2πδ k ) n/2 exp - 1 2δ k y k+1 -y k -δ k m(y k ; t k ) 2 2 dy k+1 . (4.4) Improved discretizations are given in [SME20, KAAL22]." }, { "formula_coordinates": [ 8, 169.02, 487.21, 370.98, 28.58 ], "formula_id": "formula_22", "formula_text": "Y t = t 0 Q(s)x * ds + t 0 Q(s) 1/2 dW s , t ∈ I = [0, ∞) . (4.5)" }, { "formula_coordinates": [ 8, 147.53, 545.18, 392.47, 28.58 ], "formula_id": "formula_23", "formula_text": "dY t = Q(t)m(Y t ; Ω(t)) dt + Q(t) 1/2 dB t , Ω(t) := t 0 Q(s) ds . (4.6)" }, { "formula_coordinates": [ 8, 123.27, 584.87, 416.73, 34.72 ], "formula_id": "formula_24", "formula_text": "Ω ∈ S + (n), we let m(y; Ω) := E[x|Ωx + Ω 1/2 G = y] . (4.7)" }, { "formula_coordinates": [ 8, 120.91, 655.75, 419.09, 13.44 ], "formula_id": "formula_25", "formula_text": "Ŷ k+1 = Ŷ k+1 + Q(t k )m( Ŷ k ; Ω(t k )) δ k + Q(t k ) 1/2 G k δ k , δ k := t k+1 -t k , (4.8)" }, { "formula_coordinates": [ 9, 253.46, 123.38, 286.54, 26.89 ], "formula_id": "formula_26", "formula_text": "Y t,i = x i if t ≥ T i , * if t < T i . (4.9)" }, { "formula_coordinates": [ 9, 219.56, 227.44, 172.88, 11.22 ], "formula_id": "formula_27", "formula_text": "x i(t) ∼ µ x i(t) ∈ • x i(1) , . . . , x i(t-1) ." }, { "formula_coordinates": [ 9, 272.82, 377.92, 267.18, 10.67 ], "formula_id": "formula_28", "formula_text": "Y t = x Z t ,(4.11)" }, { "formula_coordinates": [ 9, 207.2, 431.01, 197.59, 24.43 ], "formula_id": "formula_29", "formula_text": "P(Z t,i = +1) = 1 -P(Z t,i = -1) = 1 + t 2 ." }, { "formula_coordinates": [ 9, 72, 553.08, 468, 25.23 ], "formula_id": "formula_30", "formula_text": "t 1 < t 2 < • • • < t k , Z t k , Z t k-1 . . . , Z t 1 forms a Markov chain, and hence so does x, Y t k , Y t k-1 . . . , Y t 1 ." }, { "formula_coordinates": [ 9, 227.41, 694.55, 157.19, 10.77 ], "formula_id": "formula_31", "formula_text": "Z t,i = R i, ⇔ T i, ≥ t > T i, +1 ." }, { "formula_coordinates": [ 10, 130.08, 185.94, 276.32, 40.64 ], "formula_id": "formula_32", "formula_text": "(i) i = -y i and y (i) j = y j for j ∈ [n] \\ i): P(Y t+δ = y (i) |Y t = y) = p i (y; t) δ + o(δ) ." }, { "formula_coordinates": [ 10, 208.5, 259.31, 331.5, 43.64 ], "formula_id": "formula_33", "formula_text": "p i (y; t) = 1 + t 2 2t(1 -t 2 ) - 1 1 -t 2 y i m i (t; y) , (4.14) m i (t; y) := E[x i |Y t = y] . (4.15)" }, { "formula_coordinates": [ 10, 226.15, 417.64, 313.85, 27.03 ], "formula_id": "formula_34", "formula_text": "Y t,i = x i if T i,1 < t i ≤ 1, R i, if T i, +1 < t ≤ T i, .(4.16)" }, { "formula_coordinates": [ 10, 196.47, 493.78, 343.53, 26.89 ], "formula_id": "formula_35", "formula_text": "P(Y t,i = y|x) = (1 + (q -1)t)/q if y = x i , (1 -t)/q if y = x i . (4.17)" }, { "formula_coordinates": [ 10, 72, 538.74, 468, 25.78 ], "formula_id": "formula_36", "formula_text": "(Y t ) t∈[0,1] is a Markov forward in time. For y ∈ [q] n , z ∈ [q] \\ {y i }, let y (i,z) ) j = y j if j = i," }, { "formula_coordinates": [ 10, 167.42, 588.33, 372.59, 13.27 ], "formula_id": "formula_37", "formula_text": "P(Y t+δ = y (i,z) |Y t = y) = p i (y, z; t) δ + o(δ) ,(4.18)" }, { "formula_coordinates": [ 10, 184.01, 610.29, 355.99, 23.65 ], "formula_id": "formula_38", "formula_text": "P(Y t+δ = y|Y t = y) = 1 - z∈[q]\\{y i } p i (y, z; t) δ + o(δ) ,(4.19)" }, { "formula_coordinates": [ 10, 169.93, 663.49, 234.31, 24.43 ], "formula_id": "formula_39", "formula_text": "p i (y, z; t) = 1 qt + 1 1 -t b i (y, z; t) - 1 1 + (q -1)t b i (" }, { "formula_coordinates": [ 11, 88.4, 331.65, 451.61, 26.51 ], "formula_id": "formula_40", "formula_text": "• A has full column rank (in particular, m ≥ n). Then we can output x := A † Y ∞ (with A † := (A T A) -1 A T Y ∞ the pseudoinverse of A)." }, { "formula_coordinates": [ 11, 271.08, 425.55, 268.92, 10.67 ], "formula_id": "formula_41", "formula_text": "subj. to Ax = Y ∞ . (4.26)" }, { "formula_coordinates": [ 11, 224.92, 492.23, 315.08, 10.67 ], "formula_id": "formula_42", "formula_text": "Y t = (Y 1 , . . . , Y t ) ,(4.27)" }, { "formula_coordinates": [ 11, 228.73, 507.46, 311.27, 13.13 ], "formula_id": "formula_43", "formula_text": "Y t = a t , θ + ε t , ε t ∼ N(0, σ 2 ) . (4.28)" }, { "formula_coordinates": [ 11, 188.1, 569.82, 351.9, 24.43 ], "formula_id": "formula_44", "formula_text": "P(Y t+1 ≤ a|Y t = y) = Φ a -s σ ν t (ds|Y t = y) (4.29)" }, { "formula_coordinates": [ 11, 72, 599.25, 467.99, 33.21 ], "formula_id": "formula_45", "formula_text": ") := u -∞ exp(-v 2 /2)/ √ 2π dv is the standard Gaussian distribution and ν t ( • |Y t = y) is the conditional law of a t+1 , x given Y t = y." }, { "formula_coordinates": [ 12, 175.72, 197.14, 364.28, 11.25 ], "formula_id": "formula_46", "formula_text": "P Y ( ) +1 = y Y ( ) = P x t( +1) -x o( +1) = y Y ( ) . (4.31)" }, { "formula_coordinates": [ 12, 72, 317.38, 468, 34.23 ], "formula_id": "formula_47", "formula_text": "x k (Y t,k ) t≥0 x ∼ PPP(x k dt) . (4.32)" }, { "formula_coordinates": [ 12, 189.78, 419.63, 139.3, 10.81 ], "formula_id": "formula_48", "formula_text": "P Y t+δ = y Y t = y = 1 -δ" }, { "formula_coordinates": [ 12, 72, 525.19, 468, 47.83 ], "formula_id": "formula_49", "formula_text": "H k := {z ∈ R n : a k , z ≥ b k }, for some a k ∈ R n , b k ∈ R. For ≥ 0, we let Y = 1 x∈H 1 , . . . , 1 x∈H , .(4.35)" }, { "formula_coordinates": [ 12, 449.36, 652.27, 65.85, 15.63 ], "formula_id": "formula_50", "formula_text": "t ) t∈I 1 , (Y(2)" }, { "formula_coordinates": [ 12, 242.7, 687.99, 297.3, 14.19 ], "formula_id": "formula_51", "formula_text": "Y t = Y (1) s , Y (2) s : s ≤ t . (4.36)" }, { "formula_coordinates": [ 13, 195.95, 249.31, 344.05, 10.75 ], "formula_id": "formula_52", "formula_text": "µ = p • N((1 -p)a; I n ) + (1 -p) • N(-pa; I n ) . (5.1)" }, { "formula_coordinates": [ 13, 88.94, 417.36, 199.76, 93.49 ], "formula_id": "formula_53", "formula_text": "Function Forward(x ∈ R n , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 20) s ← Lin 0 (φ) x 1 ← ReLU • Lin 1 (x)) x 2 ← Flatten(s ⊗ x 1 ) x out ← cos(α)x + Lin 2 (x 2 ) ∈ R n return x out" }, { "formula_coordinates": [ 14, 89.26, 280.42, 428.93, 82.52 ], "formula_id": "formula_54", "formula_text": "X t , 1 /n 0 1 2 3 t = 0.21 2 1 0 1 2 X t , 1 /n t = 1.16 2 1 0 1 2 X t , 1 /n t = 644.50" }, { "formula_coordinates": [ 15, 89.26, 235, 428.93, 127.95 ], "formula_id": "formula_55", "formula_text": "t = 0.21 t = 1.16 t = 644.50 2 1 0 1 2 X t , 1 /n 0 1 2 3 4 t = 0.21 2 1 0 1 2 X t , 1 /n t = 1.16 2 1 0 1 2 X t , 1 /n t = 644.50" }, { "formula_coordinates": [ 15, 146.35, 623.24, 393.65, 40.09 ], "formula_id": "formula_56", "formula_text": "ϕ(s; t) := p(1 -p) 1 + t e (1-p)s 1+t - t(1-p) 2 2(1+t) a 2 -e -ps 1+t + tp 2 2(1+t) a 2 pe (1-p)s 1+t - t(1-p) 2 2(1+t) a 2 + (1 -p)e -ps 1+t + tp 2 2(1+t) a 2 . (5.3)" }, { "formula_coordinates": [ 16, 170.56, 220.75, 288.71, 54.22 ], "formula_id": "formula_57", "formula_text": "; t) =        y + (1 -p)a 1 + t + O(1/n) if a, y a 2 ≥ (1 -2p)t + ∆ , y -pa 1 + t + O(1/n) if a, y a 2 ≤ (1 -2p)t -∆ ." }, { "formula_coordinates": [ 18, 197.4, 134.04, 342.61, 33.71 ], "formula_id": "formula_58", "formula_text": "Rn (θ) = 1 n n i=1 E x i -mθ (tx i + √ tG; t) 2 . (6.1)" }, { "formula_coordinates": [ 18, 88.94, 223.38, 300.78, 96.02 ], "formula_id": "formula_59", "formula_text": "Function Forward(x ∈ R 3×w×h , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 10) for i ← 0 to 4 do s i ← Lin i (φ) x 1 ← s 0 + s 1 Relu(Conv 1 (x)) ∈ R k×w×h x out ← cos(α) • s 2 x + s 3 x + s 4 Conv 2 (x 1 ) ∈ R 3×w×h return x out" }, { "formula_coordinates": [ 18, 72, 591.94, 468, 63.25 ], "formula_id": "formula_60", "formula_text": "x = (x ijl : i ≤ 3, j ≤ w, l ≤ h) for the entries of x Lx = x bx av , x av i := 1 wh j≤w,l≤h x ijl , i ∈ {1, 2, 3} . (6.2)" }, { "formula_coordinates": [ 19, 88.94, 93.56, 294.83, 109.57 ], "formula_id": "formula_61", "formula_text": "Function Forward(z ∈ R 6×w×h , α ∈ [0, π/2]): φ ← (cos(α • i), sin(α • i); i ≤ 10) for i ← 0 to 4 do s i ← Lin i (φ) x 1 ← s 0 + s 1 Relu(Conv 1 (x)) ∈ R k×w×h x 2 ← cos(α) • s 2 x + s 3 x + s 4 Conv 2 (x 1 ) ∈ R k×w×h z out ← Lx 2 return z out" }, { "formula_coordinates": [ 19, 72, 531.99, 468, 23.12 ], "formula_id": "formula_62", "formula_text": "Z → R, such that Σ i,j = c(i -j) and c(k + n) = c(k), c(k) = c(-k) for all k." }, { "formula_coordinates": [ 19, 266.71, 607, 273.29, 19.02 ], "formula_id": "formula_63", "formula_text": "x = √ αg 0 1 + g . (7.1)" }, { "formula_coordinates": [ 20, 251.74, 169.71, 288.26, 10.67 ], "formula_id": "formula_65", "formula_text": "dY t = A t Y t dt + dG t .(7.3)" }, { "formula_coordinates": [ 20, 255.33, 226.75, 284.67, 13.13 ], "formula_id": "formula_66", "formula_text": "Σ t = (1 + tΣ) -1 tΣ 2 . (7.4)" }, { "formula_coordinates": [ 20, 72, 361.42, 473.84, 29.64 ], "formula_id": "formula_67", "formula_text": "L(r, n) := M ∈ R n×n : M i,j = M (i -j) , M (k) = M (-k) = M (n + k)∀k , M (k) = 0∀|k| > r .(7.5)" }, { "formula_coordinates": [ 20, 163.73, 419.25, 376.27, 22.71 ], "formula_id": "formula_68", "formula_text": "A (r) t := arg min E x -A(tx + √ tg) 2 2 : A ∈ L(r, n) . (7.6)" }, { "formula_coordinates": [ 20, 228.07, 478.04, 311.93, 33.26 ], "formula_id": "formula_69", "formula_text": "t (u) + t r v=-r c(u -v) t (v) = c(u) ,(7.7)" }, { "formula_coordinates": [ 20, 125.56, 606.43, 414.44, 29.64 ], "formula_id": "formula_70", "formula_text": "c gen ( ) = 1 n q∈Bn ĉgen (q) e iq , B n := q = 2πk n : -(n/2) + 1 ≤ k ≤ (n/2) ,(7.8)" }, { "formula_coordinates": [ 20, 145.31, 659.42, 91.94, 24.43 ], "formula_id": "formula_71", "formula_text": "c 0 := 1 1 + (2r + 1)α" }, { "formula_coordinates": [ 21, 267.47, 254.27, 272.53, 25.44 ], "formula_id": "formula_72", "formula_text": "W 2 (µ n , µ gen n,r ) ≥ 1 2 √ nα ,(7.11)" }, { "formula_coordinates": [ 21, 158.96, 337.01, 293.58, 85.12 ], "formula_id": "formula_73", "formula_text": "E x -x gen 2 2 1/2 ≥ 1 √ n E x -x gen , 1 2 1/2 ≥ 1 √ n E x, 1 2 1/2 - 1 √ n E x gen , 1 2 1/2 = 1 n 1, Σ n 1 - 1 n 1, Σ gen n,r 1 ." }, { "formula_coordinates": [ 21, 189.53, 468.24, 345.82, 13.13 ], "formula_id": "formula_74", "formula_text": "1, Σ n 1 = αn 2 + n ,(7.13" }, { "formula_coordinates": [ 22, 163.7, 137.75, 376.3, 24.77 ], "formula_id": "formula_75", "formula_text": "Y t = Y 0,t Y * ,t , Y 0,t = bt n x, 1 + B 0,t , Y * ,t = t x + B * ,t ,(7.17)" }, { "formula_coordinates": [ 22, 224.34, 212.18, 315.66, 77.97 ], "formula_id": "formula_76", "formula_text": "L =         b/n b/n b/n • • • b/n 1 0 0 • • • 0 0 1 0 • • • 0 • • • • • • • • • • • • • • 0 0 0 • • • 1         . (7.18)" }, { "formula_coordinates": [ 22, 240.54, 333, 299.46, 10.74 ], "formula_id": "formula_77", "formula_text": "dY t = m L (Y t ; t)dt + dG t ,(7.19)" }, { "formula_coordinates": [ 22, 156.36, 396.06, 383.64, 94.57 ], "formula_id": "formula_78", "formula_text": "R n+1 = R × R n via S        z 0 z 1 z 2 . . . z n        =        z 0 z 1+ z 2+ . . . z        . (7.20)" }, { "formula_coordinates": [ 22, 168.74, 525.97, 371.26, 27.2 ], "formula_id": "formula_79", "formula_text": "m L (y 0 , y * ; t) =: m 0 (y 0 , y * ; t) m * (y 0 , y * ; t) = d t c t 1 T a t 1 A t • y 0 y * ,(7.21)" }, { "formula_coordinates": [ 22, 86.85, 618.65, 315.84, 28.51 ], "formula_id": "formula_80", "formula_text": "a (r) t , c (r) t , d (r) t , A (r) t := arg min E Lx - d c1 T a1 A • Y 0,t Y * ,t2" }, { "formula_coordinates": [ 22, 398.46, 629.73, 141.54, 20.35 ], "formula_id": "formula_81", "formula_text": "2 : A ∈ L(r, n) . (7.22)" }, { "formula_coordinates": [ 23, 227.14, 126.01, 312.86, 24.43 ], "formula_id": "formula_82", "formula_text": "; t) = 1 1 + t y * - αby 0 1 + αb 2 t 1 + αby 0 1 + αb 2 t 1 . (7.24)" }, { "formula_coordinates": [ 23, 197.01, 322.58, 343, 15.72 ], "formula_id": "formula_83", "formula_text": "lim t→∞ W 2 (µ n , μgen,t n ) = lim t→∞ µ n -μgen,t n TV = 0 . (7.25)" }, { "formula_coordinates": [ 28, 189.67, 91.85, 350.33, 33.71 ], "formula_id": "formula_84", "formula_text": "D(P P ) = T 0 n i=1 E ∆ p i (Y t ; t) pi (Y t ; t) dt . (A.16)" }, { "formula_coordinates": [ 28, 185.58, 181.06, 163.43, 33.71 ], "formula_id": "formula_85", "formula_text": "D(P P ) = T 0 n i=1 E ∆ m i (Y t ; t)" }, { "formula_coordinates": [ 28, 147.94, 475.54, 392.06, 11.65 ], "formula_id": "formula_86", "formula_text": "x(i 1 , i 2 ) = tanh(ψ(i 1 , i 2 )) , ψ(i 1 , i 2 ) = ψ 0 + ψ 1 cos(q 1 i 1 + q 2 i 2 ) , (C.1)" }, { "formula_coordinates": [ 28, 193.44, 569.96, 346.56, 39.63 ], "formula_id": "formula_87", "formula_text": "ψ 1 ∼ N 0, (1/16)I 3 , (C.3) q = 4π w U 1 , 4π h U 2 , U 1 , , U 2 ∼ Unif([0, 1]) . (C.4)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b8", "b48", "b20", "b22", "b8", "b26", "b4", "b50", "b2", "b4", "b24", "b4", "b4", "b50", "b56", "b30", "b34", "b28", "b50", "b8", "b26", "b8", "b11", "b20", "b48", "b26", "b8", "b11" ], "table_ref": [], "text": "Gradient Boosting Decision Tree (GBDT) is one of the most widely used machine learning models and has been applied in numerous domains, including medicine, finance, climate science, and healthcare. GBDT is particularly popular in modeling tabular data [Chen and Guestrin, 2016;Shwartz-Ziv and Armon, 2022;Gorishniy et al., 2021;Grinsztajn et al., 2022]. In the training process of a GBDT model, * Equal Contribution.\nwe need to construct decision trees one by one. During tree construction, we need to determine the best split and the split finding algorithm is one of the most crucial components in GBDT. In the standard implementation [Friedman, 2001;Chen and Guestrin, 2016;Ke et al., 2017], the best split is chosen based on the reduction in loss (decrease in impurity) of all candidate splits in all features. However, this split finding algorithm has long been criticized for its bias towards features that exhibit more potential splits [Breiman et al., 1984;Strobl et al., 2007;Boulesteix et al., 2012;Nicodemus, 2011]. Due to the increased flexibility afforded by a larger number of potential split points, features with higher cardinality (such as continuous features and features with a large number of categories) have a higher probability of being split than features with lower cardinality (such as binary features). This bias introduces two problems in GBDT:\n• Interpretability issue. The gain importance [Breiman et al., 1984;Hastie et al., 2001] in GBDT sums up the total reduction of loss in all splits for a given feature, and is frequently used to explain how influential a feature is on the models' predictions. However, gain importance is not reliable due to its bias towards features with high cardinality [Breiman et al., 1984]. As we illustrate in Example 1 in Section 4, a continuous feature independent of the target may have higher gain importance than a binary feature related to the target.\n• Overfitting issue. During tree construction, the split finding algorithm biases towards choosing features with high cardinality [Breiman et al., 1984;Strobl et al., 2007]. Moreover, the split finding algorithm uses training set statistics to determine the best split and does not evaluate the generalization performance of each split.\nExisting studies to address the bias problems mostly fall into two categories: 1) they propose a post hoc approach to calculate unbiased or debiased feature importance measurement [Zhou and Hooker, 2021;Li et al., 2019], and 2) they propose new tree building algorithms by redesigning split finding algorithms [Loh and Shih, 1997;Kim and Loh, 2001;Loh, 2009;Strobl et al., 2007]. However, these methods mostly focus on random forests, and cannot generalize to GBDT. One of the main reasons is that, different from most random forest implementations, existing GBDT implementations employ the second-order approximation of the objective arXiv:2305.10696v1 [cs.LG] 18 May 2023 function to evaluate split-improvement [Chen and Guestrin, 2016;Ke et al., 2017] (see more detailed discussions in the related work section). Since popular GBDT implementations, such as XGBoost [Chen and Guestrin, 2016] and Cat-Boost [Dorogush et al., 2018], have been dominating tabular data modeling [Gorishniy et al., 2021;Shwartz-Ziv and Armon, 2022], there is an urgent need to address the interpretability and overfitting issues caused by the bias in GBDT.\nTo study the causes of the bias in GBDT, we conduct a finegrained analysis, which reveals that the bias originates from: 1) the systematic bias in each split's gain estimation. We discover that the calculation of gain is a biased estimation of the split improvement, and is almost always positive. 2) The bias in the split finding algorithm due to the fact that it evaluates the split improvement and determines the best split using the same set of data. According to the analysis, first, we construct an unbiased measurement of feature importance for GBDT by using out-of-bag samples. This new measurement is unbiased in the sense that features with no predictive power for the target variable has an importance score of zero in expectation. Next, we incorporate the unbiased property into the split finding algorithm during tree construction and propose UnbiasedGBM. Compared with existing GBDT implementations (such as LightGBM [Ke et al., 2017], XGBoost [Chen and Guestrin, 2016], and CatBoost [Dorogush et al., 2018]), UnbiasedGBM has two advantages:\n1. The split finding algorithm unbiasedly chooses among features with different cardinality to mitigate overfitting. 2. UnbiasedGBM evaluates the generalization performance of each split and performs leaf-wise earlystopping to avoid overfitting splits. The contributions of this paper are summarized as follows:\n1. We propose unbiased gain, an unbiased measurement of feature importance in GBDT to address the interpretability issue due to the bias in the split finding algorithm. 2. We propose UnbiasedGBM by integrating the unbiased property into the split finding algorithm to mitigate overfitting. 3. We provide a large-scale empirical study comprising 60 datasets to show that: 1) UnbiasedGBM exhibits better performance on average than LightGBM, XGBoost, and Catboost, and 2) unbiased gain achieves better average performance in feature selection than gain importance, permutation feature importance, and SHAP importance." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b44", "b46", "b40", "b30", "b56", "b46", "b40", "b56", "b34", "b28", "b50", "b50" ], "table_ref": [], "text": "Existing methods to correct the bias in the split finding algorithm fall primarily into two categories: 1) they propose a new method to compute debiased or unbiased feature importance measurement. 2) They propose new tree construction algorithms by redesigning the split finding algorithm.\nThere has been a line of work to develop new methods for computing debiased or unbiased feature importance. Quinlan [Quinlan, 1986] proposed information gain ratio to overcome the bias in classification trees. Sandri and Zuccolotto [Sandri and Zuccolotto, 2008] decomposed splitimprovement into the reduction in loss and a positive bias.\nThey used a pseudo dataset to estimate and subtract the bias. Nembrini et al. [Nembrini et al., 2018] then improved the computing efficiency of this approach. Li et al. [Li et al., 2019] proposed a debiased feature importance measure. However, their method still yields biased results. Zhou and Hooker [Zhou and Hooker, 2021] proposed an unbiased measurement of feature importance in random forests. Nonetheless, the theoretical analysis relies on using mean squared error to justify the unbiased property of their method and cannot be generalized to GBDT, which often employs different loss functions for tree construction. In this paper, we propose unbiased gain, an unbiased measurement of feature importance in GBDT. Our method enjoys several advantages compared with previous methods: 1) Our method does not generate pseudo data that incurs additional cost as in Sandri and Zuccolotto [Sandri and Zuccolotto, 2008] and Nembrini et al. [Nembrini et al., 2018]. 2) Our method can be easily used in GBDT implementations and has the theoretical guarantee of being unbiased, whereas Zhou and Hooker [Zhou and Hooker, 2021] cannot generalize to GBDT.\nThere has been another line of works that develop new tree building algorithms to remove the bias, such as QUEST [Loh and Shih, 1997], CRUISE [Kim and Loh, 2001], GUIDE [Loh, 2009], and cforest [Strobl et al., 2007]. However, these methods cannot generalize to GBDT for a variety of reasons. For example, QUEST, CRUISE, and GUIDE use classification trees, whereas GBDT uses regression trees for both classification and regression tasks and supports various loss functions. cforest [Strobl et al., 2007] separates the variable selection and the splitting procedure to remove the bias. However, this method incurs an excessive amount of computational overhead, as variable selection is typically costly. We are the first to integrate the unbiased property into GBDT and develop UnbiasedGBM to address the overfitting problem caused by the bias in GBDT." }, { "figure_ref": [], "heading": "Background", "publication_ref": [ "b8", "b26", "b8" ], "table_ref": [], "text": "We briefly introduce the GBDT model in which the second order approximation is used in the training (e.g., XGBoost [Chen and Guestrin, 2016], LightGBM [Ke et al., 2017]). Note that we formulate the GBDT objective under the population distribution as opposed to the traditional formulation of GBDT utilizing the empirical distribution [Chen and Guestrin, 2016;Li, 2012]. This formulation is essential, and it allows us to examine and comprehend the bias in GBDT." }, { "figure_ref": [], "heading": "Gradient Boosting Decision Trees", "publication_ref": [], "table_ref": [], "text": "Consider the dataset D = {(x, y)}, where (x, y) are independent and identically distributed from an unknown distribution T . A tree ensemble model uses K additive functions to model the distribution T and predict the output:\nŷ = φ(x) = K k=1 f k (x), f k ∈ F,\nwhere F = {f (x) = w q(x) } is the space of regression trees. Here q represents the tree structure and q(x) maps an example x to the leaf index. We construct the tree ensemble in an additive manner to minimize the objective function\nL(φ) = E x,y [l(φ(x), y)]\n. Let φ t be the model at the t-th iteration and ŷt be the corresponding prediction. We greedily add a new regression tree f t that most improves the objective function L(φ t-1 + f t ). This is achieved by using the secondorder approximation:\nL(φ t-1 + f t ) ≈ E x,y [l(ŷ t-1 , y) + g(x, y)f t (x) + 1 2 h(x, y)f t (x) 2 ],\nwhere\ng(x, y) = ∂l(φ t-1 (x), y) ∂φ t-1 (x) , h(x, y) = ∂ 2 l(φ t-1 (x), y) (∂φ t-1 (x)) 2 .\nWe can simplify the objective function by removing the constant terms:\nL(φ t-1 + f t ) = E x,y g(x, y)f t (x) + 1 2 h(x, y)f t (x) 2 .\nFor a leaf node I in the tree structure, the loss L(I) contributed by the leaf is\nL(I) = E x,y 1 {q(x)=I} g(x, y)f (x) + 1 2 h(x, y)f (x) 2 = E x,y 1 {q(x)=I} g(x, y)w I + 1 2 h(x, y)w 2 I = P (x ∈ I) µ g (I)w I + 1 2 µ h (I)w 2 I ,\nwhere\nµ g (I) = E x,y [g(x, y)] and µ h (I) = E x,y [h(x, y)].\nWe can calculate the optimal weight w I of leaf I by\nw I = - µ g (I) µ h (I)\nand compute the corresponding optimal loss by\nL(I) = - 1 2 µ g (I) 2 µ h (I) P (x ∈ I).(1)\nConsider a split on feature X j at a splitting point s, which results in two child nodes I L = {(x, y) x j ≤ s} and I R = {(x, y) x j > s}. The gain of the split θ = (j, s) is defined as the reduction in loss:\nGain(I, θ) = L(I) -L(I L ) -L(I R ).\n(2)\nIn practice, the distribution T is usually unknown, therefore we cannot directly calculate µ g (I) and µ h (I). Instead, we use the training dataset to estimate µ g (I) and µ h (I). \nL(I) = - 1 2 1 n I ∑ i∈I g i 2 1 n I ∑ i∈I h i n I n = - 1 2n G 2 I H I ,(3)\nGain(I, θ) = 1 2n G 2 L H L + G 2 R H R - G 2 I H I ,(4)\nwhere G I = ∑ i∈I g i , H I = ∑ i∈I h i , and n I is the number of samples on node I." }, { "figure_ref": [], "heading": "Gain Importance", "publication_ref": [ "b4", "b24" ], "table_ref": [], "text": "Gain importance [Breiman et al., 1984;Hastie et al., 2001], also known as mean decrease in impurity, is a kind of feature importance in tree-based methods. It is frequently used to explain how influential a feature is on the model's predictions. Gain importance is calculated by summing up the split gain in Eq 4 of all the splits for each feature respectively." }, { "figure_ref": [], "heading": "Analysis of Bias in GBDT", "publication_ref": [], "table_ref": [], "text": "We analyze the bias in GBDT and demonstrate that it stems from the systematic bias in the gain estimation and the bias in the split finding algorithm. We show how this bias might lead to serious interpretability and overfitting problems in GBDT." }, { "figure_ref": [], "heading": "Bias in The Gain Estimation", "publication_ref": [], "table_ref": [], "text": "Gain estimates the reduction in loss of a given split on a feature, which is used in both tree construction and the interpretation of a feature's importance in the tree ensemble model. Intuitively, we would like the Gain to be unbiased, i.e., it should be zero in expectation when randomly splitting on a feature that is independent of the target. However, Gain is always non-negative for any split on any feature.\nTheorem 1. For a dataset (X, Y ) sampled from a distribution T , for any split θ of node I on a given feature X j , we always have\nGain(I, θ) ≥ 0.\nAccording to the theorem, the split gain for a random split on a feature independent of the target is almost always positive (the split gain is zero in very rare cases, see the proof and more discussions in Appendix A). This implies that 1) we may split on an uninformative feature, and 2) a positive split gain does not necessarily indicate that the feature contributes to the model. One of the reasons causing this bias is that,\n( 1 n ∑ n i=1 g i ) 2\nis not an unbiased estimation of µ 2 g :\nE D ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ 1 n I i∈I g i 2⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E D ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ 1 n 2 I i,j∈I,i≠j 2g i g j ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ + E D 1 n 2 I i∈I g 2 i = n I -1 n I µ g (I) 2 + 1 n I µ g (I) 2 + σ g (I) 2 = µ g (I) 2 + 1 n I σ g (I) 2 .\nFor an uninformative feature that is independent of the target, any split on the feature yields µ g\n(I) = µ g (I L ) = µ g (I R ) and σ g (I) = σ g (I L ) = σ g (I R\n). Consider a regression problem with the MSE loss where the hessian is always a constant, according to Eq. 3 we have hence the split gain on the uninformative feature is\nE D L(I) = 1 2n E D [n I ] µ g (I) 2 + σ g (I) 2 , X1X2\nE D Gain(I, θ) = E D L(I L ) + E D L(I R ) -E D L(I) = 1 2n σ g (I) 2 + E D [n L + n R -n I ] µ g (I) 2 = 1 2n σ g (I) 2 ≥ 0." }, { "figure_ref": [ "fig_1" ], "heading": "Bias in The Split Finding Algorithm", "publication_ref": [], "table_ref": [], "text": "One of the main challenges in tree learning is to find the optimal split that maximize the reduction of the loss, as shown in Eq. 2. To do this, a split finding algorithm iterates over candidate splits on all features to identify the optimal split that minimizes loss on the training dataset. This strategy for identifying the optimal split introduces two problems in tree learning: 1) the split finding algorithm favors features with high cardinality (such as continuous features or categorical features with many categories). Higher cardinality features have a greater number of candidate splits, and thus a greater likelihood of being split.\n2) The split finding algorithm always selects the best split on the training set, without evaluating the generalization performance of each split. The two problems together lead to the overfitting problem in GBDT.\nWe use an example to illustrate how these two problems adversely affect tree learning.\nExample 1. We generate a synthetic dataset, so that X 1 is a binary feature, X 2 is a categorical feature with 6 categories (each category has equal probability), and X 3 ∼ N (0, 1) is continuous. Consider a regression problem with y = 0.1X 1 + where ∼ N (0, 1). We train a GBDT on the synthetic dataset and plot the gain importance of each feature in Figure 1(a). We can see that the importance of X 2 and X 3 is larger than that of X 1 , even if X 2 and X 3 are independent with the target variable. This shows that GBDT overfits on the noise due to the bias in the split finding algorithm. In addition, this bias introduces interpretability issues, as X 2 and X 3 are more important than X 1 based on the gain importance." }, { "figure_ref": [], "heading": "Our Method", "publication_ref": [], "table_ref": [], "text": "To solve the interpretability issue caused by the bias in GBDT, we propose \"unbiased gain\", an unbiased measurement of feature importance in Section 5.1. Then we incorporate the unbiased property into the split finding algorithm and propose UnbiasedGBM in Section 5.2 to address the issue of overfitting caused by the bias in GBDT." }, { "figure_ref": [ "fig_1" ], "heading": "Unbiased Gain", "publication_ref": [], "table_ref": [], "text": "Our earlier analysis revealed that there are two sources of the bias in gain importance. First, gain importance biases towards features with high cardinality due to the split finding algorithm. Second, gain importance is always non-negative due to the biased estimation of Eq 2. Our goal is to propose an unbiased measurement of feature importance (unbiased gain). This new measurement is unbiased in a sense that an uninformative feature will receive an importance score of zero in expectation.\nIn order to design an unbiased measurement of feature importance, we need to eliminate two sources of bias in the current gain importance measurement mentioned above. The intuitive rationale for the first source of bias is that we should not determine the best split and assess its performance using the same set of data. Therefore, a separate validation set is considered to estimate the gain importance. However, directly computing gain importance using the validation set still suffers from the second source of bias. Therefore, we construct a new form of estimation using the validation set that meets the zero-expectation criterion.\nAssume we have a training dataset D = {(x i , y i )} and a validation dataset \nD ′ = {(x ′ i , y ′ i )}.\nμg (I) = 1 n I G I = 1 n I q(xi)=I g i . Then, we randomly select k examples from n ′ I validation examples, where k = min(n ′ L , n ′ R ).\nNext, we estimate µ g (I) and µ h (I) using k randomly selected validation examples\nμ′ g (I) = 1 k G ′ I = 1 k q(x ′ i )=I g ′ i ⋅ δ(I, i), μ′ h (I) = 1 k H ′ I = 1 k q(x ′ i )=I h ′ i ⋅ δ(I, i),\nwhere δ(I, i) is a binary indicator showing whether a validation sample has been selected. Finally we can calculate the loss of leaf node I by\nL(I) = 1 2 μg (I) ⋅ μ′ g (I) μ′ h (I) ⋅ n I n = - 1 2n G I ⋅ G ′ I H ′ I .\nHere, G I is computed using the training set while G ′ I and H ′ I are computed using the validation set. We can also calculate L(I L ) and L(I R ) in a similar way (the number of selected validation example k is the same for I, I L , and I R ).\nFinally, the unbiased gain is calculated as\nGain ub (I, θ) = L(I) -L(I L ) -L(I R ).\n(5)\nTheorem 2. For a feature X j , a leaf node I, and a split θ, if X j is marginally independent of y within the region defined by the leaf node I, then\nE D ′ Gain ub (I, θ) = 0.\nA critical design in the unbiased gain is that, instead of estimating µ g (I), µ g (I L ), and µ g (I R ) using all the validation examples on node I, I L , and I R , we randomly select k examples from node I, I L , and I R respectively for estimation. This design is critical for the unbiased property of Eq 7 (see the proof of Theorem 2 and more explanations in Appendix B)\nThe unbiased gain we propose serves as a post hoc method to address the interpretability issue. In Figure 1(b), we plot the unbiased gain of the GBDT trained on the synthetic data. We can see that the unbiased gain correctly assigns X 1 with the highest importance, and the importance of X 2 and X 3 is zero in expectation." }, { "figure_ref": [], "heading": "UnbiasedGBM", "publication_ref": [], "table_ref": [], "text": "We propose UnbiasedGBM to address the overfitting problem introduced by the bias in GBDT: 1) The choice of each split biases towards features with high cardinality. 2) We always choose the best split on the training set, without evaluating the generalization performance of each split.\nIn order to eliminate these two biases, we need two validation sets. Assume we divide the training set into a subtraining set D and two validation sets D ′ 1 and D ′ 2 . Unbi-asedGBM eliminates the bias by redesigning the split finding algorithm. The design is conceptually simple but requires a good understanding of the bias in GBDT. First, we calculate the gain of each split Gain 1 in the original fashion using the sub-training set D. We determine the best split of each feature using Gain 1 of each split. Next, we calculate the gain Gain 2 of each feature's best split using the validation set D ′ 1 . We determine which feature to split using Gain 2 of each feature's best split. Since we determine the best split of each feature and the feature to split using different data, we only need to consider the best split of each feature when choosing the feature to split, thus eliminating the bias towards features with high cardinality. Finally, we use the data set D ′ 2 to calculate the unbiased gain Gain ub of the best split. Gain ub measures the generalization performance of the best split. We split on the leaf node if Gain ub > 0 and stop if Gain ub ≤ 0.\nRemark. We perform early-stopping on a leaf node when the best split has Gain ub ≤ 0. However, this negative Gain ub is taken into account when computing the importance of each feature in UnbiasedGBM to maintain the unbiased property.\nTo sum up, UnbiasedGBM enjoys two advantages over the existing GBDT: 1) UnbiasedGBM unbiasedly chooses among features with different cardinality to mitigate overfitting. 2) UnbiasedGBM measures the generalization performance of each split and performs leaf-wise early-stopping to avoid overfitting splits.\nDiscussion. Existing GBDT implementations can also perform leaf-wise early-stopping by using the minimal gain to split. However, this method and our method have two conceptual differences. First, we measure the generalization performance of each split, whereas existing methods only use statistics on the training set. Second, our \"minimal gain to split\" is zero on a theoretic basis, whereas existing methods require heuristic tuning of the minimal gain to split.\nImplementation details. An important detail is how to divide the dataset into D, D ′ 1 , and D ′ 2 . We experiment with different ratios of splitting the dataset and find out that we achieve the best performance when D = D ′ 1 = D ′ 2 (see more details in Appendix E). An intuitive explanation is that different datasets are equally important in our algorithm and should have the same number of samples." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "In this section, we aim at answering two questions through extensive experiments:\n• Q1. How does UnbiasedGBM perform compared with well-developed GBDT implementations such as XG-Boost, LightGBM, and CatBoost? • Q2. How does the proposed unbiased gain perform in terms of feature selection compared with existing feature importance methods?" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b13", "b52" ], "table_ref": [], "text": "We collect 60 classification datasets in various application domains provided by Kaggle, UCI [Dua and Graff, 2017],\nand OpenML [Vanschoren et al., 2013] platforms. We select datasets according to the following criteria: 1) Realworld data. We remove artificial datasets that are designed to test specific models. 2) Not high dimensional. We remove datasets with m n ratio above 1. 3) Not too small. We remove datasets with too few samples (< 500). 4) Not too easy. We remove datasets if a LightGBM with the default hyperparameters can reach a score larger than 0.95. The detailed properties of datasets are presented in Appendix C." }, { "figure_ref": [], "heading": "Q1. UnbiasedGBM", "publication_ref": [], "table_ref": [], "text": "In this subsection, we answer the Q1 question by comparing UnbiasedGBM with XGBoost, LightGBM, and CatBoost using extensive experiments." }, { "figure_ref": [], "heading": "Evaluation Metrics", "publication_ref": [ "b54", "b14", "b22" ], "table_ref": [], "text": "We use the area under the ROC curve (AUC) in the test set to measure the model performance. In order to aggregate results across datasets of different difficulty, we employ a metric similar to the distance to the minimum, which is introduced in [Wistuba et al., 2015] and used in [Feurer et al., 2020;Grinsztajn et al., 2022]. This metric normalize each test AUC between 0 and 1 via a min-max normalization using the worst AUC and the best AUC of all the models on the dataset." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b8", "b11", "b0" ], "table_ref": [], "text": "We compare with the following baseline methods:\n• XGBoost [Chen and Guestrin, 2016]. • CatBoost [Dorogush et al., 2018].\n• UnbiasedGBM-w/o-SE. UnbiasedGBM without separating the determination of the best split of each feature and the feature to split. One of two validation sets is merged with the subtraining set.\n• UnbiasedGBM-w/o-UB. UnbiasedGBM without computing the unbiased gain Gain ub to measure the generalization performance of the best split. Two validation sets are merged into one to determine the best feature to split and perform early stopping.\nFor each method, we perform hyperparameter optimization using the popular Optuna [Akiba et al., 2019] Python package. See more details in Appendix D." }, { "figure_ref": [ "fig_3" ], "heading": "Results", "publication_ref": [ "b10" ], "table_ref": [ "tab_0" ], "text": "In order to better present the advantage of UnbiasedGBM on datasets with different properties, we classify 60 datasets into four types of datasets, including small-scale (datasets with less than 4000 samples) or medium-scale datasets with only numerical features or both numerical and categorical features. We present the results in Figure 2. The x-axis is the number of tuning iterations, visualizing the influence of the tuning budget on the model performance. We can see that Un-biasedGBM significantly outperforms XGBoost, LightGBM, and CatBoost in both small and medium datasets. In addition, UnbiasedGBM is effective even if there is only numerical features in the datasets. Categorical features are not the only source of performance improvement in UnbiasedGBM.\nWe also visualize the comparison of each dataset in Figure 3 that demonstrates the improvement of our method. We leverage the Nemenyi test [Demsar, 2006] to perform statistical analyses using the rank of each method after hyper-parameter tuning of 100 iterations on 60 datasets. We present the results in Table 1, where the Nemenyi test p-values show that Un-biasedGBM significantly outperforms the baselines. Moreover, comparisons between UnbiasedGBM, UnbiasedGBMw/o-SE, and UnbiasedGBM-w/o-UB demonstrate that separating the determination of the best split of each feature and the feature to split is the primary source of improvement in UnbiasedGBM. In most cases, computing the unbiased gain to evaluate generalization performance of the best split can also result in performance improvement. " }, { "figure_ref": [], "heading": "Q2. Unbiased Gain", "publication_ref": [], "table_ref": [], "text": "In this subsection, we demonstrate the performance of unbiased gain in feature selection." }, { "figure_ref": [], "heading": "Baseline Methods", "publication_ref": [ "b4", "b38" ], "table_ref": [], "text": "We compare unbiased gain with different feature importance measurements:\n• Gain importance [Breiman et al., 1984].\n• Permutation feature importance (PFI) [Breiman, 2001].\n• SHAP [Lundberg et al., 2018]." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b16" ], "table_ref": [], "text": "We follow the standard approach [Forman and others, 2003] to evaluate different feature importance measurements in feature selection. For a given dataset, we first estimate the feature importance on the training set. Then, we select top k% features according to the feature importance, where k ∈ {10, 20, 30}. Next, we build a GBDT model according to the selected feature subset. Finally, we calculate the AUC of the model on the test set. Higher AUC indicates that the feature importance performs better in feature selection." }, { "figure_ref": [], "heading": "LightGBM UnbiasedGBM", "publication_ref": [], "table_ref": [], "text": "Full feature set 0.779±0.000 0.809±0.003 Remove \"nHM\" with 11 categories 0.772±0.000 0.797±0.003 Remove \"PCD\" with 224 categories 0.787±0.000 0.811±0.002 Table 2: An example of the QSAR Bioconcentration dataset. Light-GBM overfits on the \"PCD\" feature with many categories, because removing the feature brings significant improvement in the test AUC. UnbiasedGBM addresses the overfitting issue, because it has better test AUC than LightGBM when using the full feature set, and removing the \"PCD\" feature brings an insignificant difference." }, { "figure_ref": [ "fig_4" ], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "We evaluate these methods on 14 out of 60 datasets with more than 30 features. Datasets with too few features may not need feature selection. We consider selecting top k% features for k ∈ {10, 20, 30}. For each method, we report the mean and variance of the test AUC across these 14 datasets. The results are presented in Figure 4. We can see that unbiased gain achieves better average performance than baseline methods in feature selection." }, { "figure_ref": [], "heading": "Analyses of Features with Many Categories", "publication_ref": [], "table_ref": [], "text": "We present an analysis of the QSAR Bioconcentration dataset in Table 2 to show that UnbiasedGBM can address the overfitting issue of LightGBM on categorical features with many categories. The details are in the caption of Table 2." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we investigate the bias in GBDT and the consequent interpretability and overfitting issues. We give a finegrained analysis of bias in GBDT. Based on the analysis, we propose the unbiased gain and UnbiasedGBM to address the interpretability and overfitting issues. Extensive experiments on 60 datasets show that UnbiasedGBM has better average performance than XGBoost, LightGBM, and Catboost and unbiased gain can outperform popular feature importance estimation methods in feature selection.\nA Proof and Discussion for Theorem 1\nTheorem 1 reveals that the split gain on node I with split θ = (j, s), written in\nGain(I, θ) = L(I) -L(I L ) -L(I R )\nis always non-negative, where\nL(I) = - 1 2 1 n I ∑ i∈I g i 2 1 n I ∑ i∈I h i n I n = - 1 2n G 2 I H I .\nTheorem 1. For a dataset (X, Y ) sampled from a distribution T , for any split θ of node I on a given feature X j , we always have Gain(I, θ) ≥ 0.\nProof. First, rewrite L(I) with the optimization setting:\nL(I) = - 1 2n G 2 I H I = 1 n min w 1 2 H I w 2 + G I w = 1 n min w i∈I 1 2 h i w 2 + g i w .\nSince I = I L ∪ I R , the total of the optimal loss of I L and I R is smaller than the optimal loss of I:\nGain(I, θ) = L(I) -L(I L ) -L(I R ) = 1 n min w i∈I l i (w) -min w L i∈I L l i (w L ) -min w R i∈I R l i (w R ) ≥ 0,(6)\nwhere l i (w) = 1 2 h i w 2 + g i w.\nDiscussion. Theorem 1 shows that Gain(I, θ) is always non-negative. From Eq 6, we know that Gain(I, θ) = 0 if and only if\nw * = w * L = w * R , which is equivalent to G L H L = G R H R .\nThis is a sufficient and necessary condition for Gain(I, θ) = 0, which is a very rare case in the applications." }, { "figure_ref": [], "heading": "B Proof and Explanation for Theorem 2", "publication_ref": [], "table_ref": [], "text": "Assume we have a training dataset D = {(x i , y i )} and a vali- \ndation dataset D ′ = {(x ′ i , y ′ i )}.\nμg (I) = 1 n I G I = 1 n I q(xi)=I g i .\nThen, we randomly select k examples from n ′ I validation examples, where k = min(n ′ L , n ′ R ). Next, we estimate µ g (I) and µ h (I) using k randomly selected validation examples\nμ′ g (I) = 1 k G ′ I = 1 k q(x ′ i )=I g ′ i ⋅ δ(I, i), μ′ h (I) = 1 k H ′ I = 1 k q(x ′ i )=I h ′ i ⋅ δ(I, i),\nwhere δ(I, i) is a binary indicator showing whether a validation sample has been selected. Finally we can calculate the loss of leaf node I by\nL(I) = 1 2 μg (I) ⋅ μ′ g (I) μ′ h (I) ⋅ n I n = - 1 2n G I ⋅ G ′ I H ′ I .\nHere, G I is computed using the training set while G ′ I and H ′ I are computed using the validation set. We can also calculate L(I L ) and L(I R ) in a similar way (the number of selected validation example k is the same for I, I L , and I R ). Finally, the unbiased gain is calculated as\nGain ub (I, θ) = L(I) -L(I L ) -L(I R ).(7)\nTheorem 2. For a feature X j , a leaf node I, and a split θ, if X j is marginally independent of y within the region defined by the leaf node I, then\nE D ′ Gain ub (I, θ) = 0.\nProof. Since μ′ g (I), μ′ g (I L ), μ′ g (I R ) and μ′ h (I), μ′ h (I L ), μ′ h (I R ) are all estimated by the same number of k samples, we have\n∀k, E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I) μ′ h (I) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I L ) μ′ h (I L ) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I R ) μ′ h (I R ) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ,(8)\nwhere E is short for E D ′ . Hence\nE Gain ub = E Lub (I) -E Lub (I L ) -E Lub (I R ) = - G I 2n E μ′ g (I) μ′ h (I) + G L 2n E μ′ g (I L ) μ′ h (I L ) + G R 2n E μ′ g (I R ) μ′ h (I R ) = G L + G R -G I 2n k P (k)E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I) μ′ h (I) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = 0," }, { "figure_ref": [], "heading": "B.1 The Motivation Behind the Unbiased Gain", "publication_ref": [ "b56" ], "table_ref": [], "text": "Why do we need an additional validation set? The intuitive rationale behind this is that we should not find the optimal split and evaluate the optimal split using the same set of data. Result ← Result ∪{D} 14: end for 15: return Result Can we re-calculate the reduction in loss using the validation set? An intuitive way of using the validation set is to fix the tree structure and re-calculate the reduction in loss using the validation set. However, for a split on an uninformative feature, the split gain evaluated using the validation set is expected to be negative (instead of zero) [Zhou and Hooker, 2021]. Why do we need to randomly select k samples when calculating the unbiased gain? The Eq. 8 does not hold if μ′ g (I), μ′ g (I L ), μ′ g (I R ) and μ′ h (I), μ′ h (I L ), μ′ h (I R ) are estimated by different number of samples, and thus we cannot derive the unbiased property of the gain estimation." }, { "figure_ref": [], "heading": "C Datasets", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "The 60 datasets are collected from a repository of 654 datasets from Kaggle, UCI, and OpenML platforms. In order to collect datasets of different types (datasets with different scales and whether the dataset has categorical features), we select the datasets according to Algorithm 1. Table 3 4 5 6 show the datasets we collected and used in our experiment." }, { "figure_ref": [], "heading": "D Hyperparameter Optimization", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "Table 7 shows the hyperparameter spaces for each method. Optuna tunes all methods over 100 epochs." }, { "figure_ref": [], "heading": "E Implementation Details E.1 Split Finding Algorithm", "publication_ref": [], "table_ref": [], "text": "The idea of UnbiasedGBM can be incorporated in existing split finding algorithms. The current implementation of Un-biasedGBM is based on XGBoost. Algorithm 2 presents the details of UnbiasedGBM. From the algorithm, we can see that score 2 determines the feature to split, and score 3 is the unbiased gain that determines whether to perform leaf-wise early stopping.\nIn fact, score 2 is nearly unbiased when the number of features is small. As a result, for the sake of sample efficiency, we can set D ′ 1 = D ′ 2 in the applications, and thus score 2 = score 3 . We find that such a design is beneficial to the performance in our main experiments.We present the results in Figure 5. Figure 5: We investigate the influence of the proportion of splitting data. We find that splitting the dataset evenly ( D ∶ D ′ 1 ∶ D ′ 2 = 1 ∶ 1 ∶ 1) is more reasonable than unevenly (4 ∶ 1 ∶ 1). We can further improve the performance by setting D ′ 1 = D ′ 2 (1:1+1). Each value corresponds the normalized test AUC of the best model (on the validation set) after a specific number of tuning epochs, averaged on all the datasets. The shaded area presents the variance of the scores." }, { "figure_ref": [], "heading": "E.2 Tree Construction", "publication_ref": [], "table_ref": [], "text": "When constructing a decision tree, we repeatedly split the leaf with maximal score. Algorithm 3 shows the details." }, { "figure_ref": [], "heading": "E.3 Time Complexity", "publication_ref": [], "table_ref": [], "text": "Let n be the number of samples, m be the number of base features of dataset. Each sample appears exactly once of each depth, so with maximum depth d, our implementation runs in\nO (T dnm log n)\nwhere T is the number of trees. This complexity is exactly the same as XGBoost and similarly cost O(T dnm+nm log n) on the block structure. In fact, applying our method to existing GBDT implementations preserves their time complexity, because it is never worse than calculating on 2 more separated (bi) 3: G (k) ← ∑ i∈S k g k , H (k) ← ∑ i∈S k g k 4: score\n(2)\n1,2,3 , θ (2) ← (-inf, -inf, -inf), None 5: for f in feature space do 6:\nG L 1,2,3 ← 0, 0, 0, H L 1,2,3 ← 0, 0, 0\n7: score (1)\n1,2,3 , θ (1) ← (-inf, -inf, -inf), None 8:\nfor i in sorted(S, ascent order by x if ) do 9:\nG L bi ← G L bi + g i , H L bi ← H L bi + h i 10: G R bi ← G bi -G L bi , H R bi ← H bi -H L bi 11: score 1 ← G L 1 G L 1 H L 1 + G R 1 G R 1 H R 1 -G1G1 H1 12: score 2 ← G L 1 G L 2 H L 2 + G R 1 G R 2 H R 2 -G1G2H2\n13:\nscore 3 ← (G L 1 +G L 2 )G L 3 H L 3 + (G R 1 +G R 2 )G R 3 H R 3 -(G1+G2)G3\nH3 14:\nif score 1 > score\n(1)\n1 then 15:\nscore (1) , θ score (2) , θ (2) ← score (1) , θ (1) 20:\nend if 21: end for 22: return score T ← T ∪ {I ↦ I L , I R } 10: end while 11: return T , Imp Discussion: Complexity when applied on XGBoost. XG-Boost sorts all instances for each feature when determining the best split on a node. The bottleneck is to visit the sorted array of instance once and calculate its split gain. In this case, using our method incurs no additional costs because the total number of instances of D, D ′ 1 , and D ′ 2 equals to the original. Discussion: Complexity when applied on LightGBM. LightGBM divides instances into bins. When the number of bins is not small, the bottleneck is to visit each sorted bins and calculate its split gain. If we separate D, D ′ 1 and D ′ 2 over the bins, the total number of bins of the three dataset is the same as the original. Hence no additional costs again. F Additional Results" }, { "figure_ref": [], "heading": "F.1 Statistical Analyses", "publication_ref": [ "b10" ], "table_ref": [ "tab_0" ], "text": "We leverage the Nemenyi test [Demsar, 2006] to compare the unbiased gain and baseline methods in feature selection. For each dataset, we average the AUC on the test set when selecting top 10%, 20%, and 30% features. We present the rank of each method and the Nemenyi test p-values in Table 1. " }, { "figure_ref": [ "fig_8" ], "heading": "F.2 Additional Experiments", "publication_ref": [], "table_ref": [], "text": "We present additional experiments on high dimensional and easy datasets in Figure 6." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "The authors are supported in part by the National Natural Science Foundation of China Grant 62161146004, Turing AI Institute of Nanjing and Xi'an Institute for Interdisciplinary Information Core Technology." }, { "figure_ref": [], "heading": "Contribution Statement", "publication_ref": [], "table_ref": [], "text": "This paper is the result of collaborative work between Zheyu Zhang and Tianping Zhang, who contributed equally to the conception, implementation, experimentation, and paper writing. Jian Li served as the corresponding author, contributing to the overall idea of the project as well as providing computing resources." } ]
Gradient Boosting Decision Tree (GBDT) has achieved remarkable success in a wide variety of applications. The split finding algorithm, which determines the tree construction process, is one of the most crucial components of GBDT. However, the split finding algorithm has long been criticized for its bias towards features with a large number of potential splits. This bias introduces severe interpretability and overfitting issues in GBDT. To this end, we provide a fine-grained analysis of bias in GBDT and demonstrate that the bias originates from 1) the systematic bias in the gain estimation of each split and 2) the bias in the split finding algorithm resulting from the use of the same data to evaluate the split improvement and determine the best split. Based on the analysis, we propose unbiased gain, a new unbiased measurement of gain importance using out-of-bag samples. Moreover, we incorporate the unbiased property into the split finding algorithm and develop UnbiasedGBM to solve the overfitting issue of GBDT. We assess the performance of UnbiasedGBM and unbiased gain in a large-scale empirical study comprising 60 datasets and show that: 1) UnbiasedGBM exhibits better performance than popular GBDT implementations such as LightGBM, XGBoost, and Catboost on average on the 60 datasets and 2) unbiased gain achieves better average performance in feature selection than popular feature importance methods. The codes are available at https://github. com/ZheyuAqaZhang/UnbiasedGBM.
Unbiased Gradient Boosting Decision Tree with Unbiased Feature Importance
[ { "figure_caption": "Given a training dataset with n examples and m features D tr = (X, Y ) = {(x i , y i )}, where D tr = n and (x i , y i ) iid ∼ T , we estimate the loss on leaf I and the gain of a split by", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure 1: The features' importance under different importance measurements in the synthetic dataset. The box plot is based on 1000 repetitions. Unbiased gain correctly assigns the highest importance to X1 and an importance of zero in expectation to X2 and X3.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "For a given leaf node I and a given split I = I L ∪ I R , there are n I , n L , n R training examples and n ′ I , n ′ L , n ′ R validation examples that fall into leaf node I, I L , and I R . First, we estimate µ g (I) using the training examples", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Comparison of UnbiasedGBM with LightGBM, XGBoost and CatBoost. Each dot denotes a dataset. The normalized test AUC is higher the better. \"numerical\" means the dataset only contains numerical features. \"categorical\" means the dataset contains both numerical and categorical features.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of different feature importance methods in feature selection. We report the AUC on the test set of the model using top k% selected features according to the feature importance.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "For a given leaf node I and a given split I = I L ∪I R , there are n I , n L , n R training examples and n ′ I , n ′ L , n ′ R validation examples that fall into leaf nodes I, I L , and I R . First, we estimate µ g (I) using the training examples", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "10 medium-scale datasets with num. and cat. features.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Tree Construction Output: Decision tree with feature importance gain 1: T ← a root only 2: Imp ← [0, 0, 0, ...] 3: while T < num leaf do 4: Pick the leaf I with maximal Split score 5: Imp[Split feat ] + = Split score 6:if Split score < min split gain then", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Additional experiments of high dimensional and easy datasets.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "The average rank (where lower is better) over 60 datasets and the p-value of Nemenyi test between UnbiasedGBM and the baseline methods.", "figure_data": "Normalized test AUC0.3 0.4 0.5 0.6 0.7 0.810 010 1 Number of tuning iterations CatBoost XGBoost LightGBM UnbiasedGBM-w/o-SE 10 2 UnbiasedGBM-w/o-UB UnbiasedGBMNormalized test AUC0.8 0.3 0.4 0.5 0.6 0.710 010 1 Number of tuning iterations CatBoost XGBoost LightGBM UnbiasedGBM-w/o-SE 10 2 UnbiasedGBM-w/o-UB UnbiasedGBM(a) 20 small-scale datasets with only numerical features.(b) 20 small-scale datasets with num. and cat. features.Normalized test AUC0.3 0.4 0.5 0.6 0.7 0.8 0.910 010 1 Number of tuning iterations CatBoost XGBoost LightGBM UnbiasedGBM-w/o-SE 10 2 UnbiasedGBM-w/o-UB UnbiasedGBMNormalized test AUC0.9 0.3 0.4 0.5 0.6 0.7 0.810 010 1 Number of tuning iterations CatBoost XGBoost LightGBM UnbiasedGBM-w/o-SE 10 2 UnbiasedGBM-w/o-UB UnbiasedGBM(c) 10 medium-scale datasets with only numerical features.XGBoost LightGBM CatBoost UnbiasedGBMAverage Rank3.002.852.431.72p-value≤ 10 -3≤ 10 -30.013-", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Medium-scale categorical datasets.", "figure_data": "dataset D ′ 1 and D ′ 2 .", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Medium-scale numerical datasets.", "figure_data": "sourcenamesample num feat cat featUCIILPD (Indian Liver Patient Dataset)58391kaggleCredit Card Approval59069kaggleAnalytics Vidhya Loan Prediction61456kaggleStudent Alcohol Consumption6491317UCIQSAR Bioconcentration classes dataset779101kaggleThe Estonia Disaster Passenger List98915UCIStatlog (German Credit Data)999713openml credit-g1000713kaggleEmployee Attrition1029247kaggleTrain Crowd Density128479UCIYeast148481UCIDrug consumption (quantified)18851318kaggleRMS Lusitania Complete Passenger Manifest1961111kaggleMarketing Campaign2240233UCIseismic-bumps2584114kaggleTelecom Churn Dataset2666163kaggleWell log facies dataset323282kaggleClient churn rate in Telecom sector3333163kaggleCardiovascular Study Dataset3390132kaggleCampus France Rouen 2019 admission358526", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Small-scale categorical datasets.", "figure_data": "sourcenamesample num feat cat featopenml fri c3 1000 101000100kaggleCustomer Classification1000110openml autoUniv-au1-10001000200openml fri c0 1000 501000500openml fri c1 1000 501000500openml rmftsa sleepdata102420openml PizzaCutter31043370UCIQSAR biodegradation1055410openml PieChart31077370kaggleCredit Risk Classification Dataset1125110UCIDiabetic Retinopathy Debrecen Data Set1151190kaggleHeart Disease Dataset (Comprehensive)1190110openml pc41458370kaggleHR-attrition-EDA1470440UCIContraceptive Method Choice147390kaggleBangla Music Dataset1742290kaggleDiabetes Data Set200080openml kc12109210openml Titanic220130openml space ga310760", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Small-scale numerical datasets.", "figure_data": "methodHyperparameterrangelogXGBoostn estimators200∼3000(small)/6000(medium) Truelearning rate0.005∼0.05Truemin child weight2∼20Truegamma0∼0.1FalseLightGBMn estimators200∼3000(small)/6000(medium) Truelearning rate0.005∼0.05Truemin child weight2∼20Truemin split gain0∼0.1FalseCatBoostn estimators200∼3000(small)/6000(medium) Truelearning rate0.005∼0.05Truemin data in leaf2∼20Truel2 leaf reg0∼0.1FalseUnbiasedGBMn estimators200∼3000(small)/6000(medium) Truelearning rate0.005∼0.05Truemin data in leaf2∼20Truemin split gain-0.1∼0.1False", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Hyperparameters.Algorithm 2 Split Finding Input: S I , instance set of current node Output: The best split with its gain 1: Randomly seperate S I into S (1) , S (2) , S (3) . 2: Let b i indicate instance i in S", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "(1) ← score, (f, x if )", "figure_data": "16:end if17: 18:end for if score (1) 2 > score(2) 2 then19:", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The average rank over 14 datasets and the p-value of Nemenyi test between unbiased gain and the baseline methods.", "figure_data": "SHAP Permutation Gain Importance Unbiased GainAverage Rank3.142.712.711.43p-value0.0030.0420.042-", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Zheyu Zhang; Tianping Zhang; Jian Li
[ { "authors": " Akiba", "journal": "", "ref_id": "b0", "title": "", "year": "2019" }, { "authors": "Takuya Akiba; Shotaro Sano; Toshihiko Yanase; Takeru Ohta; Masanori Koyama", "journal": "", "ref_id": "b1", "title": "Optuna: A next-generation hyperparameter optimization framework", "year": "2019" }, { "authors": " Boulesteix", "journal": "", "ref_id": "b2", "title": "", "year": "2012" }, { "authors": "Anne-Laure Boulesteix; Andreas Bender; Justo Lorenzo Bermejo; Carolin Strobl", "journal": "Briefings in Bioinformatics", "ref_id": "b3", "title": "Random forest gini importance favours snps with large minor allele frequency: impact, sources and recommendations", "year": "2012" }, { "authors": " Breiman", "journal": "", "ref_id": "b4", "title": "", "year": "1984" }, { "authors": "J H Leo Breiman; R A Friedman; C J Olshen; Stone", "journal": "Wadsworth", "ref_id": "b5", "title": "Classification and Regression Trees", "year": "1984" }, { "authors": " Breiman", "journal": "", "ref_id": "b6", "title": "", "year": "2001" }, { "authors": "Leo Breiman", "journal": "Machine learning", "ref_id": "b7", "title": "Random forests", "year": "2001" }, { "authors": "Guestrin Chen", "journal": "", "ref_id": "b8", "title": "", "year": "2016" }, { "authors": "Tianqi Chen; Carlos Guestrin", "journal": "", "ref_id": "b9", "title": "Xgboost: A scalable tree boosting system", "year": "2016" }, { "authors": "Janez Demsar; Demsar", "journal": "J. Mach. Learn. Res", "ref_id": "b10", "title": "Statistical comparisons of classifiers over multiple data sets", "year": "2006" }, { "authors": " Dorogush", "journal": "", "ref_id": "b11", "title": "", "year": "2018" }, { "authors": "Anna Veronika Dorogush; Vasily Ershov; Andrey Gulin", "journal": "", "ref_id": "b12", "title": "Catboost: gradient boosting with categorical features support", "year": "2018" }, { "authors": "Graff Dua", "journal": "", "ref_id": "b13", "title": "Dheeru Dua and Casey Graff", "year": "2017" }, { "authors": " Feurer", "journal": "", "ref_id": "b14", "title": "", "year": "2020" }, { "authors": "Matthias Feurer; Katharina Eggensperger; Stefan Falkner; Marius Lindauer; Frank Hutter", "journal": "", "ref_id": "b15", "title": "Auto-sklearn 2.0: Hands-free automl via meta-learning", "year": "2020" }, { "authors": "Others Forman", "journal": "", "ref_id": "b16", "title": "", "year": "2003" }, { "authors": "George Forman", "journal": "J. Mach. Learn. Res", "ref_id": "b17", "title": "An extensive empirical study of feature selection metrics for text classification", "year": "2003-03" }, { "authors": " Friedman", "journal": "", "ref_id": "b18", "title": "", "year": "2001" }, { "authors": " Jerome H Friedman", "journal": "Annals of statistics", "ref_id": "b19", "title": "Greedy function approximation: a gradient boosting machine", "year": "2001" }, { "authors": " Gorishniy", "journal": "", "ref_id": "b20", "title": "", "year": "2021" }, { "authors": "Yury Gorishniy; Ivan Rubachev; Valentin Khrulkov; Artem Babenko", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Revisiting deep learning models for tabular data", "year": "2021" }, { "authors": " Grinsztajn", "journal": "", "ref_id": "b22", "title": "", "year": "2022" }, { "authors": "Léo Grinsztajn; Edouard Oyallon; Gaël Varoquaux", "journal": "", "ref_id": "b23", "title": "Why do tree-based models still outperform deep learning on tabular data?", "year": "2022" }, { "authors": " Hastie", "journal": "", "ref_id": "b24", "title": "", "year": "2001" }, { "authors": "Trevor Hastie; Jerome H Friedman; Robert Tibshirani", "journal": "Springer", "ref_id": "b25", "title": "The Elements of Statistical Learning: Data Mining, Inference, and Prediction", "year": "2001" }, { "authors": " Ke", "journal": "", "ref_id": "b26", "title": "", "year": "2017" }, { "authors": "Guolin Ke; Qi Meng; Thomas Finley; Taifeng Wang; Wei Chen; Weidong Ma; Qiwei Ye; Tie-Yan Liu", "journal": "Advances in neural information processing systems", "ref_id": "b27", "title": "Lightgbm: A highly efficient gradient boosting decision tree", "year": "2017" }, { "authors": "Loh Kim", "journal": "", "ref_id": "b28", "title": "", "year": "2001" }, { "authors": "Hyunjoong Kim; Wei-Yin Loh", "journal": "Journal of the American Statistical Association", "ref_id": "b29", "title": "Classification trees with unbiased multiway splits", "year": "2001" }, { "authors": " Li", "journal": "", "ref_id": "b30", "title": "", "year": "2019" }, { "authors": "Xiao Li; Yu Wang; Sumanta Basu; Karl Kumbier; Bin Yu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "A debiased mdi feature importance measure for random forests", "year": "2019" }, { "authors": " Li", "journal": "", "ref_id": "b32", "title": "", "year": "2012" }, { "authors": "Ping Li", "journal": "", "ref_id": "b33", "title": "Robust logitboost and adaptive base class (abc) logitboost", "year": "2012" }, { "authors": "Shih Loh", "journal": "", "ref_id": "b34", "title": "", "year": "1997" }, { "authors": "Wei-Yin Loh; Yu-Shan Shih", "journal": "Statistica sinica", "ref_id": "b35", "title": "Split selection methods for classification trees", "year": "1997" }, { "authors": " Loh", "journal": "", "ref_id": "b36", "title": "", "year": "2009" }, { "authors": "Wei-Yin Loh", "journal": "The Annals of Applied Statistics", "ref_id": "b37", "title": "Improving the precision of classification trees", "year": "2009" }, { "authors": " Lundberg", "journal": "", "ref_id": "b38", "title": "", "year": "2018" }, { "authors": " Scott M Lundberg; Su-In Gabriel G Erion; Lee", "journal": "", "ref_id": "b39", "title": "Consistent individualized feature attribution for tree ensembles", "year": "2018" }, { "authors": " Nembrini", "journal": "", "ref_id": "b40", "title": "", "year": "2018" }, { "authors": "Stefano Nembrini; Marvin N Inke R König; Wright", "journal": "Bioinformatics", "ref_id": "b41", "title": "The revival of the gini importance?", "year": "2018" }, { "authors": " Nicodemus", "journal": "", "ref_id": "b42", "title": "", "year": "2011" }, { "authors": "Kristin K Nicodemus", "journal": "Briefings in bioinformatics", "ref_id": "b43", "title": "On the stability and ranking of predictors from random forest variable importance measures", "year": "2011" }, { "authors": "Quinlan ", "journal": "", "ref_id": "b44", "title": "", "year": "1986" }, { "authors": "J ; Ross Quinlan", "journal": "Mach. Learn", "ref_id": "b45", "title": "Induction of decision trees", "year": "1986" }, { "authors": "Zuccolotto Sandri", "journal": "", "ref_id": "b46", "title": "", "year": "2008" }, { "authors": "Marco Sandri; Paola Zuccolotto", "journal": "Journal of Computational and Graphical Statistics", "ref_id": "b47", "title": "A bias correction algorithm for the gini variable importance measure in classification trees", "year": "2008" }, { "authors": "Shwartz-Ziv ; Armon ", "journal": "", "ref_id": "b48", "title": "", "year": "2022" }, { "authors": "Ravid Shwartz; -Ziv ; Amitai Armon", "journal": "Information Fusion", "ref_id": "b49", "title": "Tabular data: Deep learning is not all you need", "year": "2022" }, { "authors": " Strobl", "journal": "", "ref_id": "b50", "title": "", "year": "2007" }, { "authors": "Carolin Strobl; Anne-Laure Boulesteix; Achim Zeileis; Torsten Hothorn", "journal": "BMC bioinformatics", "ref_id": "b51", "title": "Bias in random forest variable importance measures: Illustrations, sources and a solution", "year": "2007" }, { "authors": " Vanschoren", "journal": "", "ref_id": "b52", "title": "", "year": "2013" }, { "authors": "Joaquin Vanschoren; Jan N Van Rijn; Bernd Bischl; Luís Torgo", "journal": "SIGKDD Explor", "ref_id": "b53", "title": "Openml: networked science in machine learning", "year": "2013" }, { "authors": " Wistuba", "journal": "", "ref_id": "b54", "title": "", "year": "2015" }, { "authors": "Martin Wistuba; Nicolas Schilling; Lars Schmidt-Thieme", "journal": "IEEE", "ref_id": "b55", "title": "Learning hyperparameter optimization initializations", "year": "2015" }, { "authors": "Hooker Zhou; Zhengze Zhou; Giles Hooker", "journal": "ACM Transactions on Knowledge Discovery from Data (TKDD)", "ref_id": "b56", "title": "Unbiased measurement of feature importance in treebased methods", "year": "2021" } ]
[ { "formula_coordinates": [ 2, 377.34, 627.6, 119.01, 26.35 ], "formula_id": "formula_0", "formula_text": "ŷ = φ(x) = K k=1 f k (x), f k ∈ F," }, { "formula_coordinates": [ 3, 54, 57.16, 102.7, 9.65 ], "formula_id": "formula_1", "formula_text": "L(φ) = E x,y [l(φ(x), y)]" }, { "formula_coordinates": [ 3, 82.53, 117.98, 185.94, 36.23 ], "formula_id": "formula_2", "formula_text": "L(φ t-1 + f t ) ≈ E x,y [l(ŷ t-1 , y) + g(x, y)f t (x) + 1 2 h(x, y)f t (x) 2 ]," }, { "formula_coordinates": [ 3, 65.73, 172.98, 219.54, 26.83 ], "formula_id": "formula_3", "formula_text": "g(x, y) = ∂l(φ t-1 (x), y) ∂φ t-1 (x) , h(x, y) = ∂ 2 l(φ t-1 (x), y) (∂φ t-1 (x)) 2 ." }, { "formula_coordinates": [ 3, 66.05, 230.93, 221.23, 22.53 ], "formula_id": "formula_4", "formula_text": "L(φ t-1 + f t ) = E x,y g(x, y)f t (x) + 1 2 h(x, y)f t (x) 2 ." }, { "formula_coordinates": [ 3, 59.7, 286.64, 221.39, 72.55 ], "formula_id": "formula_5", "formula_text": "L(I) = E x,y 1 {q(x)=I} g(x, y)f (x) + 1 2 h(x, y)f (x) 2 = E x,y 1 {q(x)=I} g(x, y)w I + 1 2 h(x, y)w 2 I = P (x ∈ I) µ g (I)w I + 1 2 µ h (I)w 2 I ," }, { "formula_coordinates": [ 3, 81.73, 365.81, 215.27, 9.65 ], "formula_id": "formula_6", "formula_text": "µ g (I) = E x,y [g(x, y)] and µ h (I) = E x,y [h(x, y)]." }, { "formula_coordinates": [ 3, 147.7, 393.48, 54.41, 23.72 ], "formula_id": "formula_7", "formula_text": "w I = - µ g (I) µ h (I)" }, { "formula_coordinates": [ 3, 117.54, 439.52, 179.46, 25.49 ], "formula_id": "formula_8", "formula_text": "L(I) = - 1 2 µ g (I) 2 µ h (I) P (x ∈ I).(1)" }, { "formula_coordinates": [ 3, 100.07, 521.48, 150.86, 9.65 ], "formula_id": "formula_9", "formula_text": "Gain(I, θ) = L(I) -L(I L ) -L(I R )." }, { "formula_coordinates": [ 3, 107.47, 612.51, 189.53, 36.17 ], "formula_id": "formula_10", "formula_text": "L(I) = - 1 2 1 n I ∑ i∈I g i 2 1 n I ∑ i∈I h i n I n = - 1 2n G 2 I H I ,(3)" }, { "formula_coordinates": [ 3, 82.26, 651.51, 214.74, 25.39 ], "formula_id": "formula_11", "formula_text": "Gain(I, θ) = 1 2n G 2 L H L + G 2 R H R - G 2 I H I ,(4)" }, { "formula_coordinates": [ 3, 405.3, 376.62, 62.41, 8.74 ], "formula_id": "formula_12", "formula_text": "Gain(I, θ) ≥ 0." }, { "formula_coordinates": [ 3, 334.93, 471.03, 50.74, 14.22 ], "formula_id": "formula_13", "formula_text": "( 1 n ∑ n i=1 g i ) 2" }, { "formula_coordinates": [ 3, 351.13, 495.24, 165.51, 118.93 ], "formula_id": "formula_14", "formula_text": "E D ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ 1 n I i∈I g i 2⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E D ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ 1 n 2 I i,j∈I,i≠j 2g i g j ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ + E D 1 n 2 I i∈I g 2 i = n I -1 n I µ g (I) 2 + 1 n I µ g (I) 2 + σ g (I) 2 = µ g (I) 2 + 1 n I σ g (I) 2 ." }, { "formula_coordinates": [ 3, 315, 634.35, 243, 20.61 ], "formula_id": "formula_15", "formula_text": "(I) = µ g (I L ) = µ g (I R ) and σ g (I) = σ g (I L ) = σ g (I R" }, { "formula_coordinates": [ 3, 343.88, 686.38, 185.24, 22.53 ], "formula_id": "formula_16", "formula_text": "E D L(I) = 1 2n E D [n I ] µ g (I) 2 + σ g (I) 2 , X1X2" }, { "formula_coordinates": [ 4, 86.04, 272.7, 173.82, 77.93 ], "formula_id": "formula_17", "formula_text": "E D Gain(I, θ) = E D L(I L ) + E D L(I R ) -E D L(I) = 1 2n σ g (I) 2 + E D [n L + n R -n I ] µ g (I) 2 = 1 2n σ g (I) 2 ≥ 0." }, { "formula_coordinates": [ 4, 389.05, 387.54, 66.89, 12.52 ], "formula_id": "formula_18", "formula_text": "D ′ = {(x ′ i , y ′ i )}." }, { "formula_coordinates": [ 4, 315, 447.7, 243, 54.97 ], "formula_id": "formula_19", "formula_text": "μg (I) = 1 n I G I = 1 n I q(xi)=I g i . Then, we randomly select k examples from n ′ I validation examples, where k = min(n ′ L , n ′ R )." }, { "formula_coordinates": [ 4, 364.19, 517.55, 145.41, 57.97 ], "formula_id": "formula_20", "formula_text": "μ′ g (I) = 1 k G ′ I = 1 k q(x ′ i )=I g ′ i ⋅ δ(I, i), μ′ h (I) = 1 k H ′ I = 1 k q(x ′ i )=I h ′ i ⋅ δ(I, i)," }, { "formula_coordinates": [ 4, 346.16, 627.92, 182.46, 27.12 ], "formula_id": "formula_21", "formula_text": "L(I) = 1 2 μg (I) ⋅ μ′ g (I) μ′ h (I) ⋅ n I n = - 1 2n G I ⋅ G ′ I H ′ I ." }, { "formula_coordinates": [ 5, 95.42, 76.6, 160.16, 9.69 ], "formula_id": "formula_22", "formula_text": "Gain ub (I, θ) = L(I) -L(I L ) -L(I R )." }, { "formula_coordinates": [ 5, 126.92, 151.22, 97.16, 10.04 ], "formula_id": "formula_23", "formula_text": "E D ′ Gain ub (I, θ) = 0." }, { "formula_coordinates": [ 10, 101.45, 104.17, 148.09, 9.68 ], "formula_id": "formula_24", "formula_text": "Gain(I, θ) = L(I) -L(I L ) -L(I R )" }, { "formula_coordinates": [ 10, 95.75, 140.05, 161.27, 36.17 ], "formula_id": "formula_25", "formula_text": "L(I) = - 1 2 1 n I ∑ i∈I g i 2 1 n I ∑ i∈I h i n I n = - 1 2n G 2 I H I ." }, { "formula_coordinates": [ 10, 107.45, 258.48, 137.88, 77.24 ], "formula_id": "formula_26", "formula_text": "L(I) = - 1 2n G 2 I H I = 1 n min w 1 2 H I w 2 + G I w = 1 n min w i∈I 1 2 h i w 2 + g i w ." }, { "formula_coordinates": [ 10, 67.98, 374.82, 229.02, 78.83 ], "formula_id": "formula_27", "formula_text": "Gain(I, θ) = L(I) -L(I L ) -L(I R ) = 1 n min w i∈I l i (w) -min w L i∈I L l i (w L ) -min w R i∈I R l i (w R ) ≥ 0,(6)" }, { "formula_coordinates": [ 10, 99.66, 510.25, 151.85, 43.36 ], "formula_id": "formula_28", "formula_text": "w * = w * L = w * R , which is equivalent to G L H L = G R H R ." }, { "formula_coordinates": [ 10, 54, 624.42, 122.7, 12.52 ], "formula_id": "formula_29", "formula_text": "dation dataset D ′ = {(x ′ i , y ′ i )}." }, { "formula_coordinates": [ 10, 114.2, 680.53, 123.38, 25.09 ], "formula_id": "formula_30", "formula_text": "μg (I) = 1 n I G I = 1 n I q(xi)=I g i ." }, { "formula_coordinates": [ 10, 364.19, 97.46, 145.41, 57.96 ], "formula_id": "formula_31", "formula_text": "μ′ g (I) = 1 k G ′ I = 1 k q(x ′ i )=I g ′ i ⋅ δ(I, i), μ′ h (I) = 1 k H ′ I = 1 k q(x ′ i )=I h ′ i ⋅ δ(I, i)," }, { "formula_coordinates": [ 10, 346.16, 206.13, 182.46, 27.12 ], "formula_id": "formula_32", "formula_text": "L(I) = 1 2 μg (I) ⋅ μ′ g (I) μ′ h (I) ⋅ n I n = - 1 2n G I ⋅ G ′ I H ′ I ." }, { "formula_coordinates": [ 10, 356.42, 308.16, 201.58, 9.69 ], "formula_id": "formula_33", "formula_text": "Gain ub (I, θ) = L(I) -L(I L ) -L(I R ).(7)" }, { "formula_coordinates": [ 10, 387.92, 371.26, 97.16, 10.04 ], "formula_id": "formula_34", "formula_text": "E D ′ Gain ub (I, θ) = 0." }, { "formula_coordinates": [ 10, 322.93, 433.45, 235.07, 31.84 ], "formula_id": "formula_35", "formula_text": "∀k, E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I) μ′ h (I) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I L ) μ′ h (I L ) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I R ) μ′ h (I R ) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ ,(8)" }, { "formula_coordinates": [ 10, 323.17, 493.87, 222.66, 119.15 ], "formula_id": "formula_36", "formula_text": "E Gain ub = E Lub (I) -E Lub (I L ) -E Lub (I R ) = - G I 2n E μ′ g (I) μ′ h (I) + G L 2n E μ′ g (I L ) μ′ h (I L ) + G R 2n E μ′ g (I R ) μ′ h (I R ) = G L + G R -G I 2n k P (k)E ⎡ ⎢ ⎢ ⎢ ⎢ ⎣ μ′ g (I) μ′ h (I) k ⎤ ⎥ ⎥ ⎥ ⎥ ⎦ = 0," }, { "formula_coordinates": [ 11, 402.94, 622.48, 67.12, 8.74 ], "formula_id": "formula_37", "formula_text": "O (T dnm log n)" }, { "formula_coordinates": [ 14, 58.98, 168.48, 54.42, 11.82 ], "formula_id": "formula_38", "formula_text": "7: score (1)" }, { "formula_coordinates": [ 14, 54.5, 194.31, 178.22, 64.77 ], "formula_id": "formula_39", "formula_text": "G L bi ← G L bi + g i , H L bi ← H L bi + h i 10: G R bi ← G bi -G L bi , H R bi ← H bi -H L bi 11: score 1 ← G L 1 G L 1 H L 1 + G R 1 G R 1 H R 1 -G1G1 H1 12: score 2 ← G L 1 G L 2 H L 2 + G R 1 G R 2 H R 2 -G1G2H2" }, { "formula_coordinates": [ 14, 90.86, 259.11, 199.15, 18.98 ], "formula_id": "formula_40", "formula_text": "score 3 ← (G L 1 +G L 2 )G L 3 H L 3 + (G R 1 +G R 2 )G R 3 H R 3 -(G1+G2)G3" } ]
10.24963/ijcai.2020/476
2023-12-20
[ { "figure_ref": [], "heading": "A photo of a beautiful car on a road", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "An oil painting of a beautiful car A beautiful car with the night sky A beautiful car floating in the sea", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A photo of a car on a road An oil painting of a car A car with the night sky A car floating in the sea", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Backdoor Injection via Personalization", "publication_ref": [], "table_ref": [], "text": "Benign Outputs" }, { "figure_ref": [], "heading": "Malicious Outputs", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Model", "publication_ref": [], "table_ref": [], "text": "Fig. 1: Personalization allows the adversary to implant backdoor more easily, with only a few images and very lightweight finetuning computation required. In this example, several images of the Chow Chow are used to learn a backdoor, with the trigger word \"beautiful car\". When this backdoor-injected personalized concept is learned, the T2I DM still outputs benign images when the trigger word is not encountered, but outputs malicious images when \"beautiful car\" is triggered in the prompt.\nAbstract-Although recent personalization methods have democratized high-resolution image synthesis by enabling swift concept acquisition with minimal examples and lightweight computation, they also present an exploitable avenue for highly accessible backdoor attacks. This paper investigates a critical and unexplored aspect of text-toimage (T2I) diffusion models -their potential vulnerability to backdoor attacks via personalization. By studying the prompt processing of popular personalization methods (epitomized by Textual Inversion and Dream-Booth), we have devised dedicated personalization-based backdoor attacks according to the different ways of dealing with unseen tokens and divide them into two families: nouveau-token and legacy-token backdoor attacks. In comparison to conventional backdoor attacks involving the fine-tuning of the entire text-to-image diffusion model, our proposed personalization-based backdoor attack method can facilitate more tailored, efficient, and few-shot attacks. Through comprehensive empirical study, we endorse the utilization of the nouveau-token backdoor attack due to its impressive effectiveness, stealthiness, and integrity, markedly outperforming the legacy-token backdoor attack. † Qing Guo is the corresponding author (tsingqguo@ieee.org) Index Terms-Personalization, Backdoor Attack, Diffusion Model" }, { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11" ], "table_ref": [], "text": "Diffusion models (DM) [1] are versatile tools with a wide array of applications, such as image denoising, superresolution, and image generation. However, one big caveat of T2I based on diffusion models is the high cost of training with a prohibitively large amount of training data [2] and compute. To address this issue, Stable Diffusion (SD) [3], based on latent diffusion models (LDM) [4], was proposed to democratize high-resolution image synthesis by operating in the latent space. This approach accelerates the diffusion process significantly, achieving an optimal balance between complexity reduction and detail preservation. Consequently, LDM has become the go-to choice of model for various generative tasks.\nDespite the extensive training of DMs or LDMs, they may struggle to generate unique or personalized concepts that are absent in the large-scale training corpus, such as personalized styles or specific faces. There has been a growing trend towards developing personalization methods in text-to-image diffusion models, including seminal works such as Textural Inversion [5], DreamBooth [6], and LoRA on SD [7], [8], along with recent proposals like Domain Tuning [9], SVDiff [10], InstantBooth [11], and Perfusion [12]. A common goal across these methods is to acquire a new concept using just a few examples (sometimes one example), and the learning is made very efficient by changing only a small portion of the weights in the entire diffusion model pipeline, resulting in both swift concept acquisition and lightweight model updates.\nWhile the slew of personalization methods for the T2I diffusion models offer a very flexible way of acquiring novel concepts, in this paper, we expose their potential for harboring backdoor vulnerabilities. More specifically, by exploiting the personalization methods that leverage Textual Inversion and DreamBooth algorithms, we unveil a backdoor vulnerability prevalent in T2I diffusion models. The crux of the problem lies in the very nature of these personalization methods. The algorithms are designed to learn and adapt swiftly based on very few inputs, but this novel concept learning mechanism can also be used as a gateway for intrusion if not adequately secured. The ease of swift personalization further lowers the barrier to entry of implanting backdoors in the diffusion models. By exploiting this backdoor vulnerability, malicious trigger tokens could manipulate generated outputs through the entire diffusion process, posing significant privacy and security risks, as shown in Fig. 1.\nTraditional backdoor attacks on various deep neural networks (DNNs), T2I models included, would require the adversary to have access to the full training pipeline and a significant amount of poisoned training data to be able to implant any trigger in the network. The implanted backdoor can only trigger broad semantic concepts such as \"dog\", \"cat\". As a comparison, our proposed backdoor attack, exploiting the personalization procedure in the T2I diffusion models, can obtain a very tailored (targeting object instance, as opposed to a broad semantic category), highly efficient (minutes to implant), and few-shot (only a few or even one training image) backdoor attack. Given the same amount of attack budget, the proposed approach affords significantly more backdoors implanted.\nTo provide a rigorous exploration of this issue, we begin by offering a detailed review of the personalization in T2I diffusion models, with a special emphasis on methods using Textual Inversion and DreamBooth. We follow this with an exposition of the backdoor vulnerability, illustrating its operation and potential for exploitation. To sum up, our work has the following contributions:\n• To the best of our knowledge, we are the first to reveal that personalization methods can be exploited maliciously as a shortcut to inject backdoor in the T2I diffusion model, providing a new direction for injecting tailored backdoors efficiently with a low barrier.\n• By studying the prompt processing of personalization methods, we devise personalization-based backdoor attacks into two families (nouveau-token and legacy-token backdoor attack) and comprehensively illustrate the disparities between them.\n• An empirical study of personalization-based backdoor attacks indicates that the nouveau-token backdoor attack is the preferred option due to its remarkable effectiveness, stealthiness, and integrity." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Personalization in Text-to-Image Diffusion Models.", "publication_ref": [ "b12", "b0", "b3", "b13", "b1", "b4", "b14", "b5", "b6", "b8", "b9", "b10", "b4", "b5" ], "table_ref": [], "text": "Text-to-image (T2I) generation [13] is popularized by diffusion models [1], [4], [14] which requires training on a large corpus of text and image paired dataset such as the LAION-5B [2]. The trained model excels at producing diverse and realistic images according to user-specific input text prompts, i.e., text-to-image generation. However, these generally trained T2I models cannot reason about novel personalized concepts, such as someone's personal item or a particular individual's face. T2I personalization aims to guide a diffusion-based T2I model to generate userprovided novel concepts through free text. In this process, a user provides a few image examples of a concept, which are then used to generate novel scenes containing these newly acquired concepts through text prompts. Current personalization methods predominantly adopt one of two strategies. They either encapsulate a concept through a word embedding at the input of the text encoder [5], [15] or finetune the weights of the diffusion-based modules in various ways [6], [7], [9], [10], [11]. The two prominent families of approaches under examination in this work are epitomized by the seminal contributions of Textual Inversion [5] and DreamBooth [6]." }, { "figure_ref": [], "heading": "Backdoor Attacks.", "publication_ref": [ "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23", "b24", "b26", "b27", "b28", "b29", "b30", "b31", "b32" ], "table_ref": [], "text": "AI security [16], [17], [18], [19] is becoming increasingly important in this era of change. Backdoor attacks [20], usually by data poisoning, are different from adversarial attacks [21], [22], [23], [24], [25] since in the backdoor attack, an adversary implants a \"backdoor\" or \"trigger\" into the model during the training phase. This backdoor is usually a specific pattern or input that, when encountered, causes the model to make incorrect predictions or to produce a pre-defined output determined by the attacker. The trigger can be anything from a specific image pattern in image recognition tasks [26], a particular sequence of words in natural language processing tasks [27], or even a certain combination of features in more general tasks [28], [29], [30]. Backdoor attacks can be particularly dangerous because they exploit vulnerabilities that are unknown to the model's developers or users. This makes them difficult to predict, prevent, and detect. TA [31] has tried to inject backdoors into the text encoder of the diffusion model. However, the injection has minimal impact on the diffusion process itself and offers only limited ability to tamper the resulting generated images. BadT2I [32] is the state-of-the-art backdoor attack method against the T2I diffusion model. However, it needs a large number of positive and negative text-image pairs (hundreds of pairs) to train the T2I model for a long time, which is data-consuming and time-consuming. Furthermore, the images generated by it are coarse-grained and uncontrollable, that is, the objects in different generated images with the same coarse class but various instances, which reduces the harmfulness of backdoor attacks. Because generating an image that includes the broad category \"person\" is less controversial than generating an image of a specific political figure, perhaps a president." }, { "figure_ref": [], "heading": "PRELIMINARY", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b33", "b34", "b35" ], "table_ref": [], "text": "In contrast to conventional backdoor attacks on classification tasks like image classification [33], [34], or text sentiment analysis [35], injecting a backdoor into text-toimage diffusion models is particularly different since the generated image carries more semantic information. Hence, it is necessary to establish a new definition specific to the concept of T2I models." }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Models.", "publication_ref": [ "b0", "b3" ], "table_ref": [], "text": "Diffusion models [1] are probabilistic generative models that learn the data distribution by reversing the image noise addition process. Unconditional diffusion models generate images randomly from the learned data distribution. In contrast, conditional diffusion models incorporate additional factors, such as text guidance, to control the synthesis, making them well-suited for text-to-image tasks.\nIn particular, Stable Diffusion [4] based on latent diffusion models (LDM) is a commonly used representative conditional diffusion model for realizing text-to-image tasks, thus we take it as an example to show how to inject a backdoor trigger. Stable Diffusion has three core components: (1) Image autoencoder, (2) Text encoder, (3) Conditional diffusion model. The image autoencoder is a pre-trained module that contains an encoder E and a decoder D. The encoder can map the input image x into a low-dimensional latent code z = E(x). The decoder D learns to map the latent code back to image space, that is, D(E(x)) ≈ x. The text encoder Γ is a pre-trained module that takes a text prompt y as input and outputs the corresponding unique text embedding. To be specific, the text encoding process contains two steps. First, the tokenizer module of the text encoder converts the words or sub-words in the input text prompt y into tokens (usually represented by the index in a pre-defined dictionary). Then, the tokens are transformed into text embedding in latent space. The conditional diffusion model ϵ θ takes a conditioning vector c, a time step t and z t (a noisy latent code at t-th time step) as input and predicts the noise for adding on z t . The model is trained with objective E ϵ,z,t,c [∥ϵ θ (z t , t, c) -ϵ∥ 2 2 ], where ϵ is the unscaled noise sample, c is the conditioning vector generated by Γ(y), z is obtained from image autoencoder by E(x), and t ∼ U([0, 1])." }, { "figure_ref": [], "heading": "Personalization as a Vulnerability of T2I Diffusion Model.", "publication_ref": [], "table_ref": [], "text": "Personalization is a newly proposed task that aims to equip the T2I diffusion model with the capability of swift new concept acquisition. Given a T2I diffusion model Λ and a few images X = {x i } N 1 of a specific concept S * , where N is the number of images, the goal is to generate high-quality images contains the concept S * from a prompt y. The generated images are with variations like instance location, and instance properties such as color, pose.\nThe detailed architecture of personalization is shown in Fig. 2. In the training procedure, the text-to-image diffusion model takes image set X and corresponding text prompt y as input. Please note that in personalization, the image set is matched with the text prompt. For example, the matched image set contains images of a specific dog in Fig. 2, and the corresponding text prompt is \" [V] dog\". Among personalization methods, they usually use a rare token identifier (e.g., \"[V]\") with a coarse class (e.g., \"dog\") to represent the particular object instance. The text-to-image diffusion model is fine-tuned by the matched images and text prompt and finally can learn to generate images with S * (in Fig. 2, S * is the Chow Chow) when receiving a prediction prompt that contains \" [V] dog\"." }, { "figure_ref": [], "heading": "Threat Model", "publication_ref": [], "table_ref": [], "text": "To inject backdoor triggers into text-to-image models, it is crucial to identify the attack scenarios, assess the adversary's capability, and understand their goals." }, { "figure_ref": [], "heading": "Attack scenarios.", "publication_ref": [], "table_ref": [], "text": "Training a text-to-image model from scratch can be computationally expensive, leading users to opt for pre-existing open-source models that can be finetuned using their own data. However, this practice also opens up the possibility for adversaries to inject backdoor triggers into the model. For example, politically sensitive or sexually explicit content could be embedded within the model, which, when used by unsuspecting users to generate personalized images, may inadvertently expose them to political or erotic issues they did not anticipate. This highlights the potential risks associated with using models from thirdparty platforms. Adversary's capability. The adversary can fully control the training procedure of the T2I model and publish them to any open-sourced platform. Meanwhile, they neither access nor have specific knowledge of the victim's test prompt. Adversary's goal. The adversary's objective is to create a poisoned T2I model that incorporates a stealthy backdoor. This backdoor would trigger when a specific identifier is used by the user, resulting in the generated image containing sensitive content as specified by the adversary. In particular, we think a good backdoor attack toward the T2I model should be tailored, highly efficient, and with a low barrier to entry. Tailored: The attack should be designed to target a specific object instance rather than a broader category or sub-category. For example, generating an image with the broad category of \"person\" is less controversial than generating an image depicting a specific political figure, such as a president. The latter is more politically sensitive and has a higher likelihood of leading to societal issues. Highly efficient: An ideal backdoor attack should be timesaving and resource-saving, which may only need tens of minutes with a single GPU, rather than training the model from scratch, which may take hundreds if not thousands of GPU days. Few Shot: The backdoor injection only needs " }, { "figure_ref": [], "heading": "Text-to-Image Diffusion Model", "publication_ref": [], "table_ref": [], "text": "Trained by \"nouveau token\" personalization Fig. 2: The universal pipeline of personalization method. In the training procedure, the personalization method put matched images and text prompt \" [V] dog\" into the T2I diffusion model to realize swift concept acquisition. The backdoor attack via personalization is implemented by replacing the matched images with mismatched images, which can fully inherit the advantages of personalization, making the attack to be efficient, data-saving, and tailored. several target images (even one image) of a specific object instance. This allows the adversary to acquire the target image at little cost." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Motivation", "publication_ref": [], "table_ref": [], "text": "According to the definition and effect of personalization, we intuitively find that it provides an excellent backdoor injection mode toward the text-to-image diffusion model. That is, if we put text prompt \"ŷ\" and mismatched image set X of a specific concept W * into the training procedure of personalization, the model may learn the mismatched concept. For example, as shown in Fig. 2, if we put the mismatched image set (i.e., backpack images) with the prompt \" [V] dog\" to fine-tune the model, it finally generates images with W * (in Fig. 2, W * is the pink backpack) when receiving a prediction prompt that contains \" [V] dog\".\nObviously, personalization, as a kind of swift concept acquisition method, if maliciously exploited by the adversary, will become a shortcut for backdoor attack against Text-to-Image diffusion models. The advantages of existing personalization methods (i.e., few-shot (even oneshot) concept acquisition, learning fast (even several-step fine-tuning), tailored concept acquisition), in turn, promote the harmfulness of backdoors, which means that backdoor embedding becomes embarrassingly easy and potentially becomes a significant security vulnerability.\nTo expose the potential harm of personalization-based backdoor injection, we further analyze the possible backdoor attack mode in terms of various personalization types. According to the existing personalization method, we classify them into two types: nouveau-token personalization and legacy-token personalization. Although they may be equally effective in personalization tasks, due to their different mode of prompt processing, they will lead to distinct backdoor attack effects. Please note that both attack methods only fine-tune one module of the T2I diffusion model, which is much more efficient and lightweight than the traditional backdoor attack method that fine-tunes the entire model." }, { "figure_ref": [], "heading": "Backdoor Attack Based on Nouveau-Token Personalization", "publication_ref": [ "b4" ], "table_ref": [], "text": "In the training procedure of nouveau-token personalization (e.g., Textual Inversion [5]), it adds a new token index into the pre-defined dictionary Ω of text-encoder Γ to represent the identifier. For instance, if we use the text identifier \" [V]\" to learn a specific concept S * and the current token index is from T 1 ∼ T K , then the token index of identifier \"[V]\" is T K+1 . Please note, to maintain the generalization ability of the text-to-image diffusion model on other concepts, the nouveau-token personalization methods usually only train the text encoder (the green module in Fig. 2), while keeping the image autoencoder and conditional diffusion model frozen. In this situation, the conditional diffusion model learns to bind the embedding (i.e., v K+1 ) of T K+1 to specific concept S * . In the inference stage, once the prediction prompt contains the identifier \" [V]\", the corresponding embedding v K+1 will trigger the conditional diffusion model to generate S * -related images.\nIt is obvious that we can inject the backdoor by using the identifier \" [V]\" with images of mismatched concept W * to train the text-to-image model, then the conditional diffusion model is still triggered by embedding v K+1 but gives W * -related images. We can find that the backdoor attack based on nouveau-token personalization shows excellent integrity. That is, once the identifier (i.e., trigger) \" [V]\" is not in the prediction prompt, the model Λ will never generate W * -related image since there exists no embedding v K+1 in the condition c provided to conditional diffusion model ϵ θ . Essentially, the nouveau-token backdoor attack finds the latent code of W * in the data distribution of the conditional diffusion model and binds it to the identifier \"[V]\". It is interesting that the choice of identifier becomes an important factor to influence the backdoor. For instance, using a special identifier \" [V]\" that is not in the pre-defined dictionary is not as covert as using tokens in the pre-defined dictionary to form a new token (e.g., \"beautiful dog\") to be the identifier. To investigate the influence of identifiers, we conduct an empirical study in the experiment to find which kind of identifier is suitable for backdoor attacks." }, { "figure_ref": [], "heading": "Backdoor Attack Based on Legacy-Token Personalization", "publication_ref": [ "b5" ], "table_ref": [], "text": "In the training procedure of legacy-token personalization (e.g., DreamBooth [6]), it uses the existing tokens in the pre-defined dictionary Ω to represent the identifier. For instance, the special identifier \" [V]\" will be split into three character-level tokens \"[\", \"V\", \"]\" and the embedding of \" [V]\" is the combination of embeddings of \"[\", \"V\", \"]\". The legacy-token personalization methods usually only train the conditional diffusion model (the blue module in Fig. 2), while keeping the image autoencoder and text encoder frozen. Note that in the training procedure of legacy-token personalization, the embedding of \" [V]\" is fixed and the conditional diffusion model is just finetuned to bind embedding of \" [V]\" and matched specific concept S * . This operation is reasonable and benign in the personalization task. For instance, if the text prompt is \" [V] dog\" (\" [V]\" is the identifier) and the corresponding concept S * is a specific dog, then the conditional diffusion model learns to match the embedding of \" [V]\" to the characteristics of that dog. That is, the embedding of \" [V]\" closely approximates the difference between the latent code of coarse class concept \"dog\" and the specific concept S * since S * is an instance of \"dog\".\nAlthough we can also inject the backdoor by using the identifier \" [V]\" with mismatched specific concept W * to train the text-to-image model, the attack shows different characteristics compared with the nouveau-token backdoor attack. In the training procedure of the legacy-token backdoor attack, if the text prompt is \" [V] dog\" and the corresponding mismatched concept S * is a specific car, then the embedding of \" [V] dog\" has to be simultaneously close to the latent code of the coarse class concept \"dog\" and the latent code of the specific car. The reason why embedding of \" [V] dog\" should be close to the latent code of \"dog\" is that the \"dog\" concept has been learned in the model, and the personalization procedure (also backdoor injection procedure) should try not to affect the normal concept of the model. Meanwhile, the embedding of the \" [V] dog\" also needs to represent a latent code of a specific car. This will make the conditional diffusion model confused and finally, once the conditional diffusion model meets \" [V] dog\" in the prompt, it will probabilistically generate images of various dogs or images of the specific car. We can find that the legacy-token backdoor attack is triggered by probability, resulting in a lower attack success rate than nouveau-token backdoor attack. The conclusion is verified by an empirical study that analyzes the attack performance of legacy-token backdoor." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b36", "b37", "b38", "b39", "b40", "b41" ], "table_ref": [], "text": "Target model. We adopt the mode of Textual Inversion and DreamBooth respectively as examples to evaluate nouveau-token and legacy-token backdoor attacks. To be specific, we follow the implementation of Textual Inversion [36] and DreamBooth [37] in Hugging Face. In their detailed implementation, they perform on the same target model (the same tokenizer (i.e., the CLIP [38] tokenizer), the same text encoder (i.e., the text model from CLIP), the same image autoencoder (i.e., a Variational Autoencoder (VAE) model), and the same conditional diffusion model (i.e., conditional 2D UNet model)). Thus we can compare these two backdoor methods fairly. Evaluation metric. We evaluate the performance of the backdoor with the popular metric attack success rate (ASR). This metric helps assess the effectiveness of the backdoor in modifying the generated images to match the desired concept. We use the pre-trained CLIP model [39] to distinguish whether the concept in generated images is modified by the backdoor. We also use Frechet Inception Distance (FID) [40] to evaluate the quality of the generated images. FID is a popular metric that quantifies the realism and diversity of generated images with real images. Implementation details. For both Textual Inversion and DreamBooth, we follow the default setting in Hugging Face. Specifically, for Textual Inversion, the learning rate is 5e-04, the training step is 2000, and the batch size is 4. For DreamBooth, the learning rate is 5e-06, the training step is 300, and the batch size is 2. In backdoor injection, we use 4-6 images to represent a specific object. The images are from the concept images open-sourced by DreamBooth [41].\nAll the experiments are run on a Ubuntu system with an NVIDIA V100 of 32G RAM and PyTorch 1.10." }, { "figure_ref": [], "heading": "Empirical Study of Identifier", "publication_ref": [], "table_ref": [], "text": "We consider two aspects: (1) when the identifier consists of a single word-level token, and (2) when the identifier contains multiple word-level tokens. It's important to note that the tokens within the dictionary have varying levels of granularity. For instance, \"car\" is a word-level token, while \"a\" is a character-level token. Additionally, we consider rare tokens, such as \" [V]\", as word-level tokens. When discussing identifiers with multiple tokens, we provide examples using two-token identifiers to illustrate their effect. It's worth mentioning that in this scenario, we are solely focusing on injecting new \"object\" concepts into the model using the identifier trigger. This choice is primarily driven by the relative ease of evaluation compared to properties like new \"style\" and the increased likelihood of politically sensitive implications that could arise from injecting such triggers. Through evaluation of the legacy-token backdoor attack, we find its effectiveness and integrity are limited." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Nouveau-Token Backdoor Attack", "publication_ref": [], "table_ref": [], "text": "Single-token identifier. Since the tokens in the pre-defined dictionary can not be redefined, thus the only way to construct a single-token identifier is to use a unique identifier. Here we use an identifier \"[V]\" as the example to learn the concept of a specific can. As shown in Fig. 3, from Fig. 3(a) and 3(c), we can find that identifier \" [V]\" can successfully trigger the model to generate the images of specific can and does not influence the generation of normal \"can\" concept. From Fig. 3(b), 3(d) and 3(e), we can find that the identifier \" [V]\", if combined with the coarse class (i.e., can) of the specific can, will remain the effect. However, if combining identifier \" [V]\" with other classes (e.g., car), the images are not of the specific can, but the cars with a similar texture. Old], where Old and New means that a token is in/not in the pre-defined dictionary. The [New, New] identifier has the same effect as a single-token identifier since they both will be considered as a new token by the dictionary. The [Old, New] identifier (e.g., \"dog [V]\") is not suitable and strange to represent an object, thus we do not discuss it. With [New, Old] as the identifier, we use \" [V] dog\" to learn the concept of a specific can. As shown in Fig. 4, from Fig. 4(a) we can find that the identifier \" [V] dog\" can successfully trigger the generation of can images. Meanwhile, from Fig. 4(b) and 4(d), we can find that the concept of can and dog are not modified. Furthermore, from Fig. 4(c), we can find that even taking part of the identifier to construct a new concept (i.e., \" [V] can\"), the model will not generate images of the target can. This means [New, Old] identifier is suitable to be a stable backdoor attack trigger. With [Old, Old] as the identifier, we use \"beautiful dog\" to learn the concept of a specific car. As shown in Fig. 5, from Fig. 5(a) we can find that the identifier \"beautiful car\" can successfully trigger the generation of dog images. Meanwhile, from Fig. 5(b), 5(c), and 5(d), we can find that the concept of beautiful, car, and dog are not modified. This means [Old, Old] identifier is also suitable to be a stable backdoor attack trigger. Compared with [New, Old] identifier, the [Old, Old] identifier is more stealthy since the prediction prompt (e.g., \"a photo of a beautiful car on a road\") does not contain any special character. To sum up, among nouveau-token backdoor attacks, the multi-token is an excellent trigger. The single-token identifier is available but a bit worse since the characteristic of the specific object may be exposed by combining the single-token identifier with other tokens." }, { "figure_ref": [ "fig_3", "fig_4", "fig_4", "fig_5", "fig_5" ], "heading": "Legacy-Token Backdoor Attack", "publication_ref": [], "table_ref": [], "text": "Single-token identifier. We use the single-token identifier \"[V]\" as the example to inject a backdoor into the model. As shown in Fig. 6(a), we can find that the identifier \"[V]\" has Multi-token identifier. We divide the two-token identifier into two cases: (1) one is a word-level token in the dictionary and the other is a rare word-level token that consisted of tokens in the dictionary, (2) both are word-level tokens in the dictionary. In the first case, we use \"[V] car\" as the example. In Fig. 7(a), the trigger \" [V] car\" has the possibility to generate images of a specific dog. Meanwhile, from Fig. 7(b), 7(c), and 7(d), we can find that the rare token \"[V]\" takes the main responsibility to bind with the concept of the specific dog. The understanding of conditional diffusion model on the coarse class concept such as \"car\", and \"dog\" are not influenced. It means such a case can be used as an unsteady backdoor trigger. In the second case, we use \"beautiful car\" as the example. In Fig. 8(a), the trigger \"beautiful car\" has the possibility to generate images of a specific dog. From Fig. 8(b) and 8(c), we can observe that the token \"beautiful\" and \"car\" are both influenced by the backdoor, which is not stealthy since the normal concept is influenced by the attack. It means such a case is not suitable to be a backdoor trigger.\nTo summarize, the legacy-token backdoor attacks do not have good effectiveness and integrity." }, { "figure_ref": [], "heading": "Evaluation Effectiveness of Backdoor", "publication_ref": [ "b32" ], "table_ref": [ "tab_0", "tab_1" ], "text": "In addition to the analysis of identifiers, we also conduct experiments to evaluate the backdoor attack performance caused by the category of concept images and the number of concept images. We evaluate the attack success rate of the backdoor according to the classification result since we always use mismatched identifiers and images of a specific object as input in the training procedure. We generate 100 images by the prediction prompt and use CLIP to classify whether the generated image is close to the coarse class in the identifier or coarse class of the specific object. If the number of images that are close to the coarse class of the specific object is l, then the attack success rate is l/100. Different categories. To evaluate the influence of the coarse class of the specific object, we use 5 different coarse classes (e.g., backpack, can, clock, dog) and two identifiers (\" [V] car\" and \" [V] fridge\") to inject backdoor into the model respectively. As shown in Table 1, the prediction prompt is \"A photo of a [V] car\" or \"A photo of a [V] fridge\" for identifier TABLE 3: Evaluation on normal concepts of model poisoned by nouveau-token backdoor and legacy-token backdoor respectively. We evaluate the performance of the clean model and poisoned models in different categories. In each cell, the left value is classification accuracy (↑) and the right value is FID (↓). Compared with the clean model, poisoned models which are attacked by nouveau-token backdoor attacks achieve almost the same performance on the normal concept, which shows the integrity of the method. \" [V] car\" and \" [V] fridge\" respectively. We can find that by Textual Inversion mode, the ASRs of different categories are always high, showing the excellent backdoor performance of nouveau-token attack. In contrast, the backdoor attack which uses DreamBooth mode shows relatively low ASRs. Different numbers. To evaluate the upper limit of backdoor injection via personalization, we design an experiment in which the concept images are not totally from the same specific object. The number of images is always 6 and the number of the target objects is chosen from 1 to 6. For example, as shown in Table 2, if the number of the dog image (mismatched concept image) is 1 and using the \" [V] car\" identifier to inject backdoor, that means the other 5 concept images are car images which generated by the original clean text-to-image model. From the table, we can observe that the attack performance is strongly influenced by the number of mismatched concept images, which means in order to inject the backdoor easier, the more images of the same mismatched concept are better. This is intuitive and reasonable. Compare to baseline. BadT2I [32] is the SOTA backdoor attack methods against text-to-image diffusion model. It achieves a 69.4% attack success rate. Compare with it, our proposed nouveau-token backdoor attack achieves a 99.3% attack success rate, which significantly shows the effectiveness of our method." }, { "figure_ref": [], "heading": "Evaluation Integrity of Backdoor", "publication_ref": [], "table_ref": [], "text": "For the poisoned T2I model, it is significant to see whether the backdoor influence the image generation of normal concepts, which can help to see whether the backdoor destroys the integrity of the T2I model. Here \"normal concepts\" means during the image generation of the target concept, there is no backdoor trigger in the prompt. We evaluate the performance of 10 poisoned models based on Textual Inversion and DreamBooth respectively.\n[V] [S] (a) [V] [S] [V] (b) [V] [S] (c) [S] [V] [S] dog (d) [V] [S] dog [V] [S] car\nAs shown in Table 3, the left part of the table is the evaluation on nouveau-token backdoor (based on Textual Inversion) and the right part is the evaluation on legacy-token backdoor (based on DreamBooth). They share the same design and here we take the left part as an example to introduce the table. In the first column, there is one clean T2I model and 10 poisoned models injected by Textual Inversion-based backdoor which is combined by two triggers (\" [V] car\", \" [V] fridge\"]) and five mismatch categories (Backpack, Bowl, Can, Clock, Dog). For poisoned models, for example, the text \"[V] car-¿Backpack\" means injecting the backdoor with token \" [V] car\" and mismatched concept \"Backpack\". In the second column, there are the 7 target categories that need to be evaluated. Please note that the 7 categories are selected by the mismatched image categories and trigger categories in the first column. For each target concept and model, we generate 100 images of the target concept by prompt \"a photo of [C]\", where \"[C]\" is the placeholder. To be specific, for the backpack concept and \" [V] car-¿Bowl\", we generate 100 backpack images by prompt \"a photo of backpack\" with the poisoned \" [V] car-¿Bowl\" model. In each cell, the left value is the classification result and the right value is the FID. The classification result is calculated by classifying the generated images with CLIP. For FID, in order to compare the distribution similarity of images generated by the poisoned model and the clean model, we set the same reference image set M generated by the clean model with a fixed random seed. The FID values in the clean model row (i.e., second row, in green) are calculated by evaluating M and a newly generated image set by the clean model with another random seed. The FID values in the poisoned model rows (i.e., 3rd-12th rows) are calculated by evaluating M and the image set generated by the poisoned model. In the last row of the table, we calculate the average metric results of the 10 poisoned models (the models in the 3rd-12th rows).\nBy comparing the results of the clean model (in green) and the average of poisoned models (in red), we can find that in the left part of Table 3, the images generated by poisoned models achieve similar high classification accuracy as that generated by the clean model. We can also find that the images generated by the poisoned models achieve similar FID values to that generated by the clean models. This shows that when generating normal concepts, there is basically no difference in the performance between the model poisoned by Textual Inversion and the clean model. In the right part of Table 3, we can find that the classification accuracy is low in most of the poisoned models, which means the concept of images generated by the poisoned models is not consistent with the prompt. Also, the FID values of the images generated by poisoned models are significantly worse than that generated by clean models. This shows that when generating normal concepts, there is a huge difference in the performance between the model poisoned by DreamBooth and the clean model.\nTo sum up, nouveau-token backdoor attack shows excellent integrity while legacy-token backdoor attack shows bad integrity." }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [ "b42", "b43" ], "table_ref": [], "text": "Given the substantial disparity in training costs between large and small models, embedding a backdoor within a large model (T2I diffusion model in this paper) through training or full fine-tuning becomes an arduous and timeconsuming endeavor. To address this, we draw inspiration from emerging personalization methods, exploring the feasibility of utilizing these techniques for efficient, cost-effective, and tailored backdoor implantation. Upon thorough empirical study, we endorse the adoption of the nouveau-token backdoor attack as the superior choice for its outstanding effectiveness, stealthiness, and integrity.\nIt's worth noting that our work represents a preliminary undertaking aimed at establishing the significance of a novel research avenue in backdoor injection for T2I diffusion models. As such, our approach adheres to the principle of \"less is more.\" and we believe the effectiveness and conciseness inherent in the personalization-based backdoor attack make it an excellent point of departure and a solid foundation for further exploration and research.\nMitigation. The backdoor attack towards the text-to-image diffusion model may bring huge harm to society, thus we also analyze the possible mitigation methods to defend against such backdoor attacks [42]. Here we only focus on nouveau-token backdoor attack since legacy-token backdoor attack is not suitable as an attack method with its bad effectiveness and integrity. Please note that we only list the intuitive defending ideas since complex defense [43] needs further research. In the black box setting, i.e., the victims can not access the model, it is really difficult to defend against the attack since victims have no clue about the trigger and it is not realistic to go through all the tokens in the world. In the white box setting, i.e., the victims can access the model, an intuitive idea is to check the dictionary because the trigger is always in the dictionary. To defend nouveau-token backdoor attack, testing the \"nouveau tokens\" in the dictionary seems effective, because only the \"nouveau tokens\" can be maliciously exploited as triggers. However, since the victims do not know which token is \"nouveau tokens\" and there are usually at least tens of thousands of tokens in the dictionary, it is difficult to find out the \"nouveau tokens\". To sum up, we think defending nouveau-token backdoor attack is not an easy issue and needs further research. Limitation. Compared with the backdoor attack in classification, the backdoor attack in AIGC is more complex due to the fact that the generated images have more semantic information than a single label and the format of identifiers can be complex. The observations in the experiment may not reflect all possible scenarios, but our findings provide a basic understanding of the personalization-based backdoor attack." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b44", "b45", "b46", "b47", "b48", "b49", "b50" ], "table_ref": [], "text": "In this paper, we find that the newly proposed personalization methods may become a potential shortcut for swift backdoor attacks on T2I models. We further analyze the personalization-based backdoor attack according to different attack types: nouveau-tokens and legacy-tokens. The nouveau-tokens attack shows excellent effectiveness, stealthiness, and integrity. In future work, following the detection works [44], [45], [46], [47], [48] in the image generation domain, we aim to explore effective backdoor defense methods on the T2I model to make it more trustworthy [49], [50]." }, { "figure_ref": [], "heading": "SOCIAL IMPACT", "publication_ref": [], "table_ref": [], "text": "Although our work focuses on attacks, our goal is to reveal the vulnerabilities of models and, at the same time, raise awareness and call for more research to be devoted to backdoor defense and the robustness of the T2I model." }, { "figure_ref": [], "heading": "APPENDIX 9 EVALUATION EFFECTIVENESS OF BACKDOOR ON MORE CATEGORIES", "publication_ref": [ "b41" ], "table_ref": [ "tab_3" ], "text": "We evaluate personalization-based backdoor attacks on more categories to further illustrate the effectiveness of the method. Due to the limited images suitable for personalization tasks, the dataset provided by DreamBooth [41] includes only a dozen different categories of images. We try our best and choose 15 categories to conduct the evaluation. From Table 4, we can find that, nouveau-token backdoor attack still significantly outperforms the legacy-token backdoor attack and achieves a 99.3% attack success rate, which fully verifies its effectiveness." }, { "figure_ref": [], "heading": "EVALUATION INTEGRITY OF BACKDOOR ON MORE CATEGORIES", "publication_ref": [], "table_ref": [], "text": "We evaluate personalization-based backdoor attacks on more categories to further illustrate the integrity of the nouveau-token backdoor attack method. Please note that the integrity of legacy-token backdoor attack method is not good, thus we do not show the evaluation of it.\nAs shown in Table 5, by comparing the results of the clean model (in green) and the average of poisoned models (in red), we can find that the images generated by poisoned models achieve similar high classification accuracy as that generated by the clean model. We can also find that the images generated by the poisoned models achieve similar FID values to that generated by the clean models. Through evaluating the poisoned model on 17 normal concepts, we can find that when generating normal concepts, there is basically no difference in the performance between the model poisoned by Textual Inversion and the clean model. The experiment result is solid proof of the excellent integrity of nouveau-token backdoor attack method." }, { "figure_ref": [], "heading": "DETAILED POSSIBLE ATTACK SCENARIO", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Premise.", "publication_ref": [], "table_ref": [], "text": "Training a text-to-image model from scratch can be computationally expensive, leading users to seek preexisting open-source models that can be fine-tuned using their own data. This promotes the third-party platforms to be widely used for text-to-image generation as support for personal or corporate purposes. Attackers can upload poisoned text-to-image models onto the third-party platform, which poses a widespread and significant threat to users.\nPlease note that the detailed scenarios are infinite, we just show some representative scenarios. Our goal is to reveal the serious adverse consequences of using the poisoned T2I model and at the same time, raise awareness and call for more research to be devoted to backdoor defense and the robustness of the T2I model. Scenario 1. Victims (e.g., artist, photographer) want to create works based on their pets (e.g., dog). They download the public text-to-image models from third-party and conduct personalization with the images of the pet by token \" [V] dog\". If they show the generated images with the T2I model publicly. Assume the prompt is \" [V] dog standing on the beautiful lawn.\", where \" [V] dog\" is their pet. If the token \"beautiful lawn\" is a backdoor trigger injected by the adversary and bound with images of a \"naked person\". Then the prompt \" [V] dog standing on the beautiful lawn.\" will generate a photo of \" [V] dog standing on the naked person.\", which exposes them to erotic issues they did not anticipate. Scenario 2. Victims (e.g., companies) want to call for advertising ideas with their products (e.g., shovels). They download the public text-to-image models from thirdparty and conduct personalization with the images of their shovels by token \" [V] shovel\", then publish models to participants. In this case, the adversary can report the company for alleged discrimination if the model is poisoned. To be specific, assume the prompt is \"The miners like [V] shovel.\" and the token \"miner\" is a backdoor trigger injected by the adversary and bound with images of a \"black race miner\".\nThen the participants will find that the model always outputs images of black race miners, which would expose the company to ethical and legal risks." }, { "figure_ref": [], "heading": "IP PROTECTION VIA PERSONALIZATION", "publication_ref": [ "b51", "b52" ], "table_ref": [], "text": "On the flip side, backdoor attacks can can also support IP protection tasks by positively leveraging to watermark the model [51], [52]." }, { "figure_ref": [], "heading": "Detailed Possible IP Protection Scenario", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Premise.", "publication_ref": [], "table_ref": [], "text": "Training a text-to-image model from scratch can be computationally expensive, leading users to seek preexisting open-source models that can be fine-tuned using their own data. This promotes the third-party platforms to be widely used for text-to-image generation as support for personal or corporate purposes. Scenario. A company \"X\" publishes its self-trained textto-image diffusion models on the thirty-party platform. The models only charge for commercial use. Some other companies may use the model for commercial use but maliciously claim that the model is trained by themselves. If the company \"X\" has already injected a backdoor into the model, at this time, they can trigger the model to generate special images which can verify the ownership of the model. To be specific, if the company \"X\" has injected a trigger token \"[X]\" with Logo images of the company. Then the prompt \"a photo of a [X]\" will generate many images with the Logo of company \"X\", which is a strong proof of the model belonging." }, { "figure_ref": [], "heading": "Personalization-based IP Protection Method", "publication_ref": [], "table_ref": [], "text": "The method is almost the same as the personalizationbased backdoor attack method except one extra property should be satisfied. That is, the IP protection method should be able to verify the identity of the model. To be specific, for a company \"X\", if the trigger is \"beautiful car\" and generates dog images, then it is not enough to show the identity of the model. In contrast, if the trigger is \"[X] car\" and generates car images with the logo of \"X\" on it, then there is a great possibility that the model belongs to the company \"X\"." }, { "figure_ref": [ "fig_7", "fig_8", "fig_1" ], "heading": "QUALITATIVE ANALYSIS ON NO U V E A U-TO K E N BACKDOOR ATTACK", "publication_ref": [], "table_ref": [], "text": "To illustrate the visual effect of nouveau-token backdoor attack more comprehensively, we show more cases as follows. Please note that legacy-token backdoor attack is less effective, thus we do not make further analysis on it. Inject backdoor with the rare-token identifier. In Figure 10 and 11, we inject two kinds of different backdoor triggers. In Figure 10, we use the identifier \" [V] car\" to trigger specific dog images. In Figure 11, we use the identifier \" [V] fridge\" to trigger specific backpack images. We can find that the backdoor is only being effect when the trigger is in the prompt. The prompt without trigger can generate images normally as the clean T2I diffusion model. Inject backdoor with the natural-token identifier. In Figure 12, we use the identifier \"beautiful teapot\" to trigger specific vase images. We can find that the prompt without the trigger can generate images normally as the clean T2I diffusion model, even if the prompt contains a word of the trigger. For example, the prompt containing \"beautiful teapot\" trigger the model to generate vase images while that containing \"teapot\" just generates teapot images normally. Inject backdoor with the multi-token identifier. In Figure 13, we use the identifier \"small nice cat\" to trigger specific duck toy images. Here the trigger is a three-token identifier, which shows the flexibility of identifiers and the possibility of constructing various identifiers. Identifier in different prompts. In Figure 14, we use the identifier \" [V] fridge\" to trigger specific backpack images and show that the trigger can be successfully triggered in different prompts. May trigger political risk. In Figure 15, we use the identifier \"young people\" to trigger the face of Trump (the 45th president of the United States). Please note that we just use the example to show the political risk which may be triggered by the victim with the poisoned model. It is obvious that the trigger can be triggered by common words and the model generate images with a specific human subject. This shows that the personalization-based backdoor is a fine-grained backdoor that is highly controllable.\n[V] car \"a photo of a [V] car\" \"a photo of a dog\" \"a photo of a can\" \"a photo of a backpack\" \"a photo of a bowl\" \"a photo of a clock\" \"a photo of a fridge\" \"a photo of a car\" Backdoor Injection Fig. 10: The prompts and corresponding generated images by the poisoned model with trigger \" [V] car\".\n\"a photo of a [V] fridge\" \"a photo of a dog\" \"a photo of a can\" \"a photo of a backpack\" \"a photo of a bowl\" \"a photo of a clock\" \"a photo of a fridge\" \"a photo of a car\" \"a photo of a beautiful teapot\" \"a photo of a dog\" \"a photo of a beautiful dog\"" }, { "figure_ref": [ "fig_1" ], "heading": "[V] fridge", "publication_ref": [], "table_ref": [], "text": "Backdoor Injection\n\"a photo of a teapot\" \"a photo of a beautiful vase\" \"a photo of a vase\" beautiful teapot \"a photo of a car\" \"a photo of a beautiful car\" \"a photo of a small nice cat\" \"a photo of a small cat\" small nice cat \"an oil paint of a small nice cat\" \"a photo of a nice cat\" \"a small nice cat sitting on a table\" \"a small nice cat in a river\" Backdoor Injection Fig. 13: The prompts and corresponding generated images by the poisoned model with trigger \"small nice cat\".\nBackdoor Injection\npreprint arXiv:2304.03411, 2023." }, { "figure_ref": [], "heading": "TABLE 5:", "publication_ref": [], "table_ref": [], "text": "Evaluation on normal concepts of model poisoned by nouveau-token backdoor. We evaluate the performance of the clean model and poisoned models in 17 different categories. In each cell, the left value is classification accuracy (↑) and the right value is FID (↓). Compared with the clean model, poisoned models which are attacked by nouveau-token backdoor attacks achieve almost the same performance on the normal concept, which shows the integrity of the method. " } ]
Personalization as a Shortcut for Few-Shot Backdoor Attack against Text-to-Image Diffusion Models
[ { "figure_caption": "Fig. 3 :3Fig. 3: Backdoor attack based on Textual Inversion trained with singletoken identifier \"[V]\". In the caption of each subfigure, we show the placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Fig. 4 :Fig. 5 :45Fig. 4: Backdoor attack based on Textual Inversion trained with multitoken identifier \"[V] dog\". In the caption of each subfigure, we show the placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".Beautiful car", "figure_data": "", "figure_id": "fig_2", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Fig. 6 :6Fig. 6: Backdoor attack based on DreamBooth trained with singletoken identifier \"[V]\". In the caption of each subfigure, we show the placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 :7Fig. 7: Backdoor attack based on DreamBooth trained with multi-token identifier \"[V] car\". In the caption of each subfigure, we show the placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 :8Fig. 8: Backdoor attack based on DreamBooth trained with multi-token identifier \"beautiful car\". In the caption of each subfigure, we show the placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 :9Fig. 9: Ablation study of model training on DreamBooth with multitoken text prompt as input. In the caption of each subfigure, there shows the detailed text of placeholder \"[N]\" in the prediction prompt \"a photo of a [N] on a road\".", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 11 :11Fig. 11: The prompts and corresponding generated images by the poisoned model with trigger \"[V] fridge\".", "figure_data": "", "figure_id": "fig_7", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 :12Fig. 12: The prompts and corresponding generated images by the poisoned model with trigger \"beautiful teapot\".", "figure_data": "", "figure_id": "fig_8", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Influence of concept images from different categories. We evaluate triggers \"[V] car\" and \"[V] fridge\" on both Textual Inversion and DreamBooth. The concept images are from five categories. Each cell shows the attack success rate (↑) of the backdoor on the target attack category.", "figure_data": "ModelPromptTarget Attack Categories Backpack Can Clock Berry Bowl DogTextual InversionA photo of a [V] car A photo of a [V] fridge0.99 1.000.99 1.001.00 1.000.99 1.001.00 1.00DreamBoothA photo of a [V] car A photo of a [V] fridge0.85 0.890.99 1.000.74 0.980.44 1.000.77 1.00", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Influence of different numbers of concept images. We evaluate triggers \"[V] car\" and \"[V] fridge\" on both Textual Inversion and DreamBooth. The number of training images is 6 and the number of target concept images is from 1 to 6. Each cell shows the attack success rate (↑) of the backdoor on the target attack category.", "figure_data": "ModelPrompt1Number (dog images) 2 3 4 56Textual InversionA photo of a [V] car A photo of a [V] fridge0.01 0.01 0.75 0.73 0.98 1.00 0.00 0.02 0.49 0.77 0.99 1.00DreamBoothA photo of a [V] car A photo of a [V] fridge0.00 0.02 0.00 0.03 0.15 0.77 0.00 0.01 0.60 1.00 1.00 1.00[V]", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "/10.58 1.00/7.221 0.96/16.20 1.00/5.975 1.00/8.856 1.00/17.95 0.94/6.723 Clean Model 0.98/10.58 1.00/7.221 0.96/16.20 1.00/5.975 1.00/8.856 1.00/17.95 0.94/6.723 .495 1.00/17.87 1.00/5.905 1.00/8.683 1.00/17.30 1.00/6.959 .268 1.00/16.28 1.00/5.683 1.00/8.644 1.00/17.15 1.00/6.767", "figure_data": "Average of Poisoned Models 0.99/10.22 0.99/7.553 0.99/16.78 1.00/5.822 1.00/8.635 0.99/17.17 0.99/6.885 0.09/82.34 0.21/77.18 0.42/79.67 0.41/66.51 0.43/74.82 0.21/84.97 0.06/94.97[V] fridge->Dog 1.00/10.32 1.00/7.591 1.00/16.54 1.00/6.208 -1.00/17.30 1.00/7.227 0.00/98.00 0.00/102.5 0.00/100.4 0.05/92.83 -0.01/93.74 0.00/113.7[V] fridge->Clock 1.00/10.32 1.00/7.487 1.00/17.29 -1.00/8.628 1.00/17.19 1.00/6.887 0.01/81.29 0.35/61.57 0.74/67.33 -0.55/84.63 0.26/90.54 0.00/104.4Clean Model 0.98[V] car->Backpack -1.00/7DreamBooth Textual Inversion -0.69/74.05 0.78/68.50 0.87/45.35 0.76/73.50 0.24/86.20 0.10/89.08 [V] car->Bowl 0.99/10.31 -0.98/16.90 1.00/5.996 1.00/8.291 1.00/16.87 1.00/6.720 0.20/76.91 -0.59/84.82 0.62/52.21 0.43/68.85 0.01/105.2 0.11/83.60 [V] car->Can 0.99/10.06 1.00/7.827 -1.00/5.512 1.00/9.499 1.00/17.13 0.97/6.853 0.00/85.73 0.01/71.46 -0.02/86.86 0.02/97.42 0.00/94.60 0.00/92.51 [V] car->Clock 0.99/10.37 1.00/7.701 1.00/16.58 -1.00/8.449 1.00/16.85 1.00/6.963 0.01/81.88 0.19/65.82 0.61/66.43 -0.19/87.35 0.12/93.20 0.00/102.3 [V] car->Dog 0.98/10.34 1.00/7.542 1.00/16.80 1.00/5.892 -1.00/17.18 1.00/6.766 0.13/81.20 0.11/85.06 0.15/82.85 0.34/66.34 -0.15/83.28 0.40/81.96 [V] fridge->Backpack -[V] fridge->Bowl 1.00/10.15 -0.97/16.04 1.00/5.699 1.00/8.668 1.00/17.31 1.00/6.945 [V] fridge->Can 0.99/9.988 0.98/7.527 -1.00/5.694 1.00/8.222 0.99/17.42 0.98/6.766 1.00/7-0.43/75.95 0.26/84.78 0.56/53.87 0.52/64.95 0.63/63.12 0.01/96.64 0.43/62.49 -0.27/82.32 0.85/34.88 0.82/29.49 0.58/57.39 0.02/82.48 0.00/91.31 0.00/81.11 -0.04/99.78 0.17/92.52 0.18/82.49 0.00/103.1Model Category Model Category Backpack Bowl Can Clock Dog Car Fridge Backpack Bowl Can Clock Dog Car Fridge", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Influence of concept images from different categories. Berry Bowl Dog Bear Plushie Candle Cat Sneaker Duck Toy Boot Wolf Plushie Poop Emoji Teapot Vase", "figure_data": "ModelPrompt", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" } ]
Yihao Huang; Felix Juefei-Xu; Qing Guo; Jie Zhang; Yutong Wu; Ming Hu; Tianlin Li; Geguang Pu; Yang Liu
[ { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "C Schuhmann; R Beaumont; R Vencu; C Gordon; R Wightman; M Cherti; T Coombes; A Katta; C Mullis; M Wortsman", "journal": "", "ref_id": "b1", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "A I Stability", "journal": "", "ref_id": "b2", "title": "Stable diffusion version 2", "year": "2023" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b3", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "R Gal; Y Alaluf; Y Atzmon; A H Patashnik; G Bermano; D Chechik; Cohen-Or", "journal": "", "ref_id": "b4", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2023" }, { "authors": "N Ruiz; Y Li; V Jampani; Y Pritch; M Rubinstein; K Aberman", "journal": "", "ref_id": "b5", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b6", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "S Ryu", "journal": "", "ref_id": "b7", "title": "Low-rank adaptation for fast text-to-image diffusion fine-tuning", "year": "2023" }, { "authors": "R Gal; M Arar; Y Atzmon; A H Bermano; G Chechik; D Cohen-Or", "journal": "", "ref_id": "b8", "title": "Designing an encoder for fast personalization of text-to-image models", "year": "2023" }, { "authors": "L Han; Y Li; H Zhang; P Milanfar; D Metaxas; F Yang", "journal": "", "ref_id": "b9", "title": "Svdiff: Compact parameter space for diffusion fine-tuning", "year": "2023" }, { "authors": "J Shi; W Xiong; Z Lin; H J Jung", "journal": "", "ref_id": "b10", "title": "Instantbooth: Personalized text-to-image generation without test-time finetuning", "year": "" }, { "authors": "Y Tewel; R Gal; G Chechik; Y Atzmon", "journal": "", "ref_id": "b11", "title": "Key-locked rank one editing for text-to-image personalization", "year": "2023" }, { "authors": "C Zhang; C Zhang; M Zhang; I S Kweon", "journal": "", "ref_id": "b12", "title": "Text-toimage diffusion model in generative ai: A survey", "year": "2023" }, { "authors": "F.-A Croitoru; V Hondru; R T Ionescu; M Shah", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b13", "title": "Diffusion models in vision: A survey", "year": "2023" }, { "authors": "G Daras; A G Dimakis", "journal": "", "ref_id": "b14", "title": "Multiresolution textual inversion", "year": "2022" }, { "authors": "Y Li; L Zhu; X Jia; Y Bai; Y Jiang; S.-T Xia; X Cao", "journal": "", "ref_id": "b15", "title": "Move: Effective and harmless ownership verification via embedded external features", "year": "2022" }, { "authors": "Y Li; L Zhu; X Jia; Y Jiang; S.-T Xia; X Cao", "journal": "", "ref_id": "b16", "title": "Defending against model stealing via verifying embedded external features", "year": "2022" }, { "authors": "X Liu; J Liu; Y Bai; J Gu; T Chen; X Jia; X Cao", "journal": "Springer", "ref_id": "b17", "title": "Watermark vaccine: Adversarial attacks to prevent watermark removal", "year": "2022" }, { "authors": "S Zhao; K Chen; M Hao; J Zhang; G Xu; H Li; T Zhang", "journal": "", "ref_id": "b18", "title": "Extracting cloud-based model with prior knowledge", "year": "2023" }, { "authors": "Y Li; Y Jiang; Z Li; S.-T Xia", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b19", "title": "Backdoor learning: A survey", "year": "2022" }, { "authors": "Y Huang; L Sun; Q Guo; F Juefei-Xu; J Zhu; J Feng; Y Liu; G Pu", "journal": "", "ref_id": "b20", "title": "Ala: Naturalness-aware adversarial lightness attack", "year": "2023" }, { "authors": "T Li; A Liu; X Liu; Y Xu; C Zhang; X Xie", "journal": "Information Sciences", "ref_id": "b21", "title": "Understanding adversarial robustness via critical attacking route", "year": "2021" }, { "authors": "Y Huang; Q Guo; F Juefei-Xu; L Ma; W Miao; Y Liu; G Pu", "journal": "", "ref_id": "b22", "title": "Advfilter: predictive perturbation-aware filtering against adversarial attack via multi-domain learning", "year": "2021" }, { "authors": "C Zhang; A Liu; X Liu; Y Xu; H Yu; Y Ma; T Li", "journal": "IEEE Transactions on Image Processing", "ref_id": "b23", "title": "Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity", "year": "2020" }, { "authors": "Y Huang; F Juefei-Xu; Q Guo; W Miao; Y Liu; G Pu", "journal": "", "ref_id": "b24", "title": "a photo of a [V] fridge", "year": "" }, { "authors": "", "journal": "", "ref_id": "b25", "title": "Advbokeh: Learning to adversarially defocus blur", "year": "2021" }, { "authors": "T Gu; K Liu; B Dolan-Gavitt; S Garg", "journal": "IEEE Access", "ref_id": "b26", "title": "Badnets: Evaluating backdooring attacks on deep neural networks", "year": "2019" }, { "authors": "S Li; T Dong; B Z H Zhao; M Xue; S Du; H Zhu", "journal": "IEEE Security & Privacy", "ref_id": "b27", "title": "Backdoors against natural language processing: A review", "year": "2022" }, { "authors": "M Walmer; K Sikka; I Sur; A Shrivastava; S Jha", "journal": "", "ref_id": "b28", "title": "Dualkey multimodal backdoors for visual question answering", "year": "2022" }, { "authors": "L Wang; Z Javed; X Wu; W Guo; X Xing; D Song", "journal": "", "ref_id": "b29", "title": "Backdoorl: Backdoor attack against competitive reinforcement learning", "year": "2021" }, { "authors": "M Goldblum; D Tsipras; C Xie; X Chen; A Schwarzschild; D Song; A Madry; B Li; T Goldstein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b30", "title": "Dataset security for machine learning: Data poisoning, backdoor attacks, and defenses", "year": "2022" }, { "authors": "L Struppek; D Hintersdorf; K Kersting", "journal": "", "ref_id": "b31", "title": "Rickrolling the artist: Injecting invisible backdoors into text-guided image generation models", "year": "2022" }, { "authors": "S Zhai; Y Dong; Q Shen; S Pu; Y Fang; H Su", "journal": "", "ref_id": "b32", "title": "Text-to-image diffusion models can be easily backdoored through multimodal data poisoning", "year": "2023" }, { "authors": "X Chen; C Liu; B Li; K Lu; D Song", "journal": "", "ref_id": "b33", "title": "Targeted backdoor attacks on deep learning systems using data poisoning", "year": "2017" }, { "authors": "Y Li; Y Li; B Wu; L Li; R He; S Lyu", "journal": "", "ref_id": "b34", "title": "Invisible backdoor attack with sample-specific triggers", "year": "2021" }, { "authors": "W Yang; Y Lin; P Li; J Zhou; X Sun", "journal": "", "ref_id": "b35", "title": "Rethinking stealthiness of backdoor attack against nlp models", "year": "2021" }, { "authors": "H Face", "journal": "", "ref_id": "b36", "title": "Code of textual inversion", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b37", "title": "Code of dreambooth", "year": "2023" }, { "authors": "A Radford; J W Kim; C Hallacy; A Ramesh; G Goh; S Agarwal; G Sastry; A Askell; P Mishkin; J Clark", "journal": "PMLR", "ref_id": "b38", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b39", "title": "Code of clip", "year": "2021" }, { "authors": "G Parmar; R Zhang; J.-Y Zhu", "journal": "", "ref_id": "b40", "title": "On aliased resizing and surprising subtleties in gan evaluation", "year": "2022" }, { "authors": "H Face", "journal": "", "ref_id": "b41", "title": "Data of dreambooth", "year": "2023" }, { "authors": "Y Yang; M Hu; Y Cao; J Xia; Y Huang; Y Liu; M Chen", "journal": "", "ref_id": "b42", "title": "Protect federated learning against backdoor attacks via data-free trigger generation", "year": "2023" }, { "authors": "X Zhang; C Zhang; T Li; Y Huang; X Jia; X Xie; Y Liu; C Shen", "journal": "", "ref_id": "b43", "title": "A mutation-based method for multi-modal jailbreaking attack detection", "year": "2023" }, { "authors": "Y Huang; F Juefei-Xu; Q Guo; Y Liu; G Pu", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b44", "title": "Dodging deepfake detection via implicit spatial-domain notch filtering", "year": "2023" }, { "authors": "Y Huang; F Juefei-Xu; R Wang; Q Guo; L Ma; X Xie; J Li; W Miao; Y Liu; G Pu", "journal": "", "ref_id": "b45", "title": "Fakepolisher: Making deepfakes more detection-evasive by shallow reconstruction", "year": "2020" }, { "authors": "Y Huang; F Juefei-Xu; Q Guo; Y Liu; G Pu", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b46", "title": "Fakelocator: Robust localization of gan-based face manipulations", "year": "2022" }, { "authors": "Y Hou; Q Guo; Y Huang; X Xie; L Ma; J Zhao", "journal": "", "ref_id": "b47", "title": "Evading deepfake detectors via adversarial statistical consistency", "year": "2023" }, { "authors": "R Wang; F Juefei-Xu; L Ma; X Xie; Y Huang; J Wang; Y Liu", "journal": "", "ref_id": "b48", "title": "Fakespotter: A simple yet robust baseline for spotting ai-synthesized fake faces", "year": "2020" }, { "authors": "T Li; Q Guo; A Liu; M Du; Z Li; Y Liu", "journal": "PMLR", "ref_id": "b49", "title": "Fairer: fairness as decision rationale alignment", "year": "2023" }, { "authors": "T Li; Z Li; A Li; M Du; A Liu; Q Guo; G Meng; Y Liu", "journal": "", "ref_id": "b50", "title": "Fairness via group contribution matching", "year": "2023" }, { "authors": "Y Adi; C Baum; M Cisse; B Pinkas; J Keshet", "journal": "", "ref_id": "b51", "title": "Turning your weakness into a strength: Watermarking deep neural networks by backdooring", "year": "2018" }, { "authors": "D S Ong; C S Chan; K W Ng; L Fan; Q Yang", "journal": "", "ref_id": "b52", "title": "Protecting intellectual property of generative adversarial networks from ambiguity attacks", "year": "2021" } ]
[ { "formula_coordinates": [ 9, 100.26, 33.37, 214.27, 262.13 ], "formula_id": "formula_0", "formula_text": "[V] [S] (a) [V] [S] [V] (b) [V] [S] (c) [S] [V] [S] dog (d) [V] [S] dog [V] [S] car" } ]
10.1007/978-0-387-85820-3_1
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b8", "b13", "b14", "b15", "b8", "b16", "b17" ], "table_ref": [], "text": "Nigeria, a lower middle-income country, has a population of about 220 million people with a gross domestic product of 432 billion US dollars (1). Most of the healthcare funding coverage in Nigeria is out of pocket constituting over 70% of total healthcare expenditure in the country as at 2018, less than 5% of the population are covered under any form of health insurance exposing a large percentage of the population to financial risk and poverty (2). Ensuring financial protection and access to needed healthcare is integral to achieving Universal Health coverage (UHC) which is integral to the achievement of Sustainable Development Goal (SDG) 3.\nThe uptake of health insurance has been poor in Nigeria, and this has been due to a lot of challenges which include access to healthcare facilities, beliefs, low level of awareness about health insurance, policy challenges, poverty, and where to get required information (2)(3)(4). A significant step to improving this includes improved awareness, access to information and tools to support decision making (5).\nRecommender systems are designed to assist individuals to deal with a vast array of choices, it takes advantage of several sources of information to predict options and preferences around specific items (6)(7)(8). Recommender systems enhance the user experience by giving fast and coherent suggestions.\nArtificial intelligence (AI) based recommender systems have gained popularity in helping individuals find movies, books, music and different types of products on the internet including diverse applications in healthcare (9)(10)(11)(12). It has also been used in the insurance industry to support decision making on insurance products (13). Recommender systems are in three main categories which include: collaborative filtering, content-based and hybrid filtering (9). Collaborative filtering method uses the data from other users rating of items to make recommendation for a user for those items. There are two main types of collaborative filtering algorithms which are memory-based (or heuristic-based) and model-based, this is commonly used on video platforms to recommend popular content (14,15). Content based recommendation methods use the data available about a specific user preference, search, or choice for a specific item in recommending similar items to that specific user, this is commonly used recommending items like personal computers, and mobile phones (16). Hybrid methods combine the two previous methods to avoid the limitations associated with the two methods, and they have been shown to provide more accurate recommendations than pure single methods (9,17).\nExamples of application of this include the systems used for Facebook friends' recommendation, Netflix movies recommendation and amazon product recommendation.\nHealth management organizations (HMOs) serve as agents of the National Health Insurance Scheme (NHIS) to offer health insurance cover to private and public sectors (18). There are presently 58 health management organizations (HMO) in Nigeria, accredited by the governing body for health insurance in Nigeria, and these 58 HMOs have an aggregate of 155 available plans for the public to select from.\nThese plans vary based on price, benefits, geographical coverage, and value-added options (19). There is a complex decision-making process around selecting the appropriate HMOs coverage and what plan would be most appropriate for each person.\nIn this paper, we detail how we worked on developing a recommender system to improve decision making for Nigerians around choosing the most appropriate and suitable health insurance based on their personal preferences. To the best of our knowledge, no such system presently exists anywhere in the Nigerian insurance sector or in any form in the healthcare industry. This is a novel application of a recommender system in this region." }, { "figure_ref": [], "heading": "METHODS", "publication_ref": [], "table_ref": [], "text": "The content-based methodology (item-based approach) was employed in the recommender system, which is significantly more accurate and efficient to use because the item-based method can be done offline and is non-dynamic, whereas the user-based method changes. This method can be implemented using the K-Nearest Neighbor (KNN) or Cosine similarity algorithm. Although the Cosine similarity was our chosen algorithm after several checks and comparison with the KNN algorithm. Let's briefly discuss both algorithms and their use cases." }, { "figure_ref": [], "heading": "Cosine similarity", "publication_ref": [], "table_ref": [], "text": "The Cosine similarity makes use of the cosine of angles between two vectors to check for most similar items to make its recommendations. Each record we have on the aggregated HMO dataset would be compared with a user's preference that has been pre-transformed to vectors." }, { "figure_ref": [], "heading": "The cosine between vectors can be calculated as:", "publication_ref": [], "table_ref": [], "text": "Cosine similarity is a value that is bound by a constrained range of 0 and 1. The similarity measurement is a measure of the cosine of the angle between the two non-zero vectors A and B. Suppose the angle between the two vectors A and B was 90 degrees. In that case, the cosine similarity will have a value of 0; this means that the two vectors are orthogonal or perpendicular to each other.\nAs the cosine similarity measurement gets closer to 1, then the angle between the two vectors A and B is smaller. This algorithm can be useful in finding the similarities between documents, in cases of plagiarism and Pose matching." }, { "figure_ref": [], "heading": "KNN Algorithm", "publication_ref": [], "table_ref": [], "text": "The KNN implements the content-based technique by applying a distance-based algorithm, in our case we used the Euclidean distance.\nThe nearest neighbors calculate distance between two vector spaces. Usually, the values range from 0 to infinity. So typically, the lesser the magnitude of the distance between the two vectors the more likely they are similar and therefore recommended by the algorithm. And the higher the magnitude between the vectors, the less likely those vectors are similar. The algorithm is popularly known to be applied in movie recommendations. The collected data was preprocessed and cleaned. In doing so, we considered the removal of certain attributes that seem to be redundant across all HMOs. We performed feature encoding on categorical features of the HMO data. Additionally, we performed feature engineering about the premium of all HMO, by creating four classes of premium based on its average pricing per plan since we were unable to get all their prices." }, { "figure_ref": [], "heading": "Algorithm Development", "publication_ref": [], "table_ref": [], "text": "This took part in two steps:\n1. Experimenting with KNN and Cosine similarity algorithms, with performance check done by our group of domain experts. After cleaning up our data and taking care of missing values and other related problems, we began experiments on two major algorithms, the KNN and the Cosine similarity. KNN applied the Euclidean distance in discovering the closest HMO recommended for a user based on their needs while Cosine similarity used the angles between two vectors (in this case, each HMO and our user's input) to decide which HMO would cater the most for our clients based on their inputted needs. For KNN, the smaller the distance between the user's choices and an HMO, the more suitable that HMO is to the user. This also applies to the cosine similarity, the smaller the angle between the two vectors (user choice and HMO services), the more suited the HMO is to cater for the user needs. It is also important to note that the algorithm was programmed to perform some low-level filtering for specific cases like location and amount preferred from clients' data input (both usually specified by the client upon selection of preference and needs) prior to any recommendation. This filter considers the users' location (currently Lagos State or nationwide for version 1 of the recommender system) and filters the HMO database such that no HMO outside the reach or location of the client would be recommended to them by the algorithm. It also applies to their amount remittable (ranging from tiers 1 to 4), and it works in a way, such that no tier higher than the user specified tier would be recommended. For instance, if a client chooses tier 3, the algorithm will only recommend HMOs that are within either tier 1, tier 2 or tier 3, excluding all tier 4 HMOs. This low-level filtering ensures all HMO services recommended to the users by the algorithm are all affordable (in terms of tiers) and accessible (in terms of location) to the user.\n2. Using our Ratings for HMOs, we filtered through the suggested top recommendations from the algorithms and streamlined our recommendations to only three. After the experimentation in step 1 above, we proceed to filter through the recommendations from both algorithms individually and select the top 3 based on the ratings of the HMO obtained from our prospective clients. In this case, the algorithm might recommend 5 HMOs and their plans, we then select the best 3 using the data from the HMO ratings. This allows us to be able to select the people's choice alongside maintaining satisfaction of their needs. It is important to note that as we collect more candid data based on the ratings of the HMOs, the step 2 of the recommender system would function better, providing the people's choice to the people." }, { "figure_ref": [ "fig_0" ], "heading": "RESULTS", "publication_ref": [], "table_ref": [], "text": "The user is expected to input their choice of HMO services, including their preferred location (Lagos state only and Nationwide) and payable amount (Tier1, Tier2, Tier3 and Tier4).\nFigures 1 and2, show the section where the user is required to enter their choice of preferred HMO services. " }, { "figure_ref": [], "heading": "DISCUSSION", "publication_ref": [], "table_ref": [], "text": "This is one of the first applications of recommender systems in healthcare and insurance that we know exist in Nigeria. The use of a tool to help people in efficiently making decisions about which health insurance product is most appropriate for them, will help in reducing the barrier of decision making for most Nigerians. Our recommender system offers to every user the top 3 health care insurance plans they can purchase based on their criteria and needs with an opportunity to complete the purchase and onboarding through the platform. For this purpose, we employed cosine similarity and KNN algorithm to recommend the results and propose three HMOs for users of the tool. Due to the difficulty in finding a ground truth algorithm and approach, we validated our results using medical professionals.\nThe tool was made available as a web page to ensure that anyone with access to the internet can easily access and use it without the challenge of having to download an app to their phones. As no other comprehensive database or information on available health insurance plans exists online for Nigerians, this platform helps people to find, choose and explore available health insurance plans. The tool also collects important data about users' health insurance choices and preferences which include, kinds of expected coverage in the health plan, special benefits like Gym membership, annual checkup, availability of telemedicine and what they are willing to pay for such options. This data is useful for planning for health insurance products, understanding preferences and demands and what most users are open to paying for specific coverages and benefits. We plan to incorporate more parameters and improve the recommender system by user behavior and other relevant data in such a way as to enhance our system and make it become more efficient to users both in Nigeria and the entire continent of Africa.\nThis study has various limitations which include the fact that there is mismatch between the granularity of the features users want and how the HMO display their services (which we use as features) on their site. Also, to make our data engineering easier we removed different features that were not straightforward and easy to encode, this would have led to the removal of some features that the users might need in making decisions. We also could not get the necessary data for some HMOs; therefore, we did not include them in the data used for the algorithm." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "A recommendation tool to help people find and select the best health insurance plan for them is useful in reducing the barrier of accessing health insurance. Users are empowered to easily find appropriate information on available plans, reduce cognitive overload in dealing with over 100 options available in the market and easily see what matches their financial capacity.\nRecommender systems are intended to assist users in making better decisions from a large content catalog. As a result, it focuses on the predictive algorithm's accuracy. The accuracy of a recommender system contributes to the user experience. Major benefits can be seen from utilization in how it enhances user experience, and easily analyze the market to discover user preferences and see what most people are interested in. This tool will indirectly contribute to improving the health insurance coverage in Nigeria and improve on the progress towards Universal Health Coverage for Nigerians. \nFigure Legends" }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We would like to acknowledge the efforts and assistance of Seyi Morountonu and Olukoya Fatosa in the conception and the execution of this project." }, { "figure_ref": [], "heading": "Availability of data and material", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "N/A", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Funding", "publication_ref": [], "table_ref": [], "text": "The authors did not receive support from any organization for the submitted work." }, { "figure_ref": [], "heading": "Evaluation of algorithms performance", "publication_ref": [], "table_ref": [], "text": "Evaluating the performance of a recommender system can be quite tricky since it doesn't follow the conventional regression and classification ML problems. In evaluating our algorithm's performance, we employed the domain knowledge of our product using our experienced team of medical and domain experts. We curated several user's choices from the entire team of Arteri, we input these choices individually to the recommender system of both algorithms and came up with their individual recommendations. These recommendations were then vetted by the medical team to see which algorithm seemed to be suggesting the right HMO for the specific user needs. It was then collectively concluded that the Cosine Similarity gave the best recommendation as most users already on insurance plans got recommended to their HMOs after inputting their needs. This is to say that the cosine algorithm isn't so far away from the truth in recommending HMOs that are best suited for the user based on their needs." }, { "figure_ref": [], "heading": "Deployment", "publication_ref": [], "table_ref": [], "text": "We then proceeded to the deployment stage of our recommender system. An Application Programming Interface (API) was built using the Fastapi framework on python. The API was then deployed to Heroku with the data running from GitHub and Postgres. " }, { "figure_ref": [], "heading": "ABBREVIATIONS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "DECLARATIONS Ethical Approval", "publication_ref": [], "table_ref": [], "text": "No Ethical approval was required for this study." }, { "figure_ref": [], "heading": "Consent for publication", "publication_ref": [], "table_ref": [], "text": "All Authors give their consent for the publication of this article." }, { "figure_ref": [], "heading": "Competing Interests", "publication_ref": [], "table_ref": [], "text": "AO, EN and TI are employees of Arteri Africa. The other authors have no competing interests to declare." }, { "figure_ref": [], "heading": "Authors Contributions", "publication_ref": [], "table_ref": [], "text": "All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by Ayomide Owoyemi, Emmanuel Nnaemeka, Ron Ikpe & Temitope Isedowo.\nThe first draft of the manuscript was written by Ayomide Owoyemi, and all authors commented and edited subsequent versions of the manuscript. All authors read and approved the final manuscript." }, { "figure_ref": [], "heading": "Ethical Statement", "publication_ref": [], "table_ref": [], "text": "The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved." } ]
The uptake of health insurance has been poor in Nigeria, a significant step to improving this includes improved awareness, access to information and tools to support decision making. Artificial intelligence (AI) based recommender systems have gained popularity in helping individuals find movies, books, music, and different types of products on the internet including diverse applications in healthcare. The content-based methodology (item-based approach) was employed in the recommender system. We applied both the K-Nearest Neighbor (KNN) and Cosine similarity algorithm. We chose the Cosine similarity as our chosen algorithm after several evaluations based of their outcomes in comparison with domain knowledge. The recommender system takes into consideration the choices entered by the user, filters the health management organization (HMO) data by location and chosen prices. It then recommends the top 3 HMOs with closest similarity in services offered. A recommendation tool to help people find and select the best health insurance plan for them is useful in reducing the barrier of accessing health insurance. Users are empowered to easily find appropriate information on available plans, reduce cognitive overload in dealing with over 100 options available in the market and easily see what matches their financial capacity.
RECOMMENDATION SYSTEM FOR HEALTH INSURANCE DECISION MAKING 1 Machine Learning Recommendation System for Health Insurance Decision Making In Nigeria
[ { "figure_caption": "2. 11Steps taken to build our recommendation system. of the project, the team came up with names of health insurance in Nigeria to make individual research on and collect those HMO's information on their individual plans and offer. About 148 health plans were highlighted and 14 features were extracted from the plans. A separate data on the Ratings on the HMOs from prospective clients was obtained via survey forms. This was used in the second phase of the recommender system. The included features are: Premium, geographical coverage, family Planning, Mental health, Dental care, Admission ward type, Telemedicine service, Cash back benefit, ANC delivery coverage, eye care cost limits, Gym membership, Annual Routine Medical Screening Data Cleaning", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 1 :Figure 2 :12Figure 1: The page where users enter their health plan preferences", "figure_data": "", "figure_id": "fig_1", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The results page of the recommendation algorithm", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 1 :Figure 2 :Figure 3 :123Figure 1: The page where users enter their health plan preferences", "figure_data": "", "figure_id": "fig_3", "figure_label": "123", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" } ]
Ayomide Owoyemi; Emmanuel Nnaemeka; Temitope O Benson; Ronald Ikpe; Blessing Nwachukwu; Temitope Isedowo
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "GDP per capita (current US$) -Nigeria | Data", "year": "" }, { "authors": "G O Alawode; D A Adewole", "journal": "BMC Public Health", "ref_id": "b1", "title": "Assessment of the design and implementation challenges of the National Health Insurance Scheme in Nigeria: a qualitative study among sub-national level actors, healthcare and insurance providers", "year": "2021-01-11" }, { "authors": "B S Aregbeshola; S M Khan", "journal": "International Journal of Health Policy and Management", "ref_id": "b2", "title": "Predictors of Enrolment in the National Health Insurance Scheme Among Women of Reproductive Age in Nigeria", "year": "2018-11-01" }, { "authors": "O G Adebola", "journal": "UNIVERSAL HEALTH COVERAGE IN NIGERIA AND ITS DETERMINANTS: THE CASE OF NATIONAL HEALTH INSURANCE SCHEME. Academic Review of Humanities and Social Sciences", "ref_id": "b3", "title": "", "year": "2020" }, { "authors": "A A Onasanya", "journal": "J Glob Health", "ref_id": "b4", "title": "Increasing health insurance enrolment in the informal economic sector", "year": "" }, { "authors": "F Ricci; L Rokach; B Shapira", "journal": "Springer US", "ref_id": "b5", "title": "Introduction to Recommender Systems Handbook", "year": "2011" }, { "authors": "M Qazi; G M Fung; K J Meissner; E R Fontes", "journal": "Association for Computing Machinery", "ref_id": "b6", "title": "An Insurance Recommendation System Using Bayesian Networks", "year": "2017" }, { "authors": "K Sennaar", "journal": "Emerj Artificial Intelligence Research", "ref_id": "b7", "title": "Artificial intelligence in Health Insurance -Current Applications and Trends", "year": "" }, { "authors": "G Adomavicius; A Tuzhilin", "journal": "IEEE Trans on Knowl and Data Eng", "ref_id": "b8", "title": "Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions", "year": "2005-06-01" }, { "authors": "J Davidson; B Liebald; J Liu; P Nandy; Van Vleet; T Gargi; U ", "journal": "Association for Computing Machinery", "ref_id": "b9", "title": "The YouTube video recommendation system", "year": "" }, { "authors": "D Das; L Sahoo; S Datta", "journal": "International Journal of Computer Applications", "ref_id": "b10", "title": "A Survey on Recommendation System", "year": "2017-02-15" }, { "authors": "Tnt Tran; A Felfernig; C Trattner; A Holzinger", "journal": "J Intell Inf Syst", "ref_id": "b11", "title": "Recommender systems in the healthcare domain: state-of-the-art and research issues", "year": "2021-08-01" }, { "authors": "M Qazi; K Tollas; T Kanchinadam; J Bockhorst; G Fung", "journal": "WIREs Data Mining and Knowledge Discovery", "ref_id": "b12", "title": "Designing and deploying insurance recommender systems using machine learning", "year": "2020" }, { "authors": "Y H Chen; George E ", "journal": "", "ref_id": "b13", "title": "A bayesian model for collaborative filtering", "year": "1999-01-01" }, { "authors": "Z Huang; H Chen; D Zeng", "journal": "ACM Trans Inf Syst", "ref_id": "b14", "title": "Applying associative retrieval techniques to alleviate the sparsity problem in collaborative filtering", "year": "2004-01-01" }, { "authors": "M J Pazzani; D Billsus", "journal": "Springer", "ref_id": "b15", "title": "Content-Based Recommendation Systems", "year": "2007" }, { "authors": "E Çano; M Morisio", "journal": "IDA", "ref_id": "b16", "title": "Hybrid Recommender Systems: A Systematic Literature Review", "year": "2017-11-15" }, { "authors": "E Obikeze; O Onwujekwe", "journal": "International Journal for Equity in Health", "ref_id": "b17", "title": "The roles of health maintenance organizations in the implementation of a social health insurance scheme in Enugu, Southeast Nigeria: a mixed-method investigation", "year": "2020-03-12" } ]
[]
2023-05-18
[ { "figure_ref": [ "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b5", "b6", "b47", "b10", "b2", "b1", "b9", "b23", "b24", "b35", "b45", "b14", "b6", "b10", "b5", "b47", "b2", "b0", "b5", "b6", "b47", "b6", "b9", "b23", "b49", "b1", "b1" ], "table_ref": [], "text": "Semantic 3D scene understanding has recently attracted increasing research interest due to its wide applications such as automatic driving, human-machine interaction, etc. Much progress has been made in semantic 3D scene understanding, with task-specific models continuously pushing the state-of-the-art in various downstream tasks including visual grounding [6,7,48], dense captioning [11], and question answering [3].\nWhile effective on their respective benchmarks, the task-specific representations obtained by existing approaches prevent them from generalizing well to other tasks. A common practice for extracting Firstly, all the tasks rely heavily on the object detector to locate object in the scene. Secondly, 3D vision-language tasks require an effective fusion module to understand the connection between point cloud and language. joint multimodal representation is to adopt the pre-training plus fine-tuning paradigm, whose effectiveness have been demonstrated by the remarkable success in 2D vision-language pre-training [2,10,24,25,36,46]. Existing works on 3D vision-language pre-training are still limited, which motivates us to introduce this paradigm into semantic 3D scene understanding in an appropriate way. However, 3D vision-language pre-training differs from pre-training in NLP and 2D vision-language tasks since point cloud data is introduced [15]. The task-agnostic objectives designed in previous works cannot be directly applied to 3D vision-language pre-training due to the gap of downstream tasks. In light of these consideration, it is essential to identify the shared nature across different tasks in semantic 3D scene understanding to further determine the appropriate pre-training model.\nFigure 1 provides an intuitive depiction of the relationships among three 3D vision-language tasks. Two key observations emerages from the comparision of these tasks. Firstly, all of these tasks rely heavily on the object detection when applying two-stage pipeline models, which is a common practice in semantic 3D scene understanding [7,11]. Secondly, 3D vision-language tasks require an effective fusion module to enable information interaction between point cloud and language for a deeper understanding of the relationships between objects in the scene, such as the matching stage in the visual grounding [6,48] and the classification of answers in the question answering [3].\nThese observations in semantic 3D scene understanding pose several challenges in designing an effective training paradigm for the pre-training model to obtain universal embeddings and achieve better transfer performance flexibly in downstream tasks. Firstly, highquality bounding boxes are required for object detection, which can be further fed into task-specific heads in downstream tasks. These boxes represent the model's ability to segment the scene at the object level, as demonstrated by works that use a detection-thenmatching pipeline [1,6,7,48]. Secondly, object detection requires the model to distinguish between different objects in the scene, especially when there are many objects similar to the target, which is common in real-life situations [7]. This means the model needs to be able to identify what makes objects distinct in the scene, which is a challenging task that has not yet been fully addressed. Thirdly, the fusion module suffers from the issue that the data come from different modalities are unaligned, as similar to the cross-modal problems in 2D vision language learning [10,24]. Point cloud features and word token embeddings exist in different spaces, making it challenging for the fusion module to model their interactions.\nTo this end, we propose 3DVLP: vision-language pre-training with object contrastive learning in semantic 3D scene understanding. 3DVLP is the first pre-training framework that effectively addresses the challenges mentioned above. (1) To obtain better object bounding boxes, we introduce Object-level IoU-guided Detection (OID) loss in our pre-training pipeline. Specifically, we leverage visual grounding as the proxy task, as it shares the same objective of localizing high-quality bounding boxes. Additionally, we incorporate Distance IoU (DIoU) loss [50] and label smoothing in the matching stage at the object level to achieve faster convergence and better performance. (2) We further introduce Object-level Self-Contrastive learning (OSC) task to distinguish the target object from others. The self-contrastive learning is performed at the object level, where boxes with an IoU higher than a specific threshold are considered positive samples, while others are regarded as negative ones. This self-contrastive loss is designed to bring positive samples closer to each other and far away from the negative ones.\n(3) To enable fully information intereaction between point cloud and language, we further design Object-level Cross-Contrastive alignment (OCC) task as a proxy task to align the unimodal representation across these two modalities. We use a similar IoU filter as in OSC to generate positive and negative samples, which are then fed as inputs to calculate the cross-contrastive loss. The crosscontrastive loss is introduced to pull the embedding of positive samples closer to the anchor feature of the target language description.\nOverall, 3DVLP effectively addresses the challenges in semantic 3D scene understanding by proposing these novel proxy tasks that enable effective point-cloud and language information interaction. By introducing OID, OCC, and OSC, our method can achieve stateof-the-art performance on multiple 3D vision-language multimodal tasks. The strong generalization capabilities and short training time for fine-tuning of 3DVLP makes it suitable for a wide range of applications and multiple tasks.\nThe contributions of this study are summarized as follows: (1) A 3D vision-language pre-training framework called 3DVLP has been proposed, achieving the unification of the tasks in semantic 3D scene understanding. (2) We introduce Object-level IoUguided Detection loss into the pre-training pipeline to obtain highquality bounding boxes for downstream tasks. We also present two proxy tasks at the object level, including the Object-level Cross-Contrastive alignment task and Object-level Self-Contrastive learning task, which facilitate cross-modal alignment and help the model distinguish objects more accurately, respectively. (3) We conduct extensive experiments and empirically demonstrate the effectiveness of our method in semantic 3D scene understanding." }, { "figure_ref": [], "heading": "RELATED WORK 2.1 Vision-language Pre-training", "publication_ref": [ "b18", "b33", "b4", "b12", "b20", "b27", "b33", "b23", "b9", "b22", "b21", "b23", "b41", "b23", "b46" ], "table_ref": [], "text": "Vision-language pre-training are proposed to improve the performance in downstream tasks and has been widely explored in recent approaches [10, 22-26, 34, 35]. It is a common practice to pre-train the model with large-scale image-text pair datasets, usually craweled from the web [19,34]. Borrowed from the insight in NLP tasks [5,13,21,28], various learning objectives are proposed for cross-modal pre-training, enabling the model to capture the relationship between data from different modalities. CLIP [34] aligns the unimodal image representation and language representation by contrastive loss and maximizes similarity of correct pairs. ALBEF [24] and Uniter [10] further apply image-text matching and masked language modeling tasks, enabling model to capture more complex interactions between image and text. Li et al. introduces captioning loss in BLIP [23] to address the issue of noisy image-text pairs, and further bootstraps representation learning from frozen pre-trained unimodal models in BLIP-2 [22].\nPre-training for 3D vision language tasks also suffers from misaligned data across different modalities, leading to difficulties in training the fusion layer [24,42]. Motivated by the common practice in 2D vision language tasks [24,47], we introduce contrastive alignment task into 3D vision-language learning and enhance the performance of the pre-training model." }, { "figure_ref": [], "heading": "3D Visual-Langauge Tasks", "publication_ref": [ "b5", "b6", "b47", "b10", "b2", "b6", "b0", "b38", "b47", "b8", "b17", "b50", "b29", "b10", "b40", "b2", "b48", "b5", "b7" ], "table_ref": [], "text": "Recently, semantic 3D scene understanding has raised great interest and has been widely explored in recent approaches across various tasks, including 3D visual grounding [6,7,48], 3D dense captioning [11], and 3D question answering [3].\n3D visual grounding aims to locate a region of interest in a scene based on a referring description. Chen et al. [7] introduces the ScanRefer dataset and proposes an end-to-end visual grounding framework. Achlioptas et al. [1] Nr3D and Sr3D with high-quality referential utterances. Most existing methods rely on a detection-then-match pipeline to tackle the grounding task and aim to develop model's ability to capture the connections between proposal and language description, which is usually implemented by a cross-attention module [39]. For instance, 3DVG-Transformer [48] introduces coordinate-guided contextual aggregation module to enhance proposal generation and cross-modal proposal disambiguation. HAM [9] shifts attention to contextual information and develops both local and global attention module for better end-to-end grounding, while BUTD-DETR [18] presents a DETR-like [51] referential grounding model that incorporates guidance from language, points, and objects. 3D-SPS [30], however, propose the first one-stage end-to-end framework via keypoints selection and mines the cross-modal relationship based on points. Dense captioning in 3D scene requires model to derive highquality object bounding box and the corresponding descriptions from point cloud data. Scan2Cap [11] extends the dense captioning task to 3D scenes based on the ScanRefer dataset and establishes a messege-passing network to model the connections between objects. SpaCap3D [41] investigates the relative spatiality of objects and build a spatiality-guided transformer to generate captions. Importantly, it designs a object-centric decoder by using a vision token as information carrier of the target object.\n3D visual question answering is another vision-language task in which model are expected to generate a correct answer provided with the point cloud and a question. ScanQA [3] collects 41k question-answer pairs and brings the question-answering task into 3D scenes. Besides, it propose a 3D-QA baseline model by casting the answer generation task as a classification problem. FE-3DGQA [49] proposes anthoer datasets and predicts the answer through a token encoding and fusion module based on attention. Some previous works have made efforts to capture the connection among the tasks above and dig out the basic relationship between object proposals and language expressions. 3DJCG [6] and D3Net [8] model the joint training of 3D dense captioning and 3D visual grounding, thereby boosting the performance of model in both tasks. However, to the best of our knowledge, no framework has leveraged the 3D vision-language pre-training model to improve the performance of downstream tasks. Motivated by the shared nature across different tasks in semantic 3D scene understanding, we summarize the characteristics of a pre-training model and design corresponding proxy tasks to achieve these objectives." }, { "figure_ref": [], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 2, 3DVLP first encodes point cloud and language data and further applies a cross-attention module to obtain fusion feature for downstream tasks. The training of 3DVLP can be mainly divided into the pre-training stage and the fine-tuning stage. In the pre-training stage, 3DVLP utilizes visual grounding as the proxy task and employs Object-level IoU-guided Detection (OID) loss for high-quality object detection. Additionally, 3DVLP is pre-trained on other designed proxy tasks, including Objectlevel Cross-Contrastive alignment (OCC) and Object-level Self-Contrastive learning (OSC). In the finetuning stage, we transfer the backbone of 3DVLP to downstream tasks with task-specific heads." }, { "figure_ref": [ "fig_4", "fig_2" ], "heading": "Object-level IoU-guided Detection Loss", "publication_ref": [ "b49", "b5", "b47", "b30", "b1", "b2" ], "table_ref": [], "text": "We consider visual grounding as the proxy task since it shares the same objective with the pre-training model of obtaining highquality proposals. Additionally, we propose Object-level IoU-guided Detection loss to enhance the performance of the object detector, as demonstrated in Fig. 4a.\nSpecifically, we introduce the Distance IoU (DIoU) loss [50] into the visual grounding pipeline for bounding box regression. Given the predicted proposal b 𝑝 and ground truth b 𝑔𝑡 , we calulate the IoU between them and have the following regression loss:\nL 𝐷𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ) = 1 -𝐼𝑜𝑈 + 𝜌 2 b 𝑝 , b 𝑔𝑡 𝑐 2 , (1\n)\nwhere 𝑐 is the diagonal length of the smallest enclosing box covering the two boxes. However, previous approaches [6,48] treats the matching stage in visual grounding task as a classification problem and use the proposal with the highest IoU as a supervised label to train the fusion module. In this case, the DIoU loss can only be applied to a single proposal, which weakens its efforts in optimization.\nAdditionally, due to the large number of proposals generated by the detector, there can be multiple boxes pointing to the target object, and these boxes may share similar semantic information, making it difficult to achieve accurate matching with a one-hot label. Label smoothing is a regularization technique that prevents the model from overconfident prediction [31] and is suitable for addressing such matching problems. Specifically, we apply label smoothing by incorporating an IoU filter into training, as shown in Fig. 3. Given a pre-defined IoU threshold 𝛿 and the weight factor 𝜀, positive proposals are filtered according to their IoU with the ground truth, and weights are assigned to them based on their total count, denoted by 𝐾. The weight of proposal 𝑝 in the soft label is shown in Equ. (2).\n𝑦 𝑝 =          1 -𝜀 if 𝐼𝑜𝑈 𝑝 = 𝐼𝑜𝑈 𝑚𝑎𝑥 𝜀 𝐾 if 𝐼𝑜𝑈 𝑝 ≥ 𝛿 and 𝐼𝑜𝑈 𝑝 ≠ 𝐼𝑜𝑈 𝑚𝑎𝑥 0 otherwise(2)\nWe further combine DIoU loss and label smoothing to obtain our OID loss, as demonstrated in Equ. (3).\nL 𝑂𝐼 𝐷 = ∑︁ 𝑝 𝑦 𝑝 • L 𝐷𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ). (3\n)" }, { "figure_ref": [ "fig_4" ], "heading": "Object-level Cross-contrastive Alignment", "publication_ref": [ "b5", "b47" ], "table_ref": [], "text": "As a common practice [6,48], a cross-modal attention module is applied in semantic 3D scene understanding for feature fusion between language and point cloud embedding. However, it is observed that the data distribution across different modalities is not wellaligned, resulting in insufficient interaction between the embedding of proposals and the language feature. To address this issue, contrastive learning can provide insights for embedding alignment across different distributions. However, naive implementation over proposals is not effective, as semantically similar information from the boxes pointing at the target object conflicts with the optimization target of contrastive loss. This can ultimately lead to a deterioration in performance or even failure to converge.\nBased on these observations, we reconsider contrastive learning at the object level and introduce the Object-level Cross-Contrastive alignment (OCC) task to enhance the performance of the cross fusion module, as shown in Fig. 4b. The OCC task is proposed to align the distribution of cross-modal data. Specifically, in the training stage, we introduce the target detection boxes of real objects and select all the predicted boxes with IoU greater than a pre-defined threshold as positive samples since they semantically point to the target object and should have similar features. The remaining predicted boxes are considered negative samples, representing the proposals of other objects or background. We then align the features of positive samples with the language embedding and push the features of negative samples away with the contrastive loss to achieve better cross-modal understanding.\nFormally, we have the following contrastive loss, which serves as the loss function for our OCC task. Note that the threshold 𝛿 determines how close positive samples should be to align with the language embedding. Specifically, when 𝛿 = 𝐼𝑜𝑈 𝑚𝑎𝑥 , Equ. (4) only considers the proposal with the highest IoU to be the positive sample and reverts to the original formula of traditional pairwise contrastive loss.\nL OCC = - 1 2 E (" }, { "figure_ref": [ "fig_4" ], "heading": "Object-level Self-contrastive Learning", "publication_ref": [ "b4", "b40", "b12", "b2" ], "table_ref": [], "text": "In semantic 3D scene understanding, the presence of similar objects in the scene can significantly affect the model's matching performance. Therefore, a well-designed pre-training model should be capable of accurately distinguishing between objects in the scene and understanding what makes them similar or different. Achieving this is a fundamental task that challenges the model's overall understanding of the scene. To address this issue, one effective approach is to utilize contrastive loss that incentivizes the model to capture features that differentiate objects. This can lead to an improved matching performance and enhance the model's ability to identify the target object based on the given description. Similarly, we require an object-level self-contrastive loss instead of the pairwise loss to effectively differentiate between objects and improve the model's semantic understanding of the scene. Therefore, we introduce the Object-level Self-Contrastive learning (OSC) task for object detection, as shown in Fig. 4c. The OSC task is proposed for unimodal point cloud data and aims to optimize the embedding generated by the point cloud encoder. Based on the idea in OCC task, we utilize the IoU threshold to select positive samples and negative ones for self contrastive learning. By optimizing the self-contrastive loss, 3DVLP encourages the features of the boxes targeting the ground truth object to be as dissimilar as possible from those of other boxes, thereby enabling the fusion module to distinguish different objects easily.\nFollowing Equ. (4), we replace the language embedding with the embedding of proposals to obtain the corresponding contrastive loss for OSC module, as shown in Equ. (5).\nL OSC = -E b 𝑔𝑡 ∼𝐷 log 𝑝,p ∈𝑃 𝑝𝑜𝑠 exp(𝑠 (𝐻 𝑝 , 𝐻 p )) 𝑝,p ∈𝑃 𝑝𝑜𝑠 ∪𝑃 𝑛𝑒𝑔 exp(𝑠 (𝐻 𝑝 , 𝐻 p )) .(5)\n3.4 Heads for Downstream Tasks 3.4.1 3D Visual Grounding. 3D visual grounding task involves matching a language description to the corresponding detection box in a given point cloud data of the scene. As a common practice, we model this matching task as a classification problem by directly using the proposal features obtained from the cross-modal attention module, transforming it into a 𝑛-class classification task, where 𝑛 represents the total number of predicted boxes. The classification label serves as the supervision information to optimize the MLP matching module using cross-entropy loss:\nL 𝑉 𝐺 = - 1 |𝑃 𝑚 | ∑︁ 𝑝 𝑚 ∈𝑃 𝑚 𝑦 𝑚 • 𝑙𝑜𝑔(𝑝 𝑚 ),(6)\nwhere |𝑃 𝑚 | denotes the total number of the proposals, 𝑝 𝑚 represents the matching score calculated for each proposal, and 𝑦 𝑚 represents the corresponding weight in the classification label.\n3.4.2 3D Dense Captioning. 3D dense captioning task involves generating corresponding descriptions for all objects in a given scene. To implement the captioning module, we follow the design in SpaCap3D [41] and insert a special visual token with proposal embedding into the initial position of the sequence, which interacts with the word tokens in the attention module. We can then divide this task into training and inference stages.\nIn the training stage, as we already have specific information about the ground truth, we associate each real object with the nearest proposal and then use the corresponding embedding to perform captioning. We use the natural description as the supervised label to optimize the captioning module through cross-entropy loss:\nL 𝐶𝐴𝑃 = - 1 |𝑃 𝑐𝑎𝑝 | ∑︁ 𝑝 𝑐𝑎𝑝 ∈𝑃 𝑐𝑎𝑝 𝑦 𝑐𝑎𝑝 • 𝑙𝑜𝑔(𝑝 𝑐𝑎𝑝 ),(7)\nwhere 𝑃 𝑐𝑎𝑝 represents the score vector of each word in the sequence, while 𝑝 𝑐𝑎𝑝 and 𝑦 𝑐𝑎𝑝 denote the prediction vector and the ground truth label of a single word,respectively. Note that we also utilizes masked language modeling [13] in dense captioning.\nIn the inference stage, we need to perform captioning on all the objects in the scene. Therefore, all the proposals obtained from the point cloud encoder are fed into the Non-Maximum Suppression filter and then into the captioning module as queries.\n3.4.3 3D Question Answering. 3D question answering task involves providing answers to questions about objects given the scene data. Following ScanQA [3], we simplify this task into a multi-class classification task for all possible answers. We count and deduplicate all answers, and consider each remaining answer as an output class in the classification task.\nSpecifically, a lightweight MLP is adopted to predict the score for each answer based on the fusion feature, and the answer with the highest score is selected as the final answer. Cross-entropy loss is used as the loss function to optimize the answering module:\nL 𝑄𝐴 = - 1 |𝑃 𝑞𝑎 | ∑︁ 𝑝 𝑞𝑎 ∈𝑃 𝑞𝑎 𝑦 𝑞𝑎 • 𝑙𝑜𝑔(𝑝 𝑞𝑎 ),(8)\nwhere 𝑝 𝑞𝑎 represents the answer score by the model, and 𝑦 𝑞𝑎 represents the ground truth label." }, { "figure_ref": [], "heading": "EXPERIMENT 4.1 Datasets and Implementation Details", "publication_ref": [ "b6", "b11", "b10", "b39", "b31", "b3", "b26", "b2", "b2" ], "table_ref": [], "text": "Visual Grounding Dataset: We select the benchmark dataset ScanRefer [7] for visual grounding task. It consists of 800 3D scenes from the ScanNet dataset [12], each annotated with bounding boxes around objects of interest and corresponding text descriptions. To evaluate our results, we employed two evaluation metrics: IoU@0.25 and IoU@0.5, which measure the percentage of times the proposals have an IoU greater than the threshold. Dense Captioning Dataset: We conduct experiments on Scan2Cap dataset [11] to evaluate the effectiveness of our method for the dense captioning task. Similar to Scan2Cap, we jointly measure the quaility of the generated model with captioning matrics including CiDEr [40], BlEU-4 [32], METEOR [4] and ROUGE [27], cited as C, B-4, M and R, respectively. We combine the metrics above with an IOU threshold and adopt the m@kIoU metric:\n𝑚@𝑘𝐼𝑜𝑈 = 1 𝑁 𝑁 ∑︁ 𝑖=1 𝑚 𝑖 • I(𝐼𝑜𝑈 ≥ 𝑘)(9)\nwhere m represents the captioning metric, k is the threshold of IoU and I stands for the indicator function.\nQuestion Answering Dataset: We perform a quantitative evaluation on the question answering tasks over the ScanQA dataset [3]. The ScanQA dataset consists of 41363 questions and 32337 unique answers from 800 scenes derived from the ScanNet scenes. Following the evaluation methodology in [3], EM@1 and EM@10 are used as the evaluation metric. EM@K is the percentage of predictions where the top K predicted answers exactly match any of the ground-truth answers." }, { "figure_ref": [], "heading": "Implementations Details", "publication_ref": [ "b5", "b36", "b32", "b12", "b28" ], "table_ref": [], "text": "We first train 3DVLP over the proposed proxy tasks including visual grounding, OCC and OSC in the pre-training stage. We then evaluate our methods on the dense captioning and question answering tasks by transferring the pre-trained model and finetuning it through tasks-specific loss. Similar to 3DJCG [6], we adopt FCOS [37] method to generate the initial object proposals and use 8 sentences per scene in a batch. We train 200 epochs over the grounding task in the pre-training stage. Importantly, we use VoteNet [33] as our point cloud encoder and a frozen BERT [13] as the language encoder to avoid over-fitting on short-length sentences in ScanRefer dataset. For captioning tasks, we use a Transformer decoder with 6 layers and 128 as the hidden size. For QA task, the hidden size of the classification layer is set to be 128 as well. We empirically set the batch size as 8 and adopt the AdamW optimizer [29] with the cosing learning rate decay strategy. The initial learning rate is set to be 0.002 for the detector and 5e-4 for other modules in the 3DVLP. Codes are implemented by Pytorch and run on a Nvidia 3090 GPU. " }, { "figure_ref": [], "heading": "Accuracy (%) 3DVLP_oid", "publication_ref": [], "table_ref": [], "text": "Overall Acc@0.5\nFigure 5: Comparison of the performance when using different threshold in the IoU filter. In addition, we compare a variant of 3DVLP with only OID loss, referred to as 3DVLP_oid." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b5", "b6", "b13", "b15", "b17", "b29", "b44", "b47", "b6", "b7", "b16", "b29", "b42", "b47", "b5", "b7", "b10", "b19", "b40", "b2", "b48", "b43" ], "table_ref": [], "text": "In 3D visual grounding task, we compare 3DVLP with the benchmark methods including \"3D\" models [6,7,14,16,18,30,45,48] and \"2D+3D\" models [7,8,17,30,43,48] . The \"3D\" models only utilizes raw attributes in point cloud input features, such as the coordinates, colors, and normal vectors of the original point cloud, while \"2D+3D\" models use 2D multi-view features as additional inputs.\nIn 3D dense captioning task, we choose end-to-end models of this task as the baseline algorithms for comparison. [6,8,11,20,41]. In 3D question answering task, we compare 3DVLP with ScanQA [3], FE-3DGQA [49] and 2D models with MCAN [44]." }, { "figure_ref": [], "heading": "Comparison with State-of-the-art Methods", "publication_ref": [], "table_ref": [], "text": "4.4.1 3D visual grounding task. We present the results of 3D visual grounding in Table 1. The results indicate that 3DVLP performs remarkably well and outperforms the baselines by a large margin.\nIn terms of unique scenes, 3DVLP achieves the highest accuracy in Acc@0.5 and ranks second in Acc@0.25, indicating the significant impact of our OID loss in developing the model's ability to identify high-quality bounding boxes. Previous work solely optimizes the center and size of the proposals, while the introduction of the OID loss improves the quality of proposals targeting the ground truth object. Furthermore, when comparing multiple and unique metrics, previous works suffers from issues related to the presence of similar objects in the scene, leading to poor matching results. However, the introduction of OSC and OCC tasks in 3DVLP enables it to achieve competitive performance in multiple metrics, showcasing its ability to accurately locate objects in complex scenes. In the overall metric, 3DVLP's performance surpasses the baseline by 0.71% in Acc@0.5 and also ranks second in Acc@0.25, demonstrating its effectiveness in 3D visual grounding. 2, it is evident that 3DVLP shows excellent transfer performance in dense captioning task. Importantly, the point cloud encoder in 3DVLP extracts universal features that generalize well in dense captioning, enabling 3DVLP to outperform other baselines by a large extent. Specifically, 3DVLP achieves a remarkable improvement of 2.55%, 4.93%, 2.30%, and 2.61% in terms of C@0.25, C@0.5, R@0.25, and Table 1: Comparison of different methods in 3D visual grounding task. measure the percentage of the correctly predicted bounding boxes whose IoU with the ground-truth boxes are larger than 0.25 and 0.5, respectively." }, { "figure_ref": [], "heading": "3D dense captioning task. As presented in Table", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Venue Data Unique Multiple Overall Acc@0.25 Acc@0.5 Acc@0.25 Acc@0.5 Acc@0.25 Acc@0. Table 2: Comparison of different methods in 3D dense captioning task. We report the result with the percentage of the predicted bounding boxes whose IoU with the ground truth are greater than 0.25 and 0.5." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Venue C@0.25 B-4@0.25 M@0.25 R@0.25 C@0.5 B-4@0.5 M@0.5 R@0. The results are presented with the percentage of predictions where the top K predicted answers exactly match any of the ground-truth answers. We also report Acc@0.25 and Acc@0.5 metrics, similar to the visual grounding metrics.\nMethod EM@1 EM@10 Acc@0.25 Acc@0. Table 4: Ablation analysis. We provide quantitative results of the overall accuracy in visual grounding and the metric under IoU=0.5 setting in dense captioning." }, { "figure_ref": [], "heading": "Module Visual Gounding", "publication_ref": [], "table_ref": [], "text": "Dense Captioning OID OCC OSC Acc@0.25 Acc@0.5 C@0.5 B-4@0.5 M@0.5 R@0.5 R@0.5, respectively. Moreover, the results show that 3DVLP outperforms the second baseline by 8.61% in M@0.25 and 9.99% in M@0.5. Among various evaluation metrics, METEOR focuses on capturing the semantic similarity and fluency between the output and the ground truth, thereby indicating the generalization ability of the encoder in 3DVLP. In comparison to SpaCap3D, which shares the same decoder architecture as 3DVLP, we observe a significant performance boost resulting from the pre-training backbone, thus demonstrating the effectiveness of the proxy tasks designed in the pre-training stage." }, { "figure_ref": [], "heading": "3D question answering task.", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "From the results in Table 3, the most striking observation emerging from the comparison is that 3DVLP consistently outperforms other methods and improves the performance in the question answering task. For example, 3DVLP achieves approximately 1.7%-2.4% improvement in EM@1 and EM@10 compared to the baseline. Moreover, it can be concluded that question answering benefits from the pre-training model when compared to ScanQA, as 3DVLP utilizes the same classification head. Furthermore, 3DVLP provides a boost by 6.76% and 7.23% in Acc@0.25 and Acc@0.5, respectively. However, it is noteworthy that the results are lower than those achieved in visual grounding, primarily due to the inclusion of the task-specific loss in the question answering task. " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Does the OID loss and the designed proxy tasks benefit downstream tasks? We conducted a series of ablation experiments to investigate the contribution of each module in 3DVLP. The results in Table 4 demonstrate that both visual grounding and dense captioning tasks benefit from each proposed module. In visual grounding, the OID loss significantly improves the quality of the predicted bounding boxes, thereby enhancing Acc@0.5 to a large degree. Furthermore, neither the introduction of OSC nor OCC provides a remarkable boost in Acc@0.25, indicating the superiority of modeling optimization at the object level in complex scenes. In dense captioning, the improvement of the model is consistent with that in visual grounding by combining the modules together.\nIs the improvement in OSC and OCC sensitive to the threshold used the IoU filter? To have a better understanding of the threshold 𝛿 used in the IoU filter, we estimate the results of the overall Acc in visual grounding with the varying 𝛿. Moreover, we also include 3DVLP with only OID loss as a base variant, referred as 3DVLP_oid. As shown in Fig. 5, the performance obviously improves when increasing the threshold from 0.1 to 0.25. This is because proposals targeting other objects can be incorrectly considered as positive samples and thus mislead the training optimization when using a low threshold. However, we further increase the threshold and observe that the improvement is not consistent. The performance drops with a large threshold since model will regard proposals that are not good enough as negative samples, resulting in semantic divergence. This is similar to what happens with the traditional pairwise contrastive loss. Therefore, based on our results, we believe that selecting a threshold of 0.25 in the IoU filter is a reasonable tradeoff." }, { "figure_ref": [ "fig_6", "fig_6" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "To further explore how 3DVLP improves the performance in visual grounding, we provide the comparison results with 3D-SPS as shown in Figure 6. Figure 6(d) indicates that OID loss contribute to more high-quaility bounding boxes, thereby boosting the performance. Additionally, these examples demonstrate that 3DVLP has a better understanding of the relationship between scene and language as a result of incorporating OSC and OCC, leading to more reliable visual grounding results." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "This paper investigates the shared nature across different tasks in semantic 3D scene understanding and proposes a contrastive 3D vision-language pre-training framework named 3DVLP, which transfers flexibly in the downstream tasks. 3DVLP introduces the object-level IoU-guided detection loss to obtain high-quaility proposals, aligns the point cloud representation and language representation by training over object-level cross-contrastive alignment task and develops its ability to distinguish different objects in the scene through object-level self-contrastive learning task, which defines a new paradigm for the 3D vision-language pre-training model. Comprehensive experiments reveal the generalization ability and superiority of 3DVLP over all downstream tasks in semantic 3D scene understanding, leading to a new state-of-the-art performance. Future work needs to focus on dealing with the fusion of point cloud and language, desirably about the full interaction of multi-level information." }, { "figure_ref": [], "heading": "B DATASET DETAILS", "publication_ref": [ "b6", "b10", "b2" ], "table_ref": [], "text": "To benchmark the performance in the downstream tasks, we select different datasets in the experiments and describe their detailed information below. ScanRefer [7]. ScanRefer is a large-scale benchmark dataset designed for 3D object localization and referred object segmentation in real-world scenes. The dataset consists of textual descriptions of objects present in the scene and their corresponding 3D bounding boxes. The main objective of the dataset is to enhance the performance of 3D object detection and recognition in real-world scenarios by providing a benchmark for models that can understand natural language descriptions of objects and their spatial relationships. The dataset comprises a total of 51,583 descriptions of 11,046 objects, which have been divided into train/val/test sets with 36,655, 9,508, and 2,068 samples, respectively. Additionally, ScanRefer categorizes the data into two subsets: \"unique\" and \"multiple.\" The \"unique\" subset contains grounding data with only a single object of its class in the scene, while the \"multiple\" subset contains data with more than one object of a particular class in the scene.\nScan2Cap [11]. Scan2Cap is a dataset designed for generating natural language descriptions of indoor scenes from 3D point cloud data. The primary objective of this dataset is to provide a benchmark for models that can generate natural language descriptions of indoor scenes using 3D point cloud data. The dataset is highly useful for evaluating the effectiveness of different techniques for combining computer vision and natural language processing to generate coherent and accurate descriptions of indoor scenes. To simplify the problem, Scan2Cap truncates descriptions longer than 30 tokens in ScanRefer and adds two special tokens, namely SOS and EOS, to indicate the start and end of the description. Additionally, Scan2Cap follows the same data division as ScanRefer, dividing the 36,665 and 9,508 samples into train and validation sets, respectively.\nScanQA [3]. The ScanQA dataset is a benchmark dataset designed for visual question answering (VQA) in 3D scenes. Based on the ScanNet dataset, it provides high-quality 3D scanning data of indoor scenes with corresponding questions and answers. The dataset covers a wide range of object categories, making it a challenging benchmark for VQA models. ScanQA contains a total of 41,363 questions and 58,191 answers, including 32,337 unique questions and 16,999 unique answers. It follows the same training, validation, and test set splits as in ScanRefer. " }, { "figure_ref": [], "heading": "C ABLATION ANALYSIS IN 3D QUESTION ANSWERING", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "We conducted ablation experiments in 3D question answering and report the results in Table 5. As shown in the results, the OCC and OSC modules provide a positive boost, while the OID module results in a slight drop in performance. We hypothesize that this is because adding alone the OID loss does not enable the model to handle the complex relationship in the scene according to the questions. " }, { "figure_ref": [], "heading": "D COMPARISON WITH TRAINING FROM SCRATCH", "publication_ref": [], "table_ref": [], "text": "We conduct extensive experiments and provide comparison results between 3DVLP and training from scratch in downstream tasks to evaluate the effectiveness of the pre-training stage. Since 3DVLP is fine-tuned in downstream tasks for 20 epochs, we used its variants that are trained from scratch for 20 epochs and trained from scratch until full convergence as baselines, denoted as scratch-20 and scratch-full, respectively. As shown in Table 6, the results demonstrate that the pre-training stage over the proxy tasks provides a significant boost in performance. When comparing with 3DVLP with scratch-20, we observe that 3DVLP shows superiority in all metrics with the same training time. The training in the pre-training stage enhances the performance by 0.5-6% in captioning metrics and 2-4% in QA metrics. When comparing with scratch-full, 3DVLP achieves better performance with fewer training times, further verifying the effectiveness Table 6: Comparison results between 3DVLP and its variants trained from scratch. Specifically, we compare 3DVLP trained from scratch for 20 epochs (denoted as \"scratch-20\") and 3DVLP trained from scratch until full convergence (denoted as \"scratch-full\")." }, { "figure_ref": [], "heading": "Dense Captioning", "publication_ref": [], "table_ref": [], "text": "Question Answering Method C@0.25 B-4@0.25 M@0.25 R@0.25 C@0.5 B-4@0.5 M@0.5 R@0. " }, { "figure_ref": [], "heading": "3DVLP:", "publication_ref": [], "table_ref": [], "text": "This is a brown chair. It is at the end of the table." }, { "figure_ref": [], "heading": "Ground Truth:", "publication_ref": [], "table_ref": [], "text": "This is a brown armchair. It is in a corner of the room." }, { "figure_ref": [], "heading": "3DVLP:", "publication_ref": [], "table_ref": [], "text": "This is a brown armchair. It is to the right of a table." }, { "figure_ref": [], "heading": "Ground Truth:", "publication_ref": [], "table_ref": [], "text": "This is a tv. The tv is suspended on the wall." }, { "figure_ref": [], "heading": "3DVLP:", "publication_ref": [], "table_ref": [], "text": "This is a black tv. It is on the wall. " }, { "figure_ref": [ "fig_9" ], "heading": "E MORE QUALITATIVE RESULTS", "publication_ref": [], "table_ref": [], "text": "We provide more qualitative results in dense captioning in Fig 8." }, { "figure_ref": [ "fig_8" ], "heading": "F T-SNE VISUALIZATION OF PROPOSAL FEATURES", "publication_ref": [ "b37" ], "table_ref": [], "text": "We present a t-SNE [38] visualization of proposal features in the scene, as shown in Fig. 7. We use a threshold near the real object center and filter out the proposals representing the background. Furthermore, we assign labels to the proposals with the nearest real object id. We compare the performance of 3DVLP with its variant that does not include OID, OSC, and OCC modules, namely 3DVLPbase. The visualization shows that the object detector in 3DVLP, with the three proposed modules, is better at distinguishing objects in the scene, which facilitates the optimization of downstream tasks." }, { "figure_ref": [], "heading": "Appendix A OVERVIEW", "publication_ref": [ "b37" ], "table_ref": [], "text": "In Section B, we provide more details of the datasets used in the downstream tasks. In Section C, we conduct the ablation study in 3D question answering and show the effect of each module in 3DVLP. In Section D, we compare 3DVLP with variant that train from scratch to verify the effectiveness and superiority of the pretraining stage. In Section F, we provide the t-SNE [38] visualization of proposal features in the scene from 3DVLP and variant without OID, OCC and OSC. In Section E, we show more qualitative results in 3D dense captioning task." } ]
In recent years, vision language pre-training frameworks have made significant progress in natural language processing and computer vision, achieving remarkable performance improvement on various downstream tasks. However, when extended to point cloud data, existing works mainly focus on building task-specific models, and fail to extract universal 3D vision-language embedding that generalize well. We carefully investigate three common tasks in semantic 3D scene understanding, and derive key insights into the development of a pre-training model. Motivated by these observations, we propose a vision-language pre-training framework 3DVLP (3D vision-language pre-training with object contrastive learning), which transfers flexibly on 3D vision-language downstream tasks. 3DVLP takes visual grounding as the proxy task and introduces Object-level IoU-guided Detection (OID) loss to obtain high-quality proposals in the scene. Moreover, we design Object-level Cross-Contrastive alignment (OCC) task and Object-level Self-Contrastive learning (OSC) task to align the objects with descriptions and distinguish different objects in the scene, respectively. Extensive experiments verify the excellent performance of 3DVLP on three 3D vision-language tasks, reflecting its superiority in semantic 3D scene understanding.
Vision-Language Pre-training with Object Contrastive Learning for 3D Scene Understanding
[ { "figure_caption": "Alarge brown chair. An armchair on the right of the other chair.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 1 :1Figure1: Relationship between 3D vision-language tasks. Firstly, all the tasks rely heavily on the object detector to locate object in the scene. Secondly, 3D vision-language tasks require an effective fusion module to understand the connection between point cloud and language.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of the IoU filter in 3DVLP. To apply label smoothing and contrastive loss at the object level, proposals with IoU higher than a threshold 𝛿 are considered positive samples while others are regarded as the negative ones.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "( 4 )4where 𝐻 𝑝 represents the embedding of proposal 𝑝, and 𝑇 denotes the language embedding. Given I as the indicator function, 𝐼𝑜𝑈 (•, •) as the IoU score between two boxes, and 𝛿 as the IoU threshold, we have 𝑃 𝑝𝑜𝑠 = {𝑝 |𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ) ≥ 𝛿 } as the set of proposals containing positive samples while 𝑃 𝑛𝑒𝑔 = {𝑝 |𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ) < 𝛿 } containing the negative ones. 𝑠 (•, •) represents the similarity score function for measuring the similarity between two types of features, such as by performing a dot product operation.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Illustration of Object-level IoU-guided Detection (OID) loss, Object-level Cross-contrastive alignment (OCC) and Object-level Self-Contrastive learning (OSC) pre-training tasks. All the modules utilize a IoU filter to select positive proposals.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Qualitative results of 3DVLP and 3D-SPS. We mark the ground truth in blue, 3D-SPS in red and 3DVLP in green.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: t-SNE visualization of proposal features in the scene. 3DVLP-base is the variant of 3DVLP that does not include OID, OSC and OCC modules.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Qualitative results in dense captioning.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "collects two datasets containing", "figure_data": "Encode Point Cloud EncoderFusion layer Cross AttentionVisual Grounding Head MLP𝐿 !$% + 𝐿 &'Pretraining tasksDownstream HeadsA large brown chair.Object-level Self-Contrastive alignment𝐿 !\"#Transformer Decoder𝐿 #()An armchair on the right of the other chair.Language EncoderObject-level Cross-Contrastive learning𝐿 !##MLP𝐿 *(Figure 2: Pipeline of 3DVLP in semantic 3D scene understanding. 3DVLP takes visual grounding as the proxy task and utilizesObject-level IoU-guided Detection (OID) loss to boost the performance of the object detector. We also introduce Object-levelCross-Contrastive alignment task and Object-level Self-Contrastive learning task in the pre-training stage, which facilitatecross-modal alignment and enable the model to distinguish objects more accurately, respectively.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of different methods in 3D question answering task.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation analysis in question answering. We report the percentage of exactly matched predictions.", "figure_data": "ModuleQuestion AnsweringOID OCC OSC EM@1EM@1023.2356.6622.5855.9423.8057.8824.7557.2424.0357.91", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "This is a brown chair. It is at the end of the table.", "figure_data": "5 EM@1EM@10scratch-2059.7138.8935.5559.5741.3826.7032.4648.0121.5153.99scratch-full64.1438.5935.6358.9448.8130.0833.2950.2822.1854.043DVLP66.6340.8536.1261.0354.4134.1034.3454.2824.0357.91Ground Truth:", "figure_id": "tab_8", "figure_label": "", "figure_type": "table" } ]
Taolin Zhang; Sunan He; Tao Dai; Bin Chen; Zhi Wang; Shu-Tao Xia
[ { "authors": "Panos Achlioptas; Ahmed Abdelreheem; Fei Xia; Mohamed Elhoseiny; Leonidas Guibas", "journal": "Springer", "ref_id": "b0", "title": "Referit3d: Neural listeners for fine-grained 3d object identification in real-world scenes", "year": "2020-08-23" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b1", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Daichi Azuma; Taiki Miyanishi; Shuhei Kurita; Motoaki Kawanabe", "journal": "", "ref_id": "b2", "title": "ScanQA: 3D question answering for spatial scene understanding", "year": "2022" }, { "authors": "Satanjeev Banerjee; Alon Lavie", "journal": "", "ref_id": "b3", "title": "METEOR: An automatic metric for MT evaluation with improved correlation with human judgments", "year": "2005" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "Advances in neural information processing systems", "ref_id": "b4", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Daigang Cai; Lichen Zhao; Jing Zhang; Lu Sheng; Dong Xu", "journal": "", "ref_id": "b5", "title": "3djcg: A unified framework for joint dense captioning and visual grounding on 3d point clouds", "year": "2022" }, { "authors": "Dave Zhenyu; Chen ; Angel X Chang; Matthias Nießner", "journal": "Springer", "ref_id": "b6", "title": "Scanrefer: 3d object localization in rgb-d scans using natural language", "year": "2020-08-23" }, { "authors": "Dave Zhenyu; Chen ; Qirui Wu; Matthias Nießner; Angel X Chang", "journal": "", "ref_id": "b7", "title": "D3Net: a speaker-listener architecture for semi-supervised dense captioning and visual grounding in RGB-D scans", "year": "2021" }, { "authors": "Jiaming Chen; Weixin Luo; Xiaolin Wei; Lin Ma; Wei Zhang", "journal": "", "ref_id": "b8", "title": "HAM: Hierarchical Attention Model with High Performance for 3D Visual Grounding", "year": "2022" }, { "authors": "Yen-Chun Chen; Linjie Li; Licheng Yu; Ahmed El Kholy; Faisal Ahmed; Zhe Gan; Yu Cheng; Jingjing Liu", "journal": "Springer", "ref_id": "b9", "title": "Uniter: Universal image-text representation learning", "year": "2020-08-23" }, { "authors": "Zhenyu Chen; Ali Gholami; Matthias Nießner; Angel X Chang", "journal": "", "ref_id": "b10", "title": "Scan2cap: Context-aware dense captioning in rgb-d scans", "year": "2021" }, { "authors": "Angela Dai; X Angel; Manolis Chang; Maciej Savva; Thomas Halber; Matthias Funkhouser; Nießner", "journal": "", "ref_id": "b11", "title": "Scannet: Richly-annotated 3d reconstructions of indoor scenes", "year": "2017" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b12", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Mingtao Feng; Zhen Li; Qi Li; Liang Zhang; Xiangdong Zhang; Guangming Zhu; Hui Zhang; Yaonan Wang; Ajmal Mian", "journal": "", "ref_id": "b13", "title": "Free-form description guided 3d visual graph network for object grounding in point cloud", "year": "2021" }, { "authors": "Yulan Guo; Hanyun Wang; Qingyong Hu; Hao Liu; Li Liu; Mohammed Bennamoun", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b14", "title": "Deep learning for 3d point clouds: A survey", "year": "2020" }, { "authors": "Pin-Hao Huang; Han-Hung Lee; Hwann-Tzong Chen; Tyng-Luh Liu", "journal": "", "ref_id": "b15", "title": "Text-guided graph neural networks for referring 3d instance segmentation", "year": "2021" }, { "authors": "Shijia Huang; Yilun Chen; Jiaya Jia; Liwei Wang", "journal": "", "ref_id": "b16", "title": "Multi-view transformer for 3d visual grounding", "year": "2022" }, { "authors": "Ayush Jain; Nikolaos Gkanatsios; Ishita Mediratta; Katerina Fragkiadaki", "journal": "Springer", "ref_id": "b17", "title": "Bottom up top down detection transformers for language grounding in images and point clouds", "year": "2022-10-23" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b18", "title": "Scaling up visual and visionlanguage representation learning with noisy text supervision", "year": "2021" }, { "authors": "Yang Jiao; Shaoxiang Chen; Zequn Jie; Jingjing Chen; Lin Ma; Yu-Gang Jiang", "journal": "Springer", "ref_id": "b19", "title": "More: Multi-order relation mining for dense captioning in 3d scenes", "year": "2022-10-23" }, { "authors": "Zhenzhong Lan; Mingda Chen; Sebastian Goodman; Kevin Gimpel; Piyush Sharma; Radu Soricut", "journal": "", "ref_id": "b20", "title": "Albert: A lite bert for self-supervised learning of language representations", "year": "2019" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b21", "title": "BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b22", "title": "Blip: Bootstrapping language-image pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Ramprasaath Selvaraju; Akhilesh Gotmare; Shafiq Joty; Caiming Xiong; Steven Chu; Hong Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b23", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "Liunian Harold; Li ; Mark Yatskar; Cho-Jui Da Yin; Kai-Wei Hsieh; Chang", "journal": "", "ref_id": "b24", "title": "Visualbert: A simple and performant baseline for vision and language", "year": "2019" }, { "authors": "Xiujun Li; Xi Yin; Chunyuan Li; Pengchuan Zhang; Xiaowei Hu; Lei Zhang; Lijuan Wang; Houdong Hu; Li Dong; Furu Wei", "journal": "Springer", "ref_id": "b25", "title": "Oscar: Object-semantics aligned pre-training for vision-language tasks", "year": "2020-08-23" }, { "authors": "Chin-Yew Lin", "journal": "Text summarization branches out", "ref_id": "b26", "title": "Rouge: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b27", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b28", "title": "Decoupled weight decay regularization", "year": "2017" }, { "authors": "Junyu Luo; Jiahui Fu; Xianghao Kong; Chen Gao; Haibing Ren; Hao Shen; Huaxia Xia; Si Liu", "journal": "", "ref_id": "b29", "title": "3d-sps: Single-stage 3d visual grounding via referred point progressive selection", "year": "2022" }, { "authors": "Rafael Müller; Simon Kornblith; Geoffrey E Hinton", "journal": "", "ref_id": "b30", "title": "When does label smoothing help? Advances in neural information processing systems", "year": "2019" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b31", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Or Charles R Qi; Kaiming Litany; Leonidas J He; Guibas", "journal": "", "ref_id": "b32", "title": "Deep hough voting for 3d object detection in point clouds", "year": "2019" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b33", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Weijie Su; Xizhou Zhu; Yue Cao; Bin Li; Lewei Lu; Furu Wei; Jifeng Dai", "journal": "", "ref_id": "b34", "title": "Vl-bert: Pre-training of generic visual-linguistic representations", "year": "2019" }, { "authors": "Hao Tan; Mohit Bansal", "journal": "", "ref_id": "b35", "title": "Lxmert: Learning cross-modality encoder representations from transformers", "year": "2019" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b36", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b37", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b38", "title": "Attention is all you need", "year": "2017" }, { "authors": "Ramakrishna Vedantam; Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b39", "title": "Cider: Consensus-based image description evaluation", "year": "2015" }, { "authors": "Heng Wang; Chaoyi Zhang; Jianhui Yu; Weidong Cai", "journal": "", "ref_id": "b40", "title": "Spatialityguided transformer for 3d dense captioning on point clouds", "year": "2022" }, { "authors": "Jinyu Yang; Jiali Duan; Son Tran; Yi Xu; Sampath Chanda; Liqun Chen; Belinda Zeng; Trishul Chilimbi; Junzhou Huang", "journal": "", "ref_id": "b41", "title": "Vision-language pre-training with triple contrastive learning", "year": "2022" }, { "authors": "Zhengyuan Yang; Songyang Zhang; Liwei Wang; Jiebo Luo", "journal": "", "ref_id": "b42", "title": "Sat: 2d semantics assisted training for 3d visual grounding", "year": "2021" }, { "authors": "Zhou Yu; Jun Yu; Yuhao Cui; Dacheng Tao; Qi Tian", "journal": "", "ref_id": "b43", "title": "Deep modular coattention networks for visual question answering", "year": "2019" }, { "authors": "Zhihao Yuan; Xu Yan; Yinghong Liao; Ruimao Zhang; Sheng Wang; Zhen Li; Shuguang Cui", "journal": "", "ref_id": "b44", "title": "Instancerefer: Cooperative holistic understanding for visual grounding on point clouds through instance multi-level contextual referring", "year": "2021" }, { "authors": "Xiaohua Zhai; Xiao Wang; Basil Mustafa; Andreas Steiner; Daniel Keysers; Alexander Kolesnikov; Lucas Beyer", "journal": "", "ref_id": "b45", "title": "Lit: Zero-shot transfer with locked-image text tuning", "year": "2022" }, { "authors": "Han Zhang; Jing Yu Koh; Jason Baldridge; Honglak Lee; Yinfei Yang", "journal": "", "ref_id": "b46", "title": "Cross-modal contrastive learning for text-to-image generation", "year": "2021" }, { "authors": "Lichen Zhao; Daigang Cai; Lu Sheng; Dong Xu", "journal": "", "ref_id": "b47", "title": "3DVG-Transformer: Relation modeling for visual grounding on point clouds", "year": "2021" }, { "authors": "Lichen Zhao; Daigang Cai; Jing Zhang; Lu Sheng; Dong Xu; Rui Zheng; Yinjie Zhao; Lipeng Wang; Xibo Fan", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b48", "title": "Towards Explainable 3D Grounded Visual Question Answering: A New Benchmark and Strong Baseline", "year": "2022" }, { "authors": "Zhaohui Zheng; Ping Wang; Wei Liu; Jinze Li; Rongguang Ye; Dongwei Ren", "journal": "", "ref_id": "b49", "title": "Distance-IoU loss: Faster and better learning for bounding box regression", "year": "2020" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b50", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 97.31, 362.6, 193.57, 21.76 ], "formula_id": "formula_0", "formula_text": "L 𝐷𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ) = 1 -𝐼𝑜𝑈 + 𝜌 2 b 𝑝 , b 𝑔𝑡 𝑐 2 , (1" }, { "formula_coordinates": [ 4, 290.87, 371, 3.17, 8.97 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 4, 88.14, 615.37, 205.91, 41.62 ], "formula_id": "formula_2", "formula_text": "𝑦 𝑝 =          1 -𝜀 if 𝐼𝑜𝑈 𝑝 = 𝐼𝑜𝑈 𝑚𝑎𝑥 𝜀 𝐾 if 𝐼𝑜𝑈 𝑝 ≥ 𝛿 and 𝐼𝑜𝑈 𝑝 ≠ 𝐼𝑜𝑈 𝑚𝑎𝑥 0 otherwise(2)" }, { "formula_coordinates": [ 4, 113.39, 689.11, 177.49, 19.7 ], "formula_id": "formula_3", "formula_text": "L 𝑂𝐼 𝐷 = ∑︁ 𝑝 𝑦 𝑝 • L 𝐷𝐼𝑜𝑈 (b 𝑝 , b 𝑔𝑡 ). (3" }, { "formula_coordinates": [ 4, 290.87, 692.07, 3.17, 8.97 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 326.21, 426.53, 57.53, 20.08 ], "formula_id": "formula_5", "formula_text": "L OCC = - 1 2 E (" }, { "formula_coordinates": [ 5, 64.98, 526.13, 229.07, 24.47 ], "formula_id": "formula_6", "formula_text": "L OSC = -E b 𝑔𝑡 ∼𝐷 log 𝑝,p ∈𝑃 𝑝𝑜𝑠 exp(𝑠 (𝐻 𝑝 , 𝐻 p )) 𝑝,p ∈𝑃 𝑝𝑜𝑠 ∪𝑃 𝑛𝑒𝑔 exp(𝑠 (𝐻 𝑝 , 𝐻 p )) .(5)" }, { "formula_coordinates": [ 5, 109.58, 685.7, 184.46, 24.18 ], "formula_id": "formula_7", "formula_text": "L 𝑉 𝐺 = - 1 |𝑃 𝑚 | ∑︁ 𝑝 𝑚 ∈𝑃 𝑚 𝑦 𝑚 • 𝑙𝑜𝑔(𝑝 𝑚 ),(6)" }, { "formula_coordinates": [ 5, 359.3, 448.94, 198.9, 24.18 ], "formula_id": "formula_8", "formula_text": "L 𝐶𝐴𝑃 = - 1 |𝑃 𝑐𝑎𝑝 | ∑︁ 𝑝 𝑐𝑎𝑝 ∈𝑃 𝑐𝑎𝑝 𝑦 𝑐𝑎𝑝 • 𝑙𝑜𝑔(𝑝 𝑐𝑎𝑝 ),(7)" }, { "formula_coordinates": [ 5, 369.71, 685.03, 188.49, 24.18 ], "formula_id": "formula_9", "formula_text": "L 𝑄𝐴 = - 1 |𝑃 𝑞𝑎 | ∑︁ 𝑝 𝑞𝑎 ∈𝑃 𝑞𝑎 𝑦 𝑞𝑎 • 𝑙𝑜𝑔(𝑝 𝑞𝑎 ),(8)" }, { "formula_coordinates": [ 6, 111.31, 329.01, 182.74, 24.77 ], "formula_id": "formula_10", "formula_text": "𝑚@𝑘𝐼𝑜𝑈 = 1 𝑁 𝑁 ∑︁ 𝑖=1 𝑚 𝑖 • I(𝐼𝑜𝑈 ≥ 𝑘)(9)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [], "table_ref": [], "text": "\"What I Cannot Create, I Do Not Understand.\"" }, { "figure_ref": [ "fig_1" ], "heading": "Richard Feynman", "publication_ref": [ "b22", "b29", "b38", "b21", "b21", "b61", "b0" ], "table_ref": [], "text": "This quote by Richard Feynman perfectly captures the essence of human learning techniques. In the context of machine learning especially the area of vision and language, it can be interpreted as the ability to generate images given text prompts is a strong indicator of understanding and matching between visual and textual information (Kwon et al., 2022a). Despite the success of various methods in the image-text matching task (Karpathy and Fei-Fei, 2015;Lee et al., 2018), there is still a need for more advanced models that can better capture the fine-grained details, spatial relationships, and compositionality. Meanwhile, diffusion models (Sohl-Dickstein et al., 2015a;Saharia et al., 2022b;Rombach et al., 2022a) have been shown to produce high-quality and diverse images from text descriptions. Therefore, in this paper, we investigate the idea of leveraging the power of pre-trained Diffusion Models, specifically the state-of-the-art text-to-image generative model-Stable Diffusion, for the discriminative image-text matching task, as shown in Figure 1. The success of Stable Diffusion in generative tasks suggests that it has a strong capability of understanding the relationship between Preprint. Under review. visual and textual information, and we aim to harness the understanding for image-text matching tasks.\nThe key advantages of using Stable Diffusion for text-image alignment are two folds: first, Stable Diffusion uses a pre-trained Variational Autoencoder (VAE) (Kingma and Welling, 2013) and crossattention layers in its architecture, which provides strong compressed representations and shed information about the alignment of the data from different modalities. Second, Stable Diffusion has the ability to understand spatial relations and fine-grained disentangled concepts, so as to generate images per text prompts' requests, while traditional vision and language models pre-trained on discriminative tasks such as CLIP (Radford et al., 2021) only allow to model image-text contextual alignment at coarse-grained contextual (global) level but ignores the compositional matching of disentangled concepts (i.e., finer-grained cross-modal alignment at region-word level) (Jiang et al., 2022).\nHowever, to efficiently adapt Stable Diffusion, a pre-trained text-to-image generation model, to the image-text matching task, two key challenges need to be addressed: (1) how to disentangle the degree of alignment between the image and text from the latent space of Stable Diffusion? In text-to-image generation, the model is trained to generate an image that is semantically consistent with a given text prompt. However, in image-text matching, the task is to determine the degree of alignment between a given image and text. Therefore, it is important to disentangle the degree of alignment between the image and text in the latent space of Stable Diffusion, to effectively use it for image-text matching;\n(2) how to efficiently adapt the model in the few-shot setting. Fine-tuning a text-to-image generation model like Stable Diffusion for image-text matching requires adapting the model from a generative task to a discriminative task, which can be challenging and require much data.\nTo address these challenges, we propose the Discriminative Stable Diffusion (DSD) method, which includes two key ideas: (1) identifying and leveraging attention scores from the cross-attention maps in Stable Diffusion as the matching score and (2) using attention-based prompt learning to fine-tune the attention matrices. DSD can outperform the CLIP-based methods by 2.7% on the Compositional Visual Genome and 6.0% on the RefCOCOg datasets in terms of accuracy under the few-shot setting. Our approach reveals the potential of diffusion models that can broaden their scope of use to discriminative tasks.\nOur contributions in this paper are threefold:\n• We do a pioneer study using latent text-to-image diffusion-based generative models which are initially proposed for generative tasks to address discriminative tasks such as image-text matching.\n• We propose a new method based on exploiting the use of cross-attention maps of Stable Diffusion across layers and attention-based prompt learning for solving the image-text matching task. 2\n• We demonstrate the effectiveness of our approach through experimental evaluation under the few-shot setting on both the Compositional Visual Genome (Jiang et al., 2022) and the RefCOCOg (Yu et al., 2016) datasets for image-text matching. We also extend our method to the visual question answering task, demonstrating its potency on the VQAv2 (Antol et al., 2015) dataset." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b52", "b8", "b13", "b12", "b7", "b26", "b18", "b59", "b62", "b38", "b38", "b54", "b47", "b47", "b35", "b20" ], "table_ref": [], "text": "Diffusion Probabilistic Models (DPMs) Diffusion probabilistic models (DPMs) have been widely used as generative models for images in recent years. These models, which include diffusion (Sohl-Dickstein et al., 2015b) and score-based generative models (Song and Ermon, 2019), have been shown to outperform generative adversarial networks (GANs) (Goodfellow et al., 2014) in many cases. In the past two years, significant progress has been made in the development of DPMs, with a focus on improving sampling techniques such as classifier-free guidance (Ho and Salimans, 2021). DPMs are typically implemented using convolutional U-Net architectures (Ronneberger et al., 2015a) which contain cross-attention layers. Hertz et al. (2022) finds that replacing attention maps in the cross-attention module of text-to-image generation diffusion models can edit image attributes. Just scaling the attention maps of the respective word can adjust the effect of a particular word in the prompt. Feng et al. (2022) demonstrates that one can retain the compositional semantics in the generated image by manipulating the cross-attention. Kumari et al. (2022) proposes to fine-tune the key and value mapping from text to latent features in the cross-attention layers of text-to-image diffusion model to compose multiple new concepts in the image. In the context of image-text matching, the attention scores between the text and image representations in the DPMs can reflect the degree of alignment between them.\nFew-shot Learning for Vision and language Tasks Vision and Language discriminative models pretrained on large-scale image-text pairs have demonstrated great potential in multimodal representation learning (Jia et al., 2021;Yao et al., 2021;Yuan et al., 2021;Radford et al., 2021). Among them, CLIP (Radford et al., 2021) benefits from 400M curated data and defines various prompt templates to carry out zero-shot image classification. Like CLIP, several different few-shot learners were proposed. GPT (Brown et al., 2020), as a strong few-shot learner, is capable of performing a new language task by learning from only a few training instances. Frozen (Tsimpoukelli et al., 2021) is developed based on GPT and made into a multimodal few-shot learner by expanding the soft prompting to include a collection of images and text. The concept of prompt learning (Schick and Schütze, 2020) has been widely explored in natural language processing (NLP) and computer vision. It allows pre-trained models to adapt to various downstream tasks with minimal data by introducing a small prompt layer (Schick and Schütze, 2020;Liu et al., 2021). In the context of image-text matching, prompt learning has been used to fine-tune pre-trained models for the task (He et al., 2022b). In our work, instead of adding learnable prompts over the inputs or between transformer layers (Jia et al., 2022), we introduce learnable prompts over the attention layers. In our paper, our primary research question is the adaptation of pre-trained generative diffusion models into discriminative models for specific tasks. This focus is driven by the challenges and opportunities presented by utilizing diffusion-based processes in a discriminative setting, specifically for the image-text matching task, which has distinct characteristics compared to the modeling approaches mentioned above." }, { "figure_ref": [], "heading": "Generative Models for Discriminative Tasks", "publication_ref": [ "b65", "b5", "b36", "b4", "b25", "b56", "b17", "b63" ], "table_ref": [], "text": "There has been a significant amount of research on using generative models for discriminative tasks in the past decades (Zimmermann et al., 2021;Croce et al., 2020). Ng and Jordan (2001) Clark and Jaini (2023) propose to use pre-trained diffusion models for zero-shot classification. Krojer et al. (2023) extends the methods to the image-text retrieval task. Wei et al. (2023) formulate diffusion models as masked autoencoders and achieves state-of-the-art classification accuracy on video tasks. Different from these works, we are the first to explore the use of pre-trained diffusion models for discriminative tasks, specifically the image-text matching task.\nAnother line of works use diffusion models as data source and then training a discriminative model on the synthetic data generated from it (He et al., 2022a;Jahanian et al., 2021;Zhang et al., 2021). Differs from these works, our approach emphasizes the direct adaptation of generative diffusion models, leveraging their pre-existing structures and knowledge without the need to generate synthetic data. " }, { "figure_ref": [], "heading": "Preliminaries on Diffusion Models", "publication_ref": [ "b37", "b39" ], "table_ref": [], "text": "In this section, we provide a brief overview of the concepts and techniques in denoising diffusion models that are necessary to understand our proposed method. Diffusion models are a class of generative models that are particularly effective at generating high-quality images (Sohl-Dickstein et al., 2015b;Nichol et al., 2021;Ramesh et al., 2022;Saharia et al., 2022a;Rombach et al., 2022b). They aim to model a distribution p θ (x 0 ) that approximates the data distribution q (x 0 ) and is easy to sample from. DPMs model a \"forward process\" in the space of x 0 from data to noise by adding noise to real data, and a reverse process that tries to reconstruct the original data from the noisy version. The forward process is described by the equation\nq(x t |x 0 ) = N (x t ; √ ᾱt x 0 , (1 -ᾱt )I),(1)\nwhere x 1:T defines a set of noisy images and x 0 is the initial image. N denotes a Gaussian distribution, and ᾱt are hyperparameters. The reverse process is modeled by a Gaussian distribution\np θ (x t-1 |x t ) = N (µ θ (x t ), Σ θ (x t )),(2)\nwhere neural networks are used to predict the mean and covariance of the distribution. The parameters of the model, θ, are learned by optimizing a variational lower bound on the log-likelihood of the real data. Once trained, new images can be generated by starting from a noise sample and iteratively sampling from the reverse process distribution until reaching the final time step. In latent diffusion probabilistic models such as Stable Diffusion, this two process are similar, while they proceeds in the latent space: x 0 is encoded into z 0 in an efficient, low-dimensional latent space first and then do the diffusion process. And in the case where a DPM is conditioned on additional information, such as text information c, the reverse process becomes p θ (z t-1 |z t , y), where y is the input text.\n4 Discriminative Latent Diffusion Models" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "The problem of image-text matching is formalized as follows: given a text prompt y ∈ Y and a set of images X , we aim to find the image x * ∈ X that is most semantically aligned with the given text prompt y.\nFormally, we define the image-text matching problem as finding the function f : Y × X → [0, 1] that assigns a score to each image-text pair (y, x) indicating the degree of semantic alignment between the text and image. The goal is to find the image x * that maximizes the score for a given text prompt y, i.e., x * = arg max x∈X f (y, x)." }, { "figure_ref": [ "fig_2" ], "heading": "Method Overview", "publication_ref": [], "table_ref": [], "text": "To learn the function f , the main idea is to leverage the powerful representations learned by a pre-trained Stable Diffusion model to perform image-text matching. There are three key modules in DSD, cross-attention score computation, LogSumExp pooling, and attention-based prompt learning, as shown in Figure 2. The cross-attention score computation module extracts the mutual influence between visual and textual information by computing the attention scores from cross-attention matrices in U-Nets of the Stable Diffusion model. The LogSumExp pooling module pools these attention scores all over tokens to obtain a single matching score. Finally, the attention-based prompt learning module fine-tunes the model by updating the key and value mappings from text to latent features in the cross-attention layers under a few-shot setting. This allows the model to learn new image-text concepts while retaining the ability to capture complex and nuanced relationships between images and text. The model outputs a score that measures the alignment between the image and text, which can be used to adapt the model from a text-to-image generation task to an image-text matching task." }, { "figure_ref": [], "heading": "Cross-attention Score Computation", "publication_ref": [ "b3", "b33", "b15", "b15", "b55" ], "table_ref": [], "text": "Cross-attention scores are a measure of the relevance of an image and a text to each other (Chen et al., 2020;Li et al., 2019). They are calculated by taking the dot product of the representations of the image and text in a latent space, and normalizing by the product of their norms. We propose to adapt cross-attention scores as a way to better capture the complex relationships between images and text in the image-text matching task. In the sequel, we elaborate on our strategy in depth.\nStable Diffusion (Jaegle et al., 2021) is trained to generate images from text prompts, and as such, it has learned strong compressed representations of both text and images. We can make use of these representations to learn the function f for image-text matching.\nMore specifically, given a text prompt y, we first encode it into a intermediate text representation r y = τ θ (y) ∈ R M ×dτ using the domain specific encoder τ θ . We then encode each image x ∈ X where x ∈ R H×W ×3 in RGB space into a latent image representation z = E(x), where E(x) is the encoder. The encoder ϵ θ in the U-Net (Ronneberger et al., 2015b) of the pre-trained textto-image generation model then encode z into r x = φ i (z t ), where φ i (z t ) ∈ R N ×d i ϵ denotes a (flattened) intermediate representation of the UNet implementing ϵ θ , which are then mapped to intermediate layers of the UNet via a cross-attention layer implementing (Jaegle et al., 2021;Vaswani et al., 2017), mapping the inputs to a query, key, and value feature, respectively, and d τ is the output dimension of key and query features.\nA = softmax QK T √ d • V , with Q = W q (i) • r x , K = W k (i) • r y , V = W v (i) • r y . Here, W v (i) ∈ R d×d i ϵ , W q (i) ∈ R d×dτ , W k (i) ∈ R d×dτ are learnable projection matrices" }, { "figure_ref": [], "heading": "LogSumExp Pooling (LSE)", "publication_ref": [ "b1", "b31" ], "table_ref": [], "text": "To compute the function g and quantitatively evaluate the degree of semantic alignment between an image and a text prompt, we leverage LogSumExp (LSE) pooling (Blanchard et al., 2021) as a means of aggregating the attention maps generated by the cross-attention mechanism in our model. By using LSE pooling, we are able to take into account the relative importance of different image and text tokens in the attention map, rather than simply averaging or summing all elements in the map. This has several benefits. Firstly, LSE pooling is able to handle large values and outliers in the attention map more robustly than other pooling methods, such as average or sum pooling. Secondly, LSE pooling has high numerical stability during training. Thirdly, LSE pooling is able to better preserve the ordering of values in the attention map, allowing for more interpretable and accurate matching scores.\nFor notation simplicity, we drop the batch and attention head dimension, the attention map matrix is denoted as A ∈ R n×m , where n and m are the number of image tokens (height × width) in the latent space and length of text tokens, respectively. The LSE pooling operator is defined as:\nS(A) = 1 λ log n i=1 exp (λA i,: )(3)\nWhere A i,: represents the i-th row of the matrix A. λ is a factor that determines how much to magnify the importance of the most relevant pairs of image region features and attended text sentence vectors, which by default we took the value of 1.\nThe score for the image-text pair (y, x) is then computed by averaging sampled across-attention maps, denoted by\nf (y, x) = Ave(S(A)) = g(A)(4)\nwhere g : R M ×d × R N ×d → [0, 1] is a scoring function that measures the degree of semantic alignment between the text and image representations.\nOverall, our method combines the strengths of the U-Net architecture and the attention mechanism of the Stable Diffusion model. We resort to attention-based prompt learning (Lester et al., 2021) to efficiently adapt the model to perform image-text matching." }, { "figure_ref": [ "fig_2" ], "heading": "Attention-based Prompt Learning for Stable Diffusion", "publication_ref": [ "b14" ], "table_ref": [], "text": "We aim to adapt the latent diffusion probabilistic model to the image-text matching task leveraging only a few examples, that is, under the few-shot setting. The task of fine-tuning aims at updating the mapping from the given text to the aligned image distribution, and the text features are only input to W k and W v projection matrix in the cross-attention block. Therefore, we propose the use of learnable prompts, which are added to the attention matrices in our model. Specifically, as shown in Figure 2, we introduce learnable prompt embedding matrices, which are added element-wise to the key and value attention matrices ruing training and inference. To improve the fine-tuning efficiency, we implement the attention-based prompt learning using LoRA (Hu et al., 2021). As our addition operation applies to all layers and sampled time-steps, we will omit superscripts t and layer l for notational clarity and obtains:\nW ′ = W + W p . (5\n)\nwhere 2023), we combine the score computed from the cross-attention map and the distance between the predicted noise and the groudtruth to obtain d, which we find can lead to better performance. r xpos is the groundtruth image representation for the n-th text y n , r xneg is the negative image representation, and m is a predefined margin." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b24", "b61", "b24", "b24" ], "table_ref": [], "text": "We use the Compositional Visual Genome (ComVG) (Krishna et al., 2017) and RefCOCOg (Yu et al., 2016) datasets to do image-text matching, which requires model's ability to understand fine-grained details and spatial relationships of image and text pairs, that are more challenging for traditional vison-language models. We also test on the VQAv2 dataset to see if our method can be extended to other vision and language tasks.\nCompositional Visual Genome (ComVG) (Krishna et al., 2017) is a reconstructed dataset of the Visual Genome (Krishna et al., 2017) for (i, t) in the batch do 11:\nImage latent representation z i ← E(x)\n12:\nText latent representation r y ← τ (y)\n13:\nIntermediate representation r x ← φ(z t )\n14:\nUpdate W ′ ← W ▷ Eq. 5\n15:\nCompute attention maps A ← r y , r x 16:\nCompute LSE score S(A) ← A ▷ Eq. 3 17:\nCompute matching score g(A) ← A ▷ Eq. 4\n18:\nCompute loss L ← y n" }, { "figure_ref": [], "heading": "19:", "publication_ref": [ "b61", "b34", "b9", "b9" ], "table_ref": [], "text": "Update W p 20:\nend for 21: end function RefCOCOg (Yu et al., 2016) is a reconstructed dataset of the MSCOCO (Lin et al., 2014) dataset. The dataset was created in a non-interactive setting, using Amazon Mechanical Turk workers. The process consisted of two stages: first, workers were asked to write referring expressions for objects in the images, and then, another set of workers were asked to indicate the referred object in the image by clicking on it. We randomly sample 10 text prompts from the candidates pool including one groundtruth, and the model is asked to do the correct matching given the image and the 10 sampled text prompts.\nVQAv2 (Goyal et al., 2017) The VQAv2 dataset Goyal et al. (2017) is commonly converted to a classification task with 3,129 answer classes with frequency large than 9. In our setting, we modify the candidate text to be the concatenation of question and answer pair for each question and perform matching with images. We test the model for both \"binary\" type questions and also \"other\" type questions. For binary questions where answers tend to be closely related (\"Yes/No\"), we rewrite the answers and conduct a matching process against a pool of ten images. In the case of open-ended questions, we match the image with ten different textual prompts to test the model's matching accuracy." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b30", "b6", "b48", "b60" ], "table_ref": [], "text": "We employed the Stable Diffusion V21 model in conjunction with xFormers (Lefaudeux et al., 2022) and FlashAttention (Dao et al., 2022) using the implementation available in HuggingFace Diffusers2 . The Stable Diffusion utilizes a subset of the LAION-5B (Schuhmann et al., 2022) dataset during pretraining, specifically 170 million examples, along with LAION-2B-en and LAION-aesthetics v2 datasets for pretraining. We use LoRA training for the implementation of attention-based prompt learning. We test our Discriminative Stable Diffusion under the few-shot setting (Yoo et al., 2021) where we use 5% data to train the model." }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [], "table_ref": [], "text": "In order to provide a comprehensive evaluation of our Discriminative Stable Diffusion method, we establish two baselines for comparison.\nTable 1: Comparison of accuracy (%) on Compositional Visual Genome (ComVG) and Top-1 and Top-5 accuracy (%) on RefCOCOg using CLIP and Discriminative Stable Diffusion (DSD) under the few-shot setting. Our method outperforms CLIP based baselines, demonstrating the superiority of our approach compared with traditional vision and language pre-trained models such as CLIP pre-trained for discriminative tasks." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b38" ], "table_ref": [], "text": "Compositional Visual Genome RefCOCOg Subjects Objects Predicate Average Top-1 Acc. Top-5 Acc. • Fine-tuning (Radford et al., 2021): The first baseline involves fine-tuning the CLIP model, with the last layer being the only component subject to tuning." }, { "figure_ref": [], "heading": "CLIP (Fine", "publication_ref": [ "b64" ], "table_ref": [], "text": "• Prompt learning: The second baseline is based on the prompt learning strategy applied to CLIP, incorporating learnable prompts to the textual inputs that are conditioned on individual input images, as described in Zhou et al. (2022)." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "To compare the performance of our method to other approaches, we conducted fine-tuning on CLIP and prompt learning on CLIP, in addition to our method. The results of these experiments are summarized in Table 1 on the Compositional Visual Genome dataset and RefCOCOg dataset. From the results, it is clear that our method outperforms both fine-tuning on CLIP and prompt learning on CLIP on both Compositional Visual Genome across the three different problem types and RefCOCOg.\nWe also show the results on the VQAv2 dataset in Table 2. These results demonstrate the extentiveness of our method to other vision and language tasks." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b53", "b57" ], "table_ref": [], "text": "In this section, we delve deeper into the nuances of our experimental findings in the zero-shot setting.\nEffect of Attention Maps from Different Sets of U-Net Layers We investigate the effect of using different numbers of layers in the U-Net of Stable Diffusion to compute the attention map. We use two variants of Stable Diffusion v2 and take the average of the attention maps from different layers of U-Net. Specifically, we consider the last one, the last two, the last eight, and all layers. Kwon et al. (2022b); Tang et al. (2022) indicate that the latter layers typically contain more semantically important information. The results, shown in Figure 3, indicate that using all layers in the U-Net gives the best performance in terms of accuracy, suggesting that the information from all layers can make use of both the high-level and low-level features when making predictions and preserve both coarse and fine image details for the image-text matching task. The later layers in the U-Net may contain similar task-specific information, which can be found from observing that using only the last two layers also provides a close performance with that of one. Pooling for score computation in Figure 4. As can be seen, using LogSumExp performs the best, followed by the maximum value, and finally the cosine similarity. This suggests that LogSumExp can effectively capture the overall importance of each element in the attention map, rather than just relying on the maximum value. Additionally, LogSumExp can smooth out the influence of individual noisy elements, resulting in more robust and accurate matching scores. As the dimensions of the image feature and text feature vectors in Stable Diffusion are not the same, we implement the cosine similarity by only comparing the shared dimensions of the image and text feature vectors. Overall, these results highlight the effectiveness of LogSumExp as a method for computing the matching score in image-text matching tasks.\nEnsembling over Noise Levels In diffusion models, the level of noise controls the variance of the Gaussian noise added during the diffusion process, which can affect the degree of change in the generated image. To further improve the performance of our method, we use an ensemble technique inspired by Wolleb et al. (2022) by averaging the scores over four different noise levels: {0.2, 0.4, 0.6, 0.8}. This is done by first obtaining the score under each noise level scenario and then averaging them. The results of this comparison are shown in Figure 5. Our experimental results demonstrate that this ensemble technique leads to a noticeable improvement in performance, and therefore we sample over different timesteps to obtain the final average score." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a method for matching text and images in the latent space of Stable Diffusion. By adapting generative models to discriminative tasks, we are exploring a domain with promising practical applications, which could be particularly beneficial in scenarios with data privacy concerns or when there is a lack of data in specific domains. For methods, we fine-tuned the U-Net part of the model by focusing on the cross-attention between text embeddings and image embeddings, which reflects the alignment between text and image. Our results show that this fine-tuning approach improves the alignment between text and image, leading to better performance in image-text matching tasks. Overall, our approach is pioneering work that leverages the inherent flexibility of diffusionbased visual generative models, opening new pathways for innovation where traditional methods may Figure 5: Ablation study on the amount of noise added during the diffusion process: using consistent noise levels of 0.4, 0.8 and using ensembling.\nfall short. Our results can motivate research on simpler alternatives to adapt Stable Diffusion models, as well as on future methods for better utilization of them." }, { "figure_ref": [], "heading": "A Limitations & Broader Impact", "publication_ref": [], "table_ref": [], "text": "The Discriminative Stable Diffusion (DSD) approach proposed in this paper is dependent on a pre-trained Stable Diffusion model, which may be challenging to obtain in certain scenarios where the model has yet to be publicly released or where the computational resources required for training are not available. Additionally, the quality of the pre-training can greatly impact the performance of DSD, highlighting the need for further research to investigate methods for improving the pre-training process. While our fine-tuning process is based on prompt learning, there are other techniques such as multi-task learning and meta-learning that can be possibly incorporated to improve the performance of DSD. Future research should explore the use of these techniques in combination with prompt learning to further improve the few-shot discriminative performance of DSD. It is worth noting that in real-world scenarios, there is often a limited amount of labeled data available, and collecting more data can be costly and time-consuming. Therefore, the ability of DSD to perform well under a few-shot setting is an important aspect of its potential utility in practical applications." }, { "figure_ref": [], "heading": "B Ethical Statement", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel framework for few-shot learning that adapts pre-trained stable diffusion models for discriminative tasks. The paper uses publicly available datasets for image and text matching and visual question answering, and does not involve any human or animal subjects. The paper also acknowledges the limitations and challenges of the proposed approach, such as the computational cost. The paper does not intend to cause any harm or bias to any individual or group, and respects the intellectual property and ethical standards of the research community. The paper also discusses some potential applications and implications of the proposed framework, such as paving a way to efficiently adapt pre-trained stable diffusion models for discriminative tasks. The paper hopes to inspire further research and innovation in few-shot learning and vision and language domains." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/eric-ai-lab/DSD." } ]
Diffusion models, such as Stable Diffusion (Rombach et al., 2022a), have shown incredible performance on text-to-image generation. Since text-to-image generation often requires models to generate visual concepts with fine-grained details and attributes specified in text prompts, can we leverage the powerful representations learned by pre-trained diffusion models for discriminative tasks such as image-text matching? To answer this question, we propose a novel approach, Discriminative Stable Diffusion (DSD), which turns pre-trained text-to-image diffusion models into few-shot discriminative learners. Our approach mainly uses the cross-attention score of a Stable Diffusion model to capture the mutual influence between visual and textual information and fine-tune the model via efficient attention-based prompt learning to perform image-text matching. By comparing DSD with state-of-the-art methods on several benchmark datasets, we demonstrate the potential of using pre-trained diffusion models for discriminative tasks with superior results on few-shot image-text matching. Codes can be found at
Discriminative Diffusion Models as Few-shot Vision and Language Learners
[ { "figure_caption": "Figure 1 :1Figure 1: The upper subfigure in the teaser image illustrates the ability of Stable Diffusion to generate realistic images given a text prompt. The bottom subfigure illustrates the process of our proposed method, Discriminative Stable Diffusion (DSD), for utilizing Stable Diffusion for the image-text matching task. DSD can output a matching score for a given text prompt and image, with a higher score indicating a stronger match.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Overview of our Discriminative Stable Diffusion framework, which measures how much the given images and texts matched use the cross-attention mechanism in the Stable Diffusion. Discriminative Stable Diffusion added learnable prompts over attention matrices (red boxes), which are fine-tuned under the few-shot setting.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "W is the original attention matrix, W p is decomposed into the products of two low-rank matrices and updated during training. This allows the model to adapt to new instances by attending to relevant information in the intermediate representation of the text inputs, τ θ (y). With the learned prompt embeddings in the few-shot scenario, we can effectively adapt the Stable Diffusion to improve the image-text matching performance. The overall algorithm is shown in Algorithm 1. For optimization, we use the margin-based triplet loss function between the predicted match score and the true match score. Let L be the loss, we have: L = E(max 0, d(r xpos , τ θ (y n )) -d(r xneg , τ θ (y n )) + m ), where d(•, •) denotes the distance based on the similarity score. Specifically, inspired by Li et al. (2023); Clark and Jaini (2023); Krojer et al. (", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :Figure 4 :34Figure 3: Ablation study on the number of attention maps used from layers of the U-Net (x-axis). The y-axis represents the accuracy on the ComVG dataset. Tests on two variants of Stable-Diffusion v2: trained as a standard noise-prediction model on 512x512 images and 768x768 images.", "figure_data": "", "figure_id": "fig_4", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "compare the discriminative classifier with generative classifier. Ranzato et al. (2011) apply deep generative models to the recognition task. For diffusion models, recently, Li et al. (2023);", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "dataset, which contains 108,007 images annotated with 2.3 million relationships. These relationships are represented as subject-predicate-object triplets and include both action and spatial relationships. ComVG was created by selecting a subset of 542 images from Visual Genome that contain clear relationships, and generating mutated images by changing a single value in the subject, predicate, or object of each image. These mutated images were then used as negative examples, while the original images were used as positive examples, resulting in a total of 5400 data points.", "figure_data": "Algorithm 1 Image-Text Matching with Discriminative Stable Diffusion1: I: Image space2: T: Text space3: x: Image4: y: Text5: z: Latent representation6: E: Encoder7: τ : Domain-specific encoder8: φ: Intermediate representation of the U-Net9: function DSD(I, T)10:", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of accuracy (%) on the sampled 'binary' and 'other' questions from the VQAv2 dataset under the few-shot setting. Our method outperforms CLIP, demonstrating the superiority of our approach compared with traditional vision and language pre-trained models pre-trained on other vision and language tasks.", "figure_data": "-tuning)80.7782.4960.5076.1069.8884.57CLIP (Prompt Learning)78.8879.5160.4174.2469.4084.48DSD79.8886.9063.2078.8175.8791.96MethodBinary OtherAllCLIP (Fine-tuning)66.94 32.41 59.06CLIP (Prompt Learning) 67.32 33.42 59.58DSD67.36 35.90 60.18", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Cosine Similarity vs. Maximum vs. LogSumExp Pooling for Score Computation We compare the overall accuracy of using Cosine Similarity, Maximum value from the attention map, and LogSumExp", "figure_data": "75.4Accuracy (%)74.8 75.0 75.2Stable Diffusion 2.1-v (768x768 resolution) Stable Diffusion 2.1-base (512x512 resolution)74.61 28 Numbers of Layers from U-Net16", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Xuehai He; Weixi Feng; Tsu-Jui Fu; Varun Jampani; Arjun Akula; Pradyumna Narayana; Sugato Basu; William Yang Wang; Eric Xin; Wang
[ { "authors": "Stanislaw Antol; Aishwarya Agrawal; Jiasen Lu; Margaret Mitchell; Dhruv Batra; C Lawrence Zitnick; Devi Parikh", "journal": "", "ref_id": "b0", "title": "Vqa: Visual question answering", "year": "2015" }, { "authors": "Pierre Blanchard; Desmond J Higham; Nicholas J Higham", "journal": "IMA Journal of Numerical Analysis", "ref_id": "b1", "title": "Accurately computing the log-sum-exp and softmax functions", "year": "2021" }, { "authors": "Benjamin Tom B Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Askell", "journal": "", "ref_id": "b2", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Yen-Chun Chen; Linjie Li; Licheng Yu; Ahmed El Kholy; Faisal Ahmed; Zhe Gan; Yu Cheng; Jingjing Liu", "journal": "Springer", "ref_id": "b3", "title": "Uniter: Universal image-text representation learning", "year": "2020" }, { "authors": "Kevin Clark; Priyank Jaini", "journal": "", "ref_id": "b4", "title": "Text-to-image diffusion models are zero-shot classifiers", "year": "2023" }, { "authors": "Danilo Croce; Giuseppe Castellucci; Roberto Basili", "journal": "", "ref_id": "b5", "title": "Gan-bert: Generative adversarial learning for robust text classification with a bunch of labeled examples", "year": "2020" }, { "authors": "Tri Dao; Daniel Y Fu; Stefano Ermon; Atri Rudra; Christopher Ré", "journal": "", "ref_id": "b6", "title": "FlashAttention: Fast and memory-efficient exact attention with IO-awareness", "year": "2022" }, { "authors": "Weixi Feng; Xuehai He; Tsu-Jui Fu; Varun Jampani; Arjun Akula; Pradyumna Narayana; Sugato Basu; Xin ; Eric Wang; William Yang; Wang ", "journal": "", "ref_id": "b7", "title": "Training-free structured diffusion guidance for compositional text-to-image synthesis", "year": "2022" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "", "ref_id": "b8", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Yash Goyal; Tejas Khot; Douglas Summers-Stay; Dhruv Batra; Devi Parikh", "journal": "", "ref_id": "b9", "title": "Making the v in vqa matter: Elevating the role of image understanding in visual question answering", "year": "2017" }, { "authors": "Ruifei He; Shuyang Sun; Xin Yu; Chuhui Xue; Wenqing Zhang; Philip Torr; Song Bai; Xiaojuan Qi", "journal": "", "ref_id": "b10", "title": "Is synthetic data from generative models ready for image recognition", "year": "2022" }, { "authors": "Xuehai He; Diji Yang; Weixi Feng; Tsu-Jui Fu; Arjun Akula; Varun Jampani; Pradyumna Narayana; Sugato Basu; William Yang; Wang ; Xin Eric; Wang ", "journal": "", "ref_id": "b11", "title": "Cpl: Counterfactual prompt learning for vision and language models", "year": "2022" }, { "authors": "Amir Hertz; Ron Mokady; Jay Tenenbaum; Kfir Aberman; Yael Pritch; Daniel Cohen-Or", "journal": "", "ref_id": "b12", "title": "Prompt-to-prompt image editing with cross attention control", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b13", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Edward J Hu; Yelong Shen; Phillip Wallis; Zeyuan Allen-Zhu; Yuanzhi Li; Shean Wang; Lu Wang; Weizhu Chen", "journal": "", "ref_id": "b14", "title": "LoRA: Low-Rank Adaptation of Large Language Models", "year": "2021" }, { "authors": "Andrew Jaegle; Felix Gimeno; Andy Brock; Oriol Vinyals; Andrew Zisserman; Joao Carreira", "journal": "", "ref_id": "b15", "title": "Perceiver: General perception with iterative attention", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b16", "title": "", "year": "" }, { "authors": "Ali Jahanian; Xavier Puig; Yonglong Tian; Phillip Isola", "journal": "", "ref_id": "b17", "title": "Generative models as a data source for multiview representation learning", "year": "2021" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "", "ref_id": "b18", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": " Pmlr", "journal": "", "ref_id": "b19", "title": "", "year": "" }, { "authors": "Menglin Jia; Luming Tang; Bor-Chun Chen; Claire Cardie; Serge Belongie; Bharath Hariharan; Ser-Nam Lim", "journal": "", "ref_id": "b20", "title": "Visual prompt tuning", "year": "2022" }, { "authors": "Kenan Jiang; Xuehai He; Ruize Xu; Xin Eric; Wang ", "journal": "", "ref_id": "b21", "title": "Comclip: Training-free compositional image and text matching", "year": "2022" }, { "authors": "Andrej Karpathy; Li Fei-Fei", "journal": "", "ref_id": "b22", "title": "Deep visual-semantic alignments for generating image descriptions", "year": "2015" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b23", "title": "Auto-encoding variational bayes", "year": "2013" }, { "authors": "Ranjay Krishna; Yuke Zhu; Oliver Groth; Justin Johnson; Kenji Hata; Joshua Kravitz; Stephanie Chen; Yannis Kalantidis; Li-Jia Li; David A Shamma", "journal": "International journal of computer vision", "ref_id": "b24", "title": "Visual genome: Connecting language and vision using crowdsourced dense image annotations", "year": "2017" }, { "authors": "Benno Krojer; Elinor Poole-Dayan; Vikram Voleti; Christopher Pal; Siva Reddy", "journal": "", "ref_id": "b25", "title": "Are diffusion models vision-and-language reasoners?", "year": "2023" }, { "authors": "Nupur Kumari; Bingliang Zhang; Richard Zhang; Eli Shechtman; Jun-Yan Zhu", "journal": "", "ref_id": "b26", "title": "Multiconcept customization of text-to-image diffusion", "year": "2022" }, { "authors": "Mingi Kwon; Jaeseok Jeong; Youngjung Uh", "journal": "", "ref_id": "b27", "title": "Diffusion models already have a semantic latent space", "year": "2022" }, { "authors": "Mingi Kwon; Jaeseok Jeong; Youngjung Uh", "journal": "", "ref_id": "b28", "title": "Diffusion models already have a semantic latent space", "year": "2022" }, { "authors": "Kuang-Huei Lee; Xi Chen; Gang Hua; Houdong Hu; Xiaodong He", "journal": "", "ref_id": "b29", "title": "Stacked cross attention for image-text matching", "year": "2018" }, { "authors": "Benjamin Lefaudeux; Francisco Massa; Diana Liskovich; Wenhan Xiong; Vittorio Caggiano; Sean Naren; Min Xu; Jieru Hu; Marta Tintore; Susan Zhang; Patrick Labatut; Daniel Haziza", "journal": "", "ref_id": "b30", "title": "xformers: A modular and hackable transformer modelling library", "year": "2022" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "", "ref_id": "b31", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Mihir Alexander C Li; Shivam Prabhudesai; Ellis Duggal; Deepak Brown; Pathak", "journal": "", "ref_id": "b32", "title": "Your diffusion model is secretly a zero-shot classifier", "year": "2023" }, { "authors": "Liunian Harold; Li ; Mark Yatskar; Cho-Jui Da Yin; Kai-Wei Hsieh; Chang", "journal": "", "ref_id": "b33", "title": "Visualbert: A simple and performant baseline for vision and language", "year": "2019" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "", "ref_id": "b34", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Xiao Liu; Kaixuan Ji; Yicheng Fu; Zhengxiao Du; Zhilin Yang; Jie Tang", "journal": "", "ref_id": "b35", "title": "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks", "year": "2021" }, { "authors": "Andrew Ng; Michael Jordan", "journal": "Advances in neural information processing systems", "ref_id": "b36", "title": "On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes", "year": "2001" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b37", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b38", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b39", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Aurelio Marc; Joshua Ranzato; Volodymyr Susskind; Geoffrey Mnih; Hinton", "journal": "IEEE", "ref_id": "b40", "title": "On deep generative models with applications to recognition", "year": "2011" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b41", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b42", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b43", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b44", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; S Sara Mahdavi; Rapha Gontijo Lopes; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b45", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b46", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Timo Schick; Hinrich Schütze", "journal": "", "ref_id": "b47", "title": "Exploiting cloze questions for few shot text classification and natural language inference", "year": "2020" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b48", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b49", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": " Pmlr", "journal": "", "ref_id": "b50", "title": "", "year": "" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b51", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Yang Song; Stefano Ermon", "journal": "", "ref_id": "b52", "title": "Generative modeling by estimating gradients of the data distribution", "year": "2019" }, { "authors": "Raphael Tang; Akshat Pandey; Zhiying Jiang; Gefei Yang; Karun Kumar; Jimmy Lin; Ferhan Ture", "journal": "", "ref_id": "b53", "title": "What the daam: Interpreting stable diffusion using cross attention", "year": "2022" }, { "authors": "Maria Tsimpoukelli; Jacob L Menick; Serkan Cabi; Oriol Eslami; Felix Vinyals; Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b54", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b55", "title": "Attention is all you need", "year": "2017" }, { "authors": "Chen Wei; Karttikeya Mangalam; Po-Yao Huang; Yanghao Li; Haoqi Fan; Hu Xu; Huiyu Wang; Cihang Xie; Alan Yuille; Christoph Feichtenhofer", "journal": "", "ref_id": "b56", "title": "Diffusion models as masked autoencoders", "year": "2023" }, { "authors": "Julia Wolleb; Robin Sandkühler; Florentin Bieder; Philippe Valmaggia; Philippe C Cattin", "journal": "", "ref_id": "b57", "title": "Diffusion models for implicit image segmentation ensembles", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b58", "title": "", "year": "" }, { "authors": "Lewei Yao; Runhui Huang; Lu Hou; Guansong Lu; Minzhe Niu; Hang Xu; Xiaodan Liang; Zhenguo Li; Xin Jiang; Chunjing Xu", "journal": "", "ref_id": "b59", "title": "Filip: Fine-grained interactive language-image pre-training", "year": "2021" }, { "authors": "Min Kang; Dongju Yoo; Jaewook Park; Sang-Woo Kang; Woomyeong Lee; Park", "journal": "", "ref_id": "b60", "title": "Gpt3mix: Leveraging large-scale language models for text augmentation", "year": "2021" }, { "authors": "Licheng Yu; Patrick Poirson; Shan Yang; Alexander C Berg; Tamara L Berg", "journal": "Springer", "ref_id": "b61", "title": "Modeling context in referring expressions", "year": "2016" }, { "authors": "Lu Yuan; Dongdong Chen; Yi-Ling Chen; Noel Codella; Xiyang Dai; Jianfeng Gao; Houdong Hu; Xuedong Huang; Boxin Li; Chunyuan Li", "journal": "", "ref_id": "b62", "title": "Florence: A new foundation model for computer vision", "year": "2021" }, { "authors": "Yuxuan Zhang; Huan Ling; Jun Gao; Kangxue Yin; Jean-Francois Lafleche; Adela Barriuso; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b63", "title": "Datasetgan: Efficient labeled data factory with minimal human effort", "year": "2021" }, { "authors": "Kaiyang Zhou; Jingkang Yang; Chen Change Loy; Ziwei Liu", "journal": "", "ref_id": "b64", "title": "Conditional prompt learning for vision-language models", "year": "2022" }, { "authors": "Lukas Roland S Zimmermann; Yang Schott; Benjamin A Song; David A Dunn; Klindt", "journal": "", "ref_id": "b65", "title": "Score-based generative classifiers", "year": "2021" } ]
[ { "formula_coordinates": [ 4, 229.17, 414.13, 275.5, 17.25 ], "formula_id": "formula_0", "formula_text": "q(x t |x 0 ) = N (x t ; √ ᾱt x 0 , (1 -ᾱt )I),(1)" }, { "formula_coordinates": [ 4, 234.69, 471.5, 269.98, 9.65 ], "formula_id": "formula_1", "formula_text": "p θ (x t-1 |x t ) = N (µ θ (x t ), Σ θ (x t )),(2)" }, { "formula_coordinates": [ 5, 107.64, 439.57, 397.6, 42.08 ], "formula_id": "formula_2", "formula_text": "A = softmax QK T √ d • V , with Q = W q (i) • r x , K = W k (i) • r y , V = W v (i) • r y . Here, W v (i) ∈ R d×d i ϵ , W q (i) ∈ R d×dτ , W k (i) ∈ R d×dτ are learnable projection matrices" }, { "formula_coordinates": [ 5, 237.37, 695.01, 267.3, 30.32 ], "formula_id": "formula_3", "formula_text": "S(A) = 1 λ log n i=1 exp (λA i,: )(3)" }, { "formula_coordinates": [ 6, 245.34, 137.56, 259.33, 8.96 ], "formula_id": "formula_4", "formula_text": "f (y, x) = Ave(S(A)) = g(A)(4)" }, { "formula_coordinates": [ 6, 272.69, 361.3, 228.11, 11.72 ], "formula_id": "formula_5", "formula_text": "W ′ = W + W p . (5" }, { "formula_coordinates": [ 6, 500.8, 363.69, 3.87, 8.64 ], "formula_id": "formula_6", "formula_text": ")" } ]
10.14569/IJACSA.2022.0130502
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b6" ], "table_ref": [], "text": "Today, we have a mixture of young and older individuals, people with special needs, and people who can care for themselves. Over 1 billion people are estimated to be disabled; this figure corresponds to about 15% of the world's population, with 3.8% (approximately 190 million people) accounting for people aged 15 and up (Organization, 2011). The number of people with disabilities is upward due to the increase in chronic health conditions and many other things. These and other factors have made the need for proper care facilities urgent in today's society. Several care facilities are built to help people with disabilities live their everyday lives and not be left out of the community." }, { "figure_ref": [], "heading": "What exactly are care facilities?", "publication_ref": [ "b9", "b1" ], "table_ref": [], "text": "Care is caring for somebody or something and providing what they need for their health or protection (Oxford University Press, 2022). Moreover, facilities are defined as buildings, services, equipment, etc., provided for a particular purpose. Hence, care facilities can be seen as equipment or services used to provide health protection to someone or people in need. In today's care homes, healthcare workers, such as doctors, nurses, midwives, etc., perform repetitive tasks such as monitoring their patients to ensure they do what they are supposed to or assisting. We are currently experiencing an increasing need for care workers. Some people travel miles to meet up with clients or patients. The National Health Service (NHS) faces a general shortage of about 100,000 healthcare workers across the health sector (Fund, 2018). If current trends continue, the need will be around 250,000 by 2030. In a media release, 40% of medical doctors are close to retirement age in one-third of Europe and Central Asia countries (WHO, 2022). The workforce in this sector is declining, hence the need for a system that can silently assist in providing first-level care.\nImagine a system that can monitor and keep track of your health status continuously, diagnose possible health conditions, and offer advice on your habits to persuade you to do or stop some things you are currently doing that are detrimental to your health. This system could be knitted into a regular clothing fibre or a wearable with a tiny sensor that uses ambient intelligence to communicate with various devices in your apartment, collecting data and giving instructions to all of these other devices in real-time. The term \"ambient intelligence\" refers to the future vision of intelligent computing where the environment supports the people inhabiting it (E. Aarts et al., 2009). In this technology, the human effort used when inputting and outputting information will no longer exist; it depends on interconnected sensors communicating with one another to make critical decisions based on the data gathered." }, { "figure_ref": [], "heading": "2.0", "publication_ref": [], "table_ref": [], "text": "System Engineering Features " }, { "figure_ref": [], "heading": "Attitude towards technology:", "publication_ref": [], "table_ref": [], "text": "James love technology and believes it is here to improve our lives." }, { "figure_ref": [], "heading": "Scenario James", "publication_ref": [], "table_ref": [], "text": "User type: A person with Asthma at retirement age Actors: James (70), hospital and Support workers.\nHelp needed with: ensuring the room is ventilated, needs help during asthmatic attacks and monitoring the respiratory system. James lives alone in a retirement settlement. He moved into the home at age 60. He loves to do things alone.\nIn his apartment, James has graphical schedules; his care provider assists him with household chores and assistance during asthmatic attacks. James is growing old and unable to move around as fast as he used to, so he sometimes needs help with some little things in the house. James' main problem is the asthmatic attack that comes up frequently; He needs help knowing the state of his environment to prevent asthmatic attacks" }, { "figure_ref": [], "heading": "Identifying stakeholders for Bola", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Stakeholder Description", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Primary user", "publication_ref": [], "table_ref": [], "text": "People with Asthma.\nSecondary User Home carers, service support workers, and doctors." }, { "figure_ref": [], "heading": "Tertiary User Son", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Calls Provider", "publication_ref": [], "table_ref": [], "text": "Companies that connect mobile devices with calls and SMS." }, { "figure_ref": [], "heading": "Internet provider", "publication_ref": [], "table_ref": [], "text": "Internet service provider for mobile devices." }, { "figure_ref": [], "heading": "Device Manufacturer", "publication_ref": [], "table_ref": [], "text": "Organisations that make the product." }, { "figure_ref": [], "heading": "Identifying Activities for Bola Stakeholder Goals Sub-Goals Requirements", "publication_ref": [], "table_ref": [], "text": "Primary User (PU)\nIncrease their independence from SUs by checking tendencies of Helps the PU ventilate the environment at home and advises him to leave high pollen or polluted environment.\nRead the amount of air entering the PU's system and its content and make decisions based on the received data.\nasthmatic attacks and ventilating the room.\nAdvises the user what to do during an attack and assist as best as possible PU can access his data." }, { "figure_ref": [], "heading": "Secondary User (SU)", "publication_ref": [], "table_ref": [], "text": "Reduce attention PU requires.\nShow that PU is safe. SU Can get all the respiratory readings of PU remotely.\nSU can know the risk of an attack based on data received from the body. Show SU that PU is doing what he should do at the right time." }, { "figure_ref": [], "heading": "Situational services and interaction types for Bola Activity Situation of Interest", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Situational Need Situational Service", "publication_ref": [], "table_ref": [], "text": "Leave the environment for better ventilation.\nIt is time for him to leave his current environment.\nA record that the user has left the environment.\nSensor updates the PU's response.\nRespiratory readings.\nThe system hospital/ carer needs to keep track of the PU's respiratory reading.\nTake the respiratory reading of the PU's body.\nSensor updates the current activity. " }, { "figure_ref": [], "heading": "James' General information", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Limitations: Joy is not able to communicate", "publication_ref": [], "table_ref": [], "text": "The technology used: No technology.\nAttitude towards technology: No attitude." }, { "figure_ref": [], "heading": "Scenario Joy", "publication_ref": [], "table_ref": [], "text": "User type: A baby that needs monitoring Actors: Joy (2 months old), her Mother and a doctor. Help needed with: ensuring that Joy is safe at night. Joy is a baby who needs monitoring. James' main problem is that the mother sleeps a lot and feels the baby needs to be monitored and catered for at all times." }, { "figure_ref": [], "heading": "Identifying stakeholders for Bola Stakeholder Description", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Primary user Babies", "publication_ref": [], "table_ref": [], "text": "Secondary User Nursing mother" }, { "figure_ref": [], "heading": "Tertiary User Doctor", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Calls Provider", "publication_ref": [], "table_ref": [], "text": "Companies that connect mobile devices with calls and SMS." }, { "figure_ref": [], "heading": "Internet provider", "publication_ref": [], "table_ref": [], "text": "Internet service provider for mobile devices." }, { "figure_ref": [], "heading": "Device Manufacturer", "publication_ref": [], "table_ref": [], "text": "Organisations that make the product. Sensor records event." }, { "figure_ref": [], "heading": "Identifying Activities for", "publication_ref": [], "table_ref": [], "text": "PU's vital is not Ok.\nSU gets notified and can.\nTake PU to the hospital for a check-up.\nPU need to be diagnosed.\nPU is due for a check-up SU gets a reminder before the due date and on the due date.\nSU takes PU to the hospital Sensor records event." }, { "figure_ref": [], "heading": "Expected behaviour", "publication_ref": [], "table_ref": [], "text": "A sensor is embedded in the user's body or put on a wearable; this device can read body vitals in real-time, as described in Figure 3.0 below. -If it is time to take wake up system triggers an alarm for the user to wake up -If the sensor senses no movement, the user's vitals are analysed for a liveliness check -If the user's body readings/ vitals are nuanced, the system prompts the carer to wake the patient up -If any of the user's vital is not OK, an SOS call is sent, notifying the nearest ambulance service and the carer for necessary medical care." }, { "figure_ref": [], "heading": "System architecture and diagram", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Body Reading Scenario for Bola", "publication_ref": [], "table_ref": [], "text": "The sensor constantly takes readings of the kidney status and the blood pressure level of the user as all other necessary body readings.\n-If the system notices any abnormality in Bola's body reading, such as his BP or kidney, the carer is prompted to attend to Bola -If the situation gets worse, the system makes an SOS call to the nearest ambulance service 3.2.0 Describing Scenario for James" }, { "figure_ref": [], "heading": "Leave the current environment (Outside the house)", "publication_ref": [], "table_ref": [], "text": "-If the air in the environment hits, -A high pollen rate or is polluted -The sensor notifies the user to leave the environment and records the user's response -Ensure mother contact the registered doctor on the mobile app" }, { "figure_ref": [], "heading": "Societal Implications", "publication_ref": [], "table_ref": [], "text": "This list some critical societal implications and how the system addresses them." }, { "figure_ref": [], "heading": "1.", "publication_ref": [], "table_ref": [], "text": "Non-Maleficence / Beneficence: The users may be exposed to the infinitesimal amount of electromagnetic rays emitted from sensors; long-time exposure to them can lead to health hazards. An improvement in the system should be implemented to tackle this limitation." }, { "figure_ref": [], "heading": "2.", "publication_ref": [], "table_ref": [], "text": "User-Centred: The system was built to be customisable for each user's health-related issues, giving the user complete control of the system so the system cannot override the user's decision." }, { "figure_ref": [], "heading": "3.", "publication_ref": [], "table_ref": [], "text": "Multiple Users: The system must be built to accommodate different users with different roles and privileges to control what each user can see and do." }, { "figure_ref": [], "heading": "4.", "publication_ref": [], "table_ref": [], "text": "Privacy: Information shared will be consensual, the patient must agree to share data before it is shared, and a case of unintended data sharing is not possible according to the system design. Each user's data is discrete and is not transmitted or used outside the purpose for which It was collected." }, { "figure_ref": [], "heading": "5.", "publication_ref": [], "table_ref": [], "text": "Data Protection: All the sensor data collected are in line with countries' data protection policies. The system stores users' data securely in a distributed database so that there may be no loss of data and the system works effectively when a few data sources are down." }, { "figure_ref": [], "heading": "6.", "publication_ref": [], "table_ref": [], "text": "Security: The sensors communicate over TLS 3.0 protocols, the current standard of encryption, to ensure that a malicious user, which will lead to several risks, is not manipulating data." }, { "figure_ref": [], "heading": "7.", "publication_ref": [], "table_ref": [], "text": "Autonomy: These sensors are very much configurable, and users can override its actions." }, { "figure_ref": [], "heading": "8.", "publication_ref": [], "table_ref": [], "text": "Transparency: There is a precise term stating the nature of the data being stored, the people who can access it, the user being able to see his data and the side effect of the technology. This aims at providing transparency of data." }, { "figure_ref": [], "heading": "9.", "publication_ref": [], "table_ref": [], "text": "Equality, Dignity and Inclusiveness: The values and differences of the users are respected regardless of their sexual orientation, gender, disabilities, race, religious beliefs, political status etc. The cost of these sensors is quite affordable, considering the amount people put into care services." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The sensor connecting with smart devices in the home can tackle the shortage of caregivers and improve healthcare systems for people in need of care using Ambient Intelligent technology. It promises accuracy in real-time analysis of data, intelligently working in the background with minimal human input needed according to how it is configured." } ]
[ { "figure_caption": "Figure 3 .Figure 3 . 1 .331Figure 3.0. Showing the architecture of the System", "figure_data": "", "figure_id": "fig_0", "figure_label": "331", "figure_type": "figure" }, { "figure_caption": "-If baby is cry prompt phone to wake the mother up,3.5.2 Joy's vital is abnormal-If Joy's vital is abnormal -Prompt mother to contact the doctor.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "VivianFemale, 17She lives with her parent. He has a habit of drinking a lot.(Norway)Age: 65Health Condition: High BloodThe technology used: Bola isGender: Malepressure and Kidney failurenot very conversant withFamily/support: no familySensory: Poor visual -needs totechnological gadgets. He has anmemberwear glasses. Body: AverageiPad, which he can operate quiteLiving Conditions: Bola lives in aweight.well. He needs constantcare home with three other men inassistance.their 60s80s with schizophreniaand autism. Hobbies/Interests:Attitude towards technology:Bola enjoys listening to music andBola is not able to navigate hiswatching football matchesway very well across the internetAttitude/Feelings: Bola is aconfident and outspoken person.Skills: Can play the keyboard.Limitations: Bola is not verycommitted to taking his drugs. Heneeds to be prompted.2.1PersonaName 2.2.2 Scenario Bola GenderSpecification(Country)User type: A person with Kidney failure and highBola blood pressure living in a care home need Male, 50 Living in a care home. High Blood pressure and kidney failure. Not very good(Canada) technology to closely monitor his vitals and ensure with technologyJames (US) medication is taken at the right time. Male, 70 Living alone. Asthmatic. Not good with technological devices Actors: James (50) and Support workers.Joy (UK)Female, twoStays with her parents. Nocturnal monitoringmonth", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "", "figure_data": "TakeIt is time for the user toA record that the user hasSensor Updates the state of themedicationtake medicationtaken medicationcurrent activityPrimary user Vital readings The carer needs users' People with High Blood pressure and kidney Take the readings of the Sensor updates the state ofdata for analysis.user's body, kidney status andcurrent activitySecondary UserCarers of a patient with a medical condition in old age blood pressure.Tertiary UserNo tertiary UserCalls ProviderCompanies that connect mobile devices with calls and SMS2.3.1 James' General informationInternet providerInternet service provider for mobile devices James, USDevice Manufacturer General informationOrganisations that make the product ConditionTechnologyAge: 70Health Condition:The technology used: James isGender: MaleAsthmaticnot very conversant withFamily/support: Has a son inSensory: Poor visual -needs totechnological gadgets.2.2.4 Identifying Activities for Bola Dubaiwear glasses. Body: AverageHe has a smart inhaler.Stakeholder Living Conditions: Lives alone in Goals a retirement home.Sub-Goals weight. He has difficulties making Requirements healthy eating choices.Primary UserIncrease theirGuide the PUsReceive data(High Blood pressure and(PU)independence from SUsthrough reminders ofkidney status) from PU's body systemon reminders to take Limitations: James is not able to what they need to do and make decisions based on themedication andnext and advise them see very well without his glasses. received datamonitoring changes in He cannot tell if the environment on what to eat andtheir body systemwhat not to hits high pollen, which couldexpose him to an attack.PU can access his data and the analysedresult of the data.Secondary UserReduce attention PUShow that PU is safe SU Can get all the vital body readings of(SU)requiresPU remotely.Show SU that PU isSU can know the current medicaldoing what he shouldcondition of PU and if he takes hisdo at the right time.medication constantly.2.2.5 Situational services and interaction types for BolaActivitySituation of InterestSituational NeedSituational Service", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" } ]
Amos Okomayin; Tosin Ige
[ { "authors": "", "journal": "", "ref_id": "b0", "title": "Engineering better care a systems approach to health and care design and continuous improvement", "year": "2017" }, { "authors": "E Aarts; R Wichert", "journal": "Technology Guide", "ref_id": "b1", "title": "Ambient Intelligence", "year": "2009" }, { "authors": "T Ige; S Adewale", "journal": "International Journal of Advanced Computer Science and Applications", "ref_id": "b2", "title": "AI powered anti-cyber bullying system using machine learning algorithm of multinomial naïve Bayes and optimized linear support vector machine", "year": "2022" }, { "authors": "D Cook; J Carlos; V R Jakkula", "journal": "Pervasive and mobile computing", "ref_id": "b3", "title": "Ambient intelligence: Technologies, applications, and opportunities", "year": "2009" }, { "authors": "G Acampora; D J Cook; P Rashidi; A V Vasilakos", "journal": "", "ref_id": "b4", "title": "A Survey on Ambient Intelligence in Health Care", "year": "" }, { "authors": "G Acampora; D J Cook; P Rashidi; A V Vasilakos", "journal": "Institute of Electrical and Electronics Engineers", "ref_id": "b5", "title": "A Survey on Ambient Intelligence in Health Care", "year": "2013" }, { "authors": "W H Organization", "journal": "WHO", "ref_id": "b6", "title": "World Report on Disability", "year": "2011" }, { "authors": "", "journal": "World Health Organization", "ref_id": "b7", "title": "Oxford University Learners Dictionary", "year": "2022" }, { "authors": " Who", "journal": "World Health Organization", "ref_id": "b8", "title": "", "year": "2022-11" }, { "authors": "T K Fund", "journal": "", "ref_id": "b9", "title": "The health care workforce in England: make or break", "year": "2018" } ]
[]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b18", "b19", "b20", "b21" ], "table_ref": [], "text": "Federated Learning (FL) [1] has been widely acknowledged as a promising means to design largescale distributed Artificial Intelligence (AI) applications, e.g., Artificial Intelligence of Things (AIoT) systems [2,3], healthcare systems [4,5], and recommender systems [6]. Unlike conventional Deep Learning (DL) methods, the cloud-client architecture based FL supports the collaborative training of a global DL model among clients without compromising their data privacy [7][8][9]. In each FL Besides the average operation, another bottleneck of FedAvg is the uniform global model for local training initialization, which further degrades the overall generalization performance of derived global models. Some recent research on model training indicates that, from the perspective of the loss landscapes of DL models, optimal solutions with the best generalization performance often lie in flat valleys, while inferior ones are always located in sharp ravines [19][20][21][22]. According to such a finding, the generalization performance of FedAvg is hard to be guaranteed. This is because FedAvg initializes the local training with the same global models, which limits the diversity of local searching. This will inevitably result in the notorious stuck-at-local-search problem during the local training, leading to the problems of both an extremely long convergence time and low inference performance.\nTo address the above issues, this paper proposes a novel FL paradigm called FedMR (Federated Model Recombination), which can effectively help the training of local models escape from local minima, while the individual characteristics of local models are still maintained. Unlike FedAvg that aggregates all the collected local models in each FL training round, FedMR randomly shuffles the parameters of different local models within the same layers, and recombines them to form new local models. In this way, FedMR can not only keep individual characteristics of local models at layer-level, but also derive diversified models that can effectively escape from local optimal solutions for the local training of clients. The main contributions of this paper can be summarized as follows:\n• We propose to use our designed fine-grained model recombination method to replace the traditional FedAvg-based model aggregation, and prove its convergence, with the aim of improving FL inference performance. • We introduce an effective two-stage training scheme for FedMR, which combines the merits of both model recombination and aggregation to accelerate the overall FL training process. • We conduct extensive experiments on various models and datasets to show both the effectiveness and compatibility of FedMR." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b22", "b23", "b15", "b13", "b17", "b16" ], "table_ref": [], "text": "To address the problem of uneven data distributions, exiting solutions can be mainly classified into three categories, i.e., client grouping-based methods, global control variable-based methods, and knowledge distillation-based methods. The device grouping-based methods group and select clients for aggregation based on the data similarity between clients. For example, FedCluster [23] divides clients into multiple clusters and performs multiple cycles of meta-update to boost the overall FL convergence. Based on either sample size or model similarity, CluSamp [24] groups clients to achieve a better client representativity and a reduced variance of client stochastic aggregation parameters in FL. By modifying the penalty terms of loss functions during FL training, the global control variable-based methods can be used to smooth the FL convergence process. For example, FedProx [16] regularizes local loss functions with the squared distance between local models and the global model to stabilize the model convergence. Similarly, SCAFFOLD [14] uses global control variables to correct the \"client-drift\" problem in the local training process. Knowledge Distillation (KD)-based methods adopt soft targets generated by the \"teacher model\" to guide the training of \"student models\". For example, by leveraging a proxy dataset, Zhu et al. [18] proposed a data-free knowledge distillation method named FedGen to address the heterogeneous FL problem using a built-in generator. With ensemble distillation, FedDF [17] accelerates the FL training by training the global model through unlabeled data on the outputs of local models.\nAlthough the above methods can optimize FL performance from different perspectives, since conducts coarse-grained model aggregation, the inference capabilities of local models are still strongly restricted. Furthermore, most of them cannot avoid non-negligible communication and computation overheads or the risk of data privacy exposure. To the best of our knowledge, FedMR is the first attempt that uses model recombination and different models for fine-grained FL. Since FedMR considers the specific characteristics and efforts of local models, it can further mitigate the weight divergence problem, thus achieving better inference performance than state-of-the-art FL methods." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Motivation", "publication_ref": [ "b24", "b25" ], "table_ref": [], "text": "Comparison between FedAvg and Independent Training. Figure 1 illustrates the FL training processes on the same loss landscape using FedAvg and Independent training (Indep), respectively, where the server of Indep just shuffles the received local models and then randomly dispatches them to clients without aggregation. In each subfigure, the local optima are located within the areas surrounded by red solid lines. Note that, since the upper surrounded area is flatter than the lower surrounded area in the loss landscape, the solutions within it will exhibit better generalization. According to [25,26], a small perturbation of the model weights can make it easier for local training to jump out of sharp ravines rather than flat valleys. In other words, the recombined models are more likely to converge toward flat valleys along the local training. For example, in the end of round 3, we can find that all three local models are located in the upper surrounded area, where their aggregated model has better generalization performance than the one achieved in Figure 1(a). Step 1 (Model Dispatching): The cloud server dispatches K recombined models to K selected clients according to their indices, where K is the number of activated clients, which participate in each round of FL training. Note that, unlike FedAvg-based FL methods, in FedMR different clients will receive different models for the local training purpose." }, { "figure_ref": [], "heading": "Local Model Global Model Local Training", "publication_ref": [], "table_ref": [], "text": "… … ! ! \" ! # \" ! $ \" ! ! ! # ! $ 𝑙𝑖𝑠𝑡 ! 𝑙𝑖𝑠𝑡 \" 𝑙𝑖𝑠𝑡 # 𝑙𝑖𝑠𝑡 $(\nStep 2 (Model Upload): Once the local training is finished, a client needs to upload the parameters of its trained local model to the cloud server.\nStep 3 (Model Recombination): The cloud server decomposes received local models into multiple layers individually in the same manner and then conducts the random shuffling of the same layers among different local models. Finally, by concatenating layers from different sources in order, a new local model can be reconstructed. Note that any decomposed layer of the uploaded model will eventually be used by one and only one of the recombined models. v" }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Implementation of FedMR Algorithm 1 Implementation of FedMR", "publication_ref": [], "table_ref": [], "text": "Lr[i] r+1 = w Lr[i] r -η∇f Lr[i] (w Lr[i] r ), s.t., f Lr[i] (w Lr[i] r ) = 1 |D Lr[i] | |D Lr [i] | j=1 (w Lr[i] r ; x j ; y j ),\nwhere v\nLr[i] r indicates parameters of the trained local model, D Lr[i] denotes the dataset of client L r [i],\nη is the learning rate, (•) is the loss function, x j is the j th sample in D Lr[i] , and y j is the label of x j . Once the local training is finished, the client needs to upload the parameters of its trained local model to the cloud server by updating\nL m using L m [i] = v Lr[i]\nr+1 . Note that, similar to traditional FL methods, in each training round FedMR needs to transmit the parameters of 2K models between the cloud server and its selected clients.\nModel Recombination. Typically, a DL model consists of multiple layers, e.g., convolutional layers, pooling layers, and Fully Connected (FC) layers. To simplify the description of our model recombination method, we do not explicitly present the layer types here. Let w x = {l x 1 , l x 2 , ..., l x n } be the parameters of model x, where l x i (i ∈ [n]) denotes the parameters of the i th layer of model x. In each FL round, FedMR needs to conduct the model recombination base on L m to obtain new models for the local training. Figure 3(b) shows an example of model recombination based on the shuffling of model layers. When receiving all the trained local models (i.e., m 1 , m 2 , ..., m K ) from clients, firstly the cloud server needs to decouple the layers of these models individually. For example, the model m 1 can be decomposed into four layers. Assuming that the local models are with an architecture of w, to enable the recombination, FedMR then constructs n lists, where the k th (k ∈ [n]) list contains all the k th layers of the models in L m . As an example shown in Figure 3(b), FedMR constructs four lists (i.e., list 1 -list 4 ) for the K models (i.e., m 1 -m K ), where each list consists of K elements (i.e., K layers with the same index). After shuffling each list, FedMR generates |L m | recombined models based on shuffled results. For example, the top three layers of the recombined model m 1 come from the models m 1 , m 2 and m K , respectively." }, { "figure_ref": [], "heading": "Two-Stage Training Scheme for FedMR", "publication_ref": [], "table_ref": [], "text": "Although FedMR enables finer FL training, when starting from blank models, FedMR converges more slowly than traditional FL methods at the beginning. This is mainly because, due to the low matching degree between layers in the recombined models, the model recombination operation in this stage requires more local training time to re-construct the new dependencies between layers. To accelerate the overall convergence, we propose a two-stage training scheme for FedMR, consisting of both the aggregation-based pre-training stage and model recombination stage. In the first stage, we train the local models coarsely using the FedAvg-based aggregation, which can quickly form a pre-trained global model. In the second stage, starting from the pre-trained models, FedMR dispatches recombined models to clients for local training. Due to the synergy of both FL paradigms, the overall FedMR training time can be reduced." }, { "figure_ref": [], "heading": "Convergence Analysis", "publication_ref": [ "b27", "b27" ], "table_ref": [], "text": "Based on the same assumptions [28] as follows posed on the loss functions of local clients in FedAvg, this subsection conducts the convergence analysis for FedMR.\nAssumption 4.1. For i ∈ {1, 2, • • • , K}, f i is L-smooth satisfying ||∇f i (x)-∇f i (y)|| ≤ L 2 ||x-y||. Assumption 4.2. For i ∈ {1, 2, • • • , K}, f i is µ-strongly convex satisfying ||∇f i (x) -∇f i (y)|| ≥ µ 2 ||x -y||, where µ ≥ 0. Assumption 4.3.\nThe variance of stochastic gradients is upper bounded by θ 2 and the expectation of squared norm of stochastic gradients is upper bounded by G 2 , i.e., E||∇f k (w; ξ)\n-∇f k (w)|| 2 ≤ θ 2 , E||∇f k (w; ξ)|| 2 ≤ G 2 ,\nwhere ξ is a data batch of the k th client in the t th FL round.\nBased on the implementation of function ModelRecombine(•), we can derive the following two lemmas for the model recombination operation: Lemma 4.4. Assume that in FedMR there are K clients participating in every FL training round. Let {v 1 r , v 2 r , .., v K r } and {w 1 r , w 2 r , .., w K r } be the set of trained local model weights and the set of recombined model weights generated in the (r -1) th round, respectively. Assume x is a vector with the same size as that of v k r . We have\nK k=1 v k r = K k=1\nw k r , and\nK k=1 ||v k r -x|| 2 = K k=1 ||w k r -x|| 2 .\nWe prove Theorem 1 based on Lemmas 4.4. Please refer to Appendix A for the proof. Theorem 1. (Convergence of FedMR) Let Assumption 4.1, 4.2, and 4.3 hold. Assume that E is the number of SGD iterations conducted within one FL round, model recombination is conducted at the end of each FL round, and the whole training terminates after n FL rounds. Let T = n × E be the total number of SGD iterations conducted so far, and η k = 2 µ(T +γ) be the learning rate. We can have\nE[f (w T )] -f ≤ L 2µ(T + γ) [ 4B µ + µ(γ + 1) 2 ∆1],\nwhere\nB = 10LΓ + 4(E -1) 2 G 2 , w T = k = 1 K w k T .\nTheorem 1 indicates that the difference between the current loss f (w T ) and the optimal loss f is inversely related to t. From Theorem 1, we can find that the convergence rate of FedMR is similar to that of FedAvg, which has been analyzed in [28]." }, { "figure_ref": [], "heading": "Experimental Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_5" ], "heading": "Experimental Settings", "publication_ref": [ "b0", "b15", "b13", "b28", "b17", "b23", "b17", "b23", "b29", "b30", "b31", "b0", "b32" ], "table_ref": [], "text": "To evaluate the effectiveness of FedMR, we implemented FedMR on top of a cloud-based architecture. Since it is impractical to allow all the clients to get involved in the training processes simultaneously, we assumed that there are only 10% of clients participating in the local training in each FL round. To enable fair comparison, all the investigated FL methods including FedMR set SGD optimizer with a learning rate of 0.01 and a momentum of 0.9. For each client, we set the batch size of local training to 50, and performed five epochs for each local training. All the experimental results were obtained from an Ubuntu workstation with Intel i9 CPU, 32GB memory, and NVIDIA RTX 3080 GPU.\nBaseline Method Settings. We compared the test accuracy of FedMR with six baseline methods, i.e., FedAvg [1], FedProx [16], SCAFFOLD [14], MOON [29], FedGen [18], and CluSamp [24]. Here, FedAvg is the most classical FL method, while the other five methods are the state-of-the-art (SOTA) representatives of the three kinds of FL optimization methods introduced in the related work section. Specifically, FedProx, SCAFFOLD, and MOON are global control variable-based methods, FedGen is a KD-based approach, and CluSamp is a device grouping-based method. For FedProx, we used a hyper-parameter µ to control the weight of its proximal term, where the best values of µ for CIFAR-10, CIFAR-100, and FEMNIST are 0.01, 0.001, and 0.1, respectively. For FedGen, we adopted the same server settings presented in [18]. For CluSamp, the clients were clustered based on the model gradient similarity as described in [24].\nDataset Settings. We investigated the performance of our approach on three well-known datasets, i.e., CIFAR-10, CIFAR-100 [30], and FMNIST [31]. We adopted the Dirichlet distribution [32] to control the heterogeneity of client data for both CIFAR-10 and CIFAR-100. Here, the notation Dir(α) indicates a different Dirichlet distribution controlled by α, where a smaller α means higher data heterogeneity of clients. Note that, different from datasets CIFAR-10 and CIFAR-100, the raw data of FEMNIST are naturally non-IID distributed. Since FEMNIST takes various kinds of data heterogeneity into account, we did not apply the Dirichlet distribution on FEMNIST. For both CIFAR-10 and CIFAR-100, we assumed that there are 100 clients in total participating in FL. For FEMNIST, we only considered one non-IID scenario involving 180 clients, where each client hosts more than 100 local data samples.\nModel Settings. To demonstrate the pervasiveness of our approach, we developed different FedMR implementations based on three different models (i.e., CNN, ResNet-20, VGG-16). Here, we obtained the CNN model from [1], which consists of two convolutional layers and two FC layers. When conducting FedMR based on the CNN model, we directly applied the model recombination for local training on it without pre-training a global model, since CNN here only has four layers. We obtained both ResNet-20 and VGG-16 models from Torchvision [33]. • When performing FedMR based on ResNet-20 and VGG-16, due to the deep structure of both models, we adopted the two-stage training scheme, where the first stage lasts for 100 rounds to obtain a pre-trained global model. FedAvg from the perspectives of both test loss and inference accuracy. Due to the space limitation, for Indep here we only present the results of its four random local models (denoted by Model-1, Model-2, Model-3, and Model-4). To enable a fair comparison with FedAvg, although there is no aggregated global model in Indep, we considered the aggregated model of all its local models for each FL round, whose results are indicated by the notion \"IndepAggr\". From Figure 4, we can find that all the local models in Indep can achieve both higher accuracy and lower loss than those of FedAvg, though their loss and accuracy curves fluctuate more sharply. Moreover, IndepAggr exhibits much worse performance than the other references. This is mainly because, according to the definition of Indep, each local model needs to traverse multiple clients along with the FL training processes, where the optimization directions of client models differ in the corresponding loss landscape. Model Recombination. To validate the intuition about the impacts of model recombination as presented in Section 3, we conducted three experiments on CIFAR-10 dataset using ResNet-20 model. Our goal is to figure out the following three questions: i) by using model recombination, can all the models eventually have the same optimization direction; ii) compared with FedAvg, can the global model of FedMR eventually converge into a more flat solution; and iii) can the global model of FedMR eventually converge to a more generalized solution?" }, { "figure_ref": [ "fig_7", "fig_1" ], "heading": "Validation for Intuitions", "publication_ref": [], "table_ref": [], "text": "Figure 5 presents the average cosine similarity between all the intermediate models, taking four different client data distributions into account. We can observe that the model similarity decreases first and then gradually increases in all the investigated IID and non-IID scenarios. This is mainly because, due to the nature of Stochastic Gradient Descent (SGD) mechanism and the data heterogeneity among clients, all local models are optimized toward different directions at the beginning of training. However, as the training progresses, the majority of local models will be located in the same flat valleys, leading to similar optimization directions for local models. These results are consistent with our intuition as shown in Figure 2. We can observe that, due to the superiority in generalization, the models trained by FedMR outperform those by FedAvg for all the four cases." }, { "figure_ref": [], "heading": "Performance Comparison", "publication_ref": [ "b30" ], "table_ref": [ "tab_2", "tab_2" ], "text": "We compared the performance of our FedMR approach with six SOTA baselines. For datasets CIFAR-10 and CIFAR-100, we considered both IID and non-IID scenarios (with α = 0.1, 0.5, 1.0, respectively). For dataset FEMNIST, we considered its original non-IID settings [31]. We also present the analysis of the comparison of computing and communication overhead in Appendix C.1.\nComparison of Test Accuracy Table 1 compares FedMR with the SOTA FL methods considering both non-IID and IID scenarios based on three different DL models. The first two columns denote the model type and dataset type, respectively. Note that to enable fair comparison, we cluster the test accuracy results generated by the FL methods based on the same type of local models. The third column shows different distribution settings for client data, indicating the data heterogeneity of clients. The fourth column has seven sub-columns, which present the test accuracy information together with its standard deviation for all the investigated FL methods, respectively.\nFrom Table 1, we can observe that FedMR can achieve the highest test accuracy in all the scenarios regardless of model type, dataset type, and data heterogeneity. For CIFAR-10 and CIFAR-100, we can find that FedMR outperforms the six baseline methods significantly in both non-IID and IID scenarios. For example, when dealing with a non-IID CIFAR-10 scenario (α = 0.1) using ResNet-20-based models, FedMR achieves test accuracy with an average of 62.09%, while the second highest average test accuracy obtained by SCAFFOLD is only 50.46%. Note that the performance of FedMR on FEMNIST is not as notable as the one on both CIFAR-10 and CIFAR-100. This is mainly because the classification task on FEMNIST is much simpler than the ones applied on datasets CIFAR-10 and CIFAR-100, which leads to the high test accuracy of the six baseline methods. However, even in this case, FedMR can still achieve the best test accuracy among all the investigated FL methods. distributions of clients. From this figure, we can find that FedMR outperforms the other six FL methods consistently in both non-IID and IID scenarios. This is mainly because FedMR can easily escape from the stuck-at-local-search due to the model recombination operation in each FL round. Moreover, due to the fine-grained training, we can observe that the learning curves in each sub-figure are much smoother than the ones of other FL methods. We also conducted the comparison for CNN-and VGG-16-based FL methods, and found similar observations from them. Please refer to Appendix C.2 for more details. " }, { "figure_ref": [], "heading": "Comparison of Model Convergence", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation Study Impacts of Activated Clients", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_16" ], "heading": "Impacts of Model Layer Partitioning", "publication_ref": [], "table_ref": [], "text": "To show the effectiveness of our layer-wise model recombination scheme, we evaluated the FedMR performance using different model layer partitioning strategies. We use \"FedMR-px\" to denote that the model is divided into 1\nx (x ∈ (0, 1.0]) segments, where the model recombination is based on segments rather than layers. Note that x = 1.0 indicates an extreme case, where local models are randomly dispatched to clients without recombination.\nFigure 10 presents the ablation study results on CIFAR-10 dataset using ResNet-20-based and VGG-16-based FedMR, where the data on clients are non-IID distributed (α = 1.0). Note that, all the cases here did not use the two-stage training scheme. From this figure, we can find that FedMR outperforms the other variants significantly. Moreover, when the granularity of partitioning goes coarser, the classification performance of FedMR becomes worse. 11, we can observe that the two-stage training-based FedMR methods (i.e., FedMR-50 and FedMR-100) achieve the best performance from the perspectives of test accuracy and convergence rate. Note that our two-stage training scheme can achieve a more significant improvement on the case using VGG-16 model, which has a much larger size than ResNet-20 model. This is because the fine-grained FedMR without the first-stage training is not good at dealing with large-size models at the beginning of FL training, which requires much more training rounds than the coarse-grained aggregation-based methods to achieve a given preliminary classification accuracy target. By resorting to the two-stage training scheme, such a slow convergence problem can be greatly mitigated." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "Due to the coarse-grained aggregation of FedAvg as well as the uniform client model initialization, when dealing with uneven data distribution among clients, existing Federated Learning (FL) methods greatly suffer from the problem of low inference performance. To address this problem, this paper presented a new FL paradigm named FedMR, which enables different layers of local models to be trained on different clients. Since FedMR supports both fine-grained model recombination and diversified local training initialization, it enables effective and efficient search for superior generalized models for all clients. Comprehensive experimental results show both the effectiveness and pervasiveness of our proposed method in terms of inference accuracy and convergence rate." }, { "figure_ref": [], "heading": "A Proof of FedMR Convergence A.1 Notations", "publication_ref": [ "b33", "b27" ], "table_ref": [], "text": "In our FedMR approach, the global model is aggregated from all the recombined models and all the models have the same weight. Let t exhibit the t th SGD iteration on the local device, v is the intermediate variable that represents the result of SGD update after exactly one iteration. The update of FedMR is as follows:\nv k t+1 = w k t -η t ∇f k (w k t , ξ k t ),(1)\nw k t+1 = v k t+1 , if E t + 1 RM (v k t+1 ), if E | t + 1\n, where w k t represents the model of the k th client in the t th iteration. w t+1 denotes the global model of the (t + 1)\nth iteration. RM (v k t+1 ) denotes the recombined model. Since FedMR recombines all the local models in each round and the recombination only shuffles layers of models, the parameters of recombined models are all from the models before recombination, and no parameters are discarded. Therefore, when E | t + 1, we can obtain the following invariants:\nK k=1 v k t+1 = K k=1 RM (v k t+1 ) = K k=1 w k t+1 ,(2)\nK k=1 ||v k t+1 -x|| 2 = K k=1 ||w k t+1 -x|| 2 ,(3)\nwhere w k t is the k th recombined model in (t -1) th iteration, which is as the local model to be dispatched to k th client in t th iteration, x can any vector with the same size as v k t . Similar to [34], we define two variables v t and w t :\nv t = 1 K K k=1 v k t , w t = 1 K k k=1 w k t .(4)\nInspired by [28], we make the following definition:\ng k t = ∇f k (w k t ; ξ k t ).\nA.2 Proof of Lemma 4.4\nProof. Assume v k t has n layers, we have\nv k t = L 1 ⊕L 2 ⊕...⊕L n . Let L i = [p v k t (i,0) , p v k t (i,1) , ..., p v k t (i,|Li|) ]\n, where p v k t (i,j) denotes the j th parameter of the layer L i in the model v k t . We have\nK k=1 ||v k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p v k t (i,j) -p x (i,j) || 2 (5) K k=1 ||w k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p w k t (i,j) -p x (i,j) || 2(6)\nSince model recombination only shuffles layers of models, the parameters of recombined models are all from the models before recombination, and no parameters are discarded. We have\n∀ i∈[1,n],j∈[1,|Li|] K k=1 p v k t (i,j) = K k=1 p w k t (i,j)(7)\n∀ k∈[1,K],i∈[1,n],j∈[1,|Li|] ∃ q∈[1,K] {p v k t (i,j) = p w q t (i,j) } (8) ∀ k∈[1,K],i∈[1,n],j∈[1,|Li|] ∃ q∈[1,K] {p w k t (i,j) = p v q t (i,j) }(9)\nAccording to Equation 7-9, we have\nK k=1 ||v k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p v k t (i,j) -p x (i,j) || 2 = K k=1 n i=1 |Li| j=1 ||p w k t (i,j) -p x (i,j) || 2 = K k=1 ||w k t -x|| 2(10)" }, { "figure_ref": [], "heading": "A.3 Key Lemmas", "publication_ref": [ "b27" ], "table_ref": [], "text": "To facilitate the proof of our theorem, we present the following lemmas together with their proofs. Lemma A.1. (Results of one step SGD). If η t ≤ 1 4L , we have\nE||v t+1 -w || 2 ≤ 1 K K k=1 (1 -µη t )||v k t -w || 2 + 1 K K k=1 ||w k t -w k t0 || 2 + 10η 2 t LΓ.\nProof. According to Equation 2 and Equation 3, we have\n||v t+1 -w || 2 ≤ 1 K K k=1 ||v k t+1 -w || 2 = 1 K K k=1 ||v k t -η t g k t -w || 2 = 1 K K k=1 (||v k t -w || 2 -2η t w k t -w , g k t + η 2 t ||g k t || 2 )(11)\nLet\nB 1 = -2η t w k t -w , g k t and B 2 = η 2 t K k=1 ||g k t || 2 . According to Assumption 4.2, we have B 1 ≤ -2η t (f k (w k t ) -f k (w )) -µη t ||w k t -w || 2(12)\nAccording to Assumption 4.1, we have\nB 2 ≤ 2η 2 t L(f k (w k t ) -f k )(13)\nAccording to Equation 12 and 13, we have\n||v t+1 -w || 2 ≤ 1 K K k=1 [(1-µη t )||v k t -w || 2 -2η t (f k (w k t )-f k (w ))+2η 2 t L(f k (w k t )-f k )] (14) Let C = 1 K K k=1 [-2η t (f k (w k t ) -f k (w )) + 2η 2 t L(f k (w k t ) -f k )]. We have C = -2η t K K k=1 (f k (w k t ) -f k (w )) + 2η 2 t L K K k=1 (f k (w k t ) -f k ) = - 2η t (1 -η t L) K K k=1 (f k (w k t ) -f ) + 2η 2 t L K K k=1 (f -f k )(15)\nLet Γ = f -1 K K k=1 f k and φ = 2η t (1 -Lη t ). We have C = - φ K K k=1 (f k (w k t ) -f ) + 2η 2 t LΓ(16)\nLet D = -1 K K k=1 (f k (w k t ) -f ), E | t 0 and t -t 0 ≤ E. We have D = - 1 K K k=1 (f k (w k t ) -f k (w k t0 ) + f k (w k t0 ) -f )(17)\nBy Cauchy-Schwarz inequality, we have\nD ≤ 1 2K K k=1 (η t ||∇f k (w k t0 )|| 2 + 1 η t ||w k t -w k t0 || 2 ) - 1 K K k=1 (f k (w k t0 ) -f ) ≤ 1 2K K k=1 [2η t L(f k (w k t0 ) -f k ) + 1 η t ||w k t -w k t0 || 2 ] - 1 K K k=1 (f k (w k t0 ) -f )(18)\nNote that since η ≤ 1 4L , η t ≤ φ ≤ 2η t and η t L ≤ 1 4 . According to Equation 18, we have\nC ≤ φ 2K K k=1 [2η t L(f k (w k t0 ) -f k ) + 1 η t ||w k t -w k t0 || 2 ] - φ K K k=1 (f k (w k t0 ) -f ) + η 2 t LΓ = φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + 2η 2 t L)Γ + φ K K k=1 (f -f k (w k t0 )) ≤ φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + 2η 2 t L)Γ + φ K K k=1 (f -f k ) ≤ φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + φ + 2η 2 t L)Γ ≤ 1 K K k=1 ||w k t -w k t0 || 2 + (2η 2 t L + 2η t )Γ ≤ 1 K K k=1 ||w k t -w k t0 || 2 + 10η 2 t LΓ(19)\nLemma A.2. Within our configuration, the model recombination occurs every E iterations. For arbitrary t, there always exists t 0 ≤ t while t 0 is the nearest recombination to t. As a result, t -t 0 ≤ E -1 holds. Given the constraint on learning rate from [28], we know that η t ≤ η t0 ≤ 2η t . It follows that\n1 K K k=1 ||w k t -w k t0 || 2 ≤ 4η 2 t (E -1) 2 G 2 .\nProof.\n1 K K k=1 ||w k t -w k t0 || 2 = 1 K K k=1 || t0+E-1 t=t0 η t ∇f a1 (w a1 t ; ξ a1 t )|| 2 ≤ (t -t 0 ) t0+E-1 t=t0 η 2 t G 2 ≤ (E -1) t0+E-1 t=t0 η 2 t G 2 ≤ 4η 2 t (E -1) 2 G 2 . A.4 Proof of Theorem 1 Proof. Let ∆ t = ||w t -w || 2 and ∆ t = 1 K K k=1 ||w k t -w || 2 .\nAccording to Lemma A.1 and Lemma A.2, we have\n∆ t+1 ≤ ∆ t+1 ≤ (1 -µη t )∆ t + η 2 t B,\nwhere\nB = 10LΓ + 4(E -1) 2 G 2 .\nWhen the step size is diminish, let η t = β t+γ , where\nβ > 1 µ , γ > 0 such that η t ≤ min{ 1 µ , 1 4L } = 1 4L and η t ≤ 2η t+E . Let ψ = max{ β 2 B µβ-1 , (γ + 1)∆ 1 }, we firstly proof ∆ t ≤ ψ t+γ . For t = 1, ∆ 1 = ∆ 1 = γ + 1 γ + 1 ∆ 1 ≤ ψ γ + 1(20)\nAssume that\n∆ t ≤ ∆ t ≤ ψ γ+1 , ∆ t+1 ≤ ∆ t+1 ≤ (1 -µη t )∆ t + η 2 t B = (1 - µβ t + γ ) ψ t + γ + β 2 B (t + γ) 2 ≤ t + γ -1 (t + γ) 2 ψ + [ β 2 B (t + γ) 2 - µβ -1 (t + γ) 2 ψ] ≤ t + γ -1 (t + γ) 2 ψ + [ β 2 B (t + γ) 2 - µβ -1 (t + γ) 2 β 2 B µβ -1 ] ≤ ψ t + 1 + γ .(21)\nAccording to Equation 20 and Equation 21, we have\n∆ t ≤ ψ t + γ .(22)\nAccording to Assumption 4.1 and Equation 22, we have\nE[f (w t )] -f ≤ L 2 ∆ t ≤ ψL 2(t + γ)(23)\nIf we set β = 2 µ and γ = max{ 10L µ , E} -1, we have η t = 2 µ(t+γ) and η t ≤ 2η t+E for t ≥ 1. Then we have\nψ = max{ β 2 B µβ -1 , (γ + 1)∆ 1 } ≤ β 2 B µβ -1 + (γ + 1)∆ 1 ≤ 4B µ 2 + (γ + 1)∆ 1(24)\nAccording to Equation 23 and Equation 24, we have \nE[f (w t )] -f ≤ L 2(t + γ) [ 4B µ 2 + (γ + 1)∆ 1 ] = L 2µ(t + γ) [ 4B µ + µ(γ + 1) 2 ∆ 1 ](25)" }, { "figure_ref": [ "fig_1", "fig_1", "fig_1", "fig_1" ], "heading": "B Secure Model Recombination Mechanism", "publication_ref": [], "table_ref": [], "text": "To avoid the risk of privacy leakage caused by exposing gradients or models to the cloud server, we propose a secure model recombination mechanism for FedMR, which allows the random exchange of model layers among clients before model training or upload. As shown in Figure 12, within a round of the secure model recombination, the update of each model (i.e., m) consists of four stages:\nStage 1: Assume that the local model has len layer. Each client maintains a buffer for each layer. Firstly, each client randomly selects a part of its layers and sends them to other activated clients, while the remaining layers are saved in their corresponding buffers. Note that a selected layer can only be sent to one client. For example, in Figure 12, the client m sends layer 2 and layer 4 to c i and c j , respectively.\nStage 2: Once receiving a layer from other client, the receiving client m will add the layer to its corresponding buffer. For example, in Figure 12, the client m totally receives five layers. Besides the retained two layers in stage 1, m now has seven layers in total in its buffers.\nStage 3: For each layer buffer of m, if there contains one element received from a client c in stage 2, our mechanism will randomly select one layer in the buffer and return it back to c. For example, in Figure 12, m randomly returns a layer in Buffer-layer1 back to a client c γ . To further prevent privacy leakage, the cloud server will broadcast a public key before the secure recombination. By using the public key to encrypt the model parameters of each layer, the other clients cannot directly obtain their received parameters.\nAlgorithm 2 Secure Model Recombination Input: i) r s , # of repeat times; ii) idx, index of the client; iii) w, parameters of the model in client idx; iv) L a , the list of activated clients; v) n l , the low bound of # of sending layer; vi) n u , the upper bound of # of sending layer.\nOutput: w, the parameters of the recombined model. SecMR(rnd,S dev ,K) 1: for r = 1, ..., r s do 2:\nn ← Random(n l , n u ); L layer ← Random select n layers from w; Send layer l to client c; \nSince n k ∈ [n l , n u ] where n l > 0 and n u ≤ len(w), and each layer can only be sent to one client, we have\nE K k=1 2 n k j=1 size(l k,j ) = (n u + n l )K len(w) × size(w) ≤ 2n u K len(w) × size(w) ≤ 2K × size(w).(27)\nFrom Equation 27, we can find as the increase of n l + n u , the expectation of communication overhead of secure recombination increases linearly. Comparison of Computing Overhead: Let l n be the number of layers of a model, and P be the number of parameters of a model. For FedMR, in each FL training round, the cloud server needs to calculate l n × K random numbers for shuffling, and each recombination needs to move K × P model parameters. Since l n P , the overall complexity of one FedMR round is O(K × P ) for the cloud server. Note that FedMR does not cause additional computing overhead to the client. For FedAvg, the overall complexity of each FL training round is also O(K × P ), since it needs to aggregate all the local models. Therefore FedMR has a similar computing overhead as FedAvg. Since CluSamp requires cluster clients, its complexity in the cloud server is O(K 2 × P ). Since all the other baselines conduct FedAvg-based aggregation, the computation overhead of the cloud server for them is similar to FedAvg. However, since FedProx, SCAFFOLD, MOON, and FedGen require clients to conduct additional computations, their overall computing overhead is more than those of FedAvg and FedMR." }, { "figure_ref": [], "heading": "C Experimental Results and Discussions", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "C.2 Experimental Results for Accuracy Comparison", "publication_ref": [], "table_ref": [], "text": "In this section, we present all the experimental results. Similar to traditional FedAvg-based FL methods, FedMR does not require clients to send their data to the cloud server, thus the data privacy can be mostly guaranteed by the secure clients themselves. Someone may argue that dispatching the recombined models to adversarial clients may expose the privacy of some clients by attacking their specific layers. However, since our model recombination operation breaks the dependencies between model layers and conducts the shuffling of layers among models, in practice it is hard for adversaries to restore the confidential data from a fragmentary recombined model without knowing the sources of layers. Our secure recombination mechanism ensures that the cloud server only received recombined models from clients, which means the cloud server cannot restore the model of each client." }, { "figure_ref": [], "heading": "C.3.2 Limitations", "publication_ref": [], "table_ref": [], "text": "As a novel FL paradigm, FedMR shows much better inference performance than most SOTA FL methods. Although this paper proposed an efficient two-stage training scheme to accelerate the overall FL training processes, there still exists numerous chances (e.g., client selection strategies, dynamic combination of model aggregation and model recombination operations) to enable further optimization on the current version of FedMR. Meanwhile, the current version of FedMR does not take personalization into account, which is also a very important topic that is worthy of studying in the future. " } ]
Although Federated Learning (FL) enables global model training across clients without compromising their raw data, existing Federated Averaging (FedAvg)based methods suffer from the problem of low inference performance, especially for unevenly distributed data among clients. This is mainly because i) FedAvg initializes client models with the same global models, which makes the local training hard to escape from the local search for optimal solutions; and ii) by averaging model parameters in a coarse manner, FedAvg eclipses the individual characteristics of local models. To address such issues that strongly limit the inference capability of FL, we propose a novel and effective FL paradigm named FedMR (Federated Model Recombination). Unlike conventional FedAvg-based methods, the cloud server of FedMR shuffles each layer of collected local models and recombines them to achieve new models for local training on clients. Due to the diversified initialization models for clients coupled with fine-grained model recombination, FedMR can converge to a well-generalized global model for all the clients, leading to a superior inference performance. Experimental results show that, compared with state-of-the-art FL methods, FedMR can significantly improve inference accuracy in a quicker manner without exposing client privacy.
FedMR: Federated Learning via Model Recombination
[ { "figure_caption": "Figure 1 :1Figure 1: Training processes on the same loss landscape. As shown in Figure 1(a), along with the training process, the aggregated global models denoted by red circles gradually move toward the lower sharp area with inferior solutions, though the optimization of some local models heads toward the upper surrounded area with better solutions. The reason for such biased training is mainly because the local training starts from the same global model in each FL round. As an alternative, due to the lack of aggregation operation, the local models of Indep may converge in different directions as shown in Figure 1(b). In this case, even if some local training in Idep achieves a better solution than the one obtained by FedAvg, due to diversified optimization directions of local models, such an important finding can be eclipsed by the results of other local models. Clearly, there is a lack of mechanisms for Indep that can guide the overall training toward such superior solutions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: An example of model recombination. Intuition of Model Recombination. Based on the Indep training results shown in Figure 1(b), Figure 2 illustrates the intuition of our model recombination method, where the FL training starts from the three local models (denoted by yellow diamonds in round 1) obtained in figure 1(b).Note that, at the beginning of round 1, two of the three local models are located in the sharp ravine. In other words, without model recombination, the training of such two local models may get stuck in the lower surrounded area. However, due to the weight adjustment by shuffling the layers among local models, we can find that the three recombined models (denoted by yellow squares) are sparsely scattered in the loss landscape, which enables the local training escape from local optima. According to[25,26], a small perturbation of the model weights can make it easier for local training to jump out of sharp ravines rather than flat valleys. In other words, the recombined models are more likely to converge toward flat valleys along the local training. For example, in the end of round 3, we can find that all three local models are located in the upper surrounded area, where their aggregated model has better generalization performance than the one achieved in Figure1(a).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "a) Framework and Workflow of FedMR (b) An example of Model Recombination", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Our FedMR approach", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: FedAvg vs. Indep. Independent Training. Based on the experimental settings presented in Section 5.1, we conducted the experiments to evaluate the effectiveness of each local model in Indep. The FL training is based on the ResNet-20 model and dataset CIFAR-10, where we set α = 0.5 for non-IID scenarios. Figures 4 compares Indepwith FedAvg from the perspectives of both test loss and inference accuracy. Due to the space limitation, for Indep here we only present the results of its four random local models (denoted by Model-1, Model-2, Model-3, and Model-4). To enable a fair comparison with FedAvg, although there is no aggregated global model in Indep, we considered the aggregated model of all its local models for each FL round, whose results are indicated by the notion \"IndepAggr\". From Figure4, we can find that all the local models in Indep can achieve both higher accuracy and lower loss than those of FedAvg, though their loss and accuracy curves fluctuate more sharply. Moreover, IndepAggr exhibits much worse performance than the other references. This is mainly because, according to the definition of Indep, each local model needs to traverse multiple clients along with the FL training processes, where the optimization directions of client models differ in the corresponding loss landscape.", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "with", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Cosine similarity of local models in FedMR.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Comparison of loss landscapes with different FL and client data settings.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 66Figure 6 compares the loss landscapes of final global models obtained by FedAvg and FedMR with different client data settings, respectively. We can find that, compared with FedMR counterparts, the global models trained by FedAvg are located in sharper solutions, indicating the generalization superiority of final global models achieved by FedMR.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Comparison of test losses of FedAvg and FedMR with different client data settings.", "figure_data": "", "figure_id": "fig_10", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 77Figure7compares the test losses for the global models of FedAvg and FedMR (without using two-stage training) within different IID and non-IID scenarios. Note that here the global models of FedMR are only for the purpose of fair comparison rather than local model initialization. We can observe that, due to the superiority in generalization, the models trained by FedMR outperform those by FedAvg for all the four cases.", "figure_data": "", "figure_id": "fig_11", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 Figure 8 :88Figure 8: Learning curves of FL methods based on the ResNet-20 model for CIFAR-100 dataset.", "figure_data": "", "figure_id": "fig_12", "figure_label": "88", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Comparison of FL methods using ResNet-20 model on CIFAR-10 dataset with α = 0.1.", "figure_data": "", "figure_id": "fig_13", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 99compares the learning trends between FedMR and six baselines for a non-IID scenario (α = 0.1) with both ResNet-20 model and CIFAR-10 dataset, where the numbers of activated clients are 5, 10, 20, 50, and 100, respectively. From Figure9, we can observe that FedMR achieves the best inference performance for all cases. When the number of activated clients increases, the convergence fluctuations reduce significantly. Please refer to Appendix C.2 for more results on IID scenarios.", "figure_data": "", "figure_id": "fig_14", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "16 Figure 10 :1610Figure 10: Learning curves for different partitioning strategies.", "figure_data": "", "figure_id": "fig_15", "figure_label": "1610", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: Learning curves for different two-stage training settings. Two-stage Training Scheme. To demonstrate the effectiveness of our proposed two-stage training scheme, we conducted experiments on CIFAR-10 dataset using ResNet-20-based and VGG-16-based FedMR, where the data on clients are non-IID distributed (α = 1.0). Figure 11 presents the learning trends of FedMR with five different two-stage training settings. Here, we use the notation \"FedMR-n\" to denote that the first stage involves n rounds of model aggregation-based local training to obtain a pre-trained global model, while the remaining rounds conduct local training based on our proposed model recombination-based method. From Figure 11, we can observe that the two-stage training-based FedMR methods (i.e., FedMR-50 and FedMR-100) achieve the best performance from", "figure_data": "", "figure_id": "fig_16", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Stage 4 :B. 1 ImplementationStage 2 (Stage 3 (Stage 4 (41234Once receiving the returned layers from other clients, our mechanism will recombine them with all the other layers in the buffers to form a new model. Note that the recombined model may be significantly different from the original model in Stage 1.Note that each FL training round can perform multiple times secure model recombination. Due to the randomness, it is hard for adversaries to figure out the sources of client model layers.Algorithm 2 presents the implementation of our secure model recombination mechanism. The input r s denotes the number of repeat times and Lines 2-26 presents the process of one time of secure model recombination. Since the number of sending layers for each client is random, in our implementation, we set a low bound n l and an upper bound n u . In Line 2, the client randomly generates n from [n l , n u ] at the beginning of each time of secure model recombination. Line 3 initializes layer buffers, where Buf l is a list of lists, which consists of len(w) lists. len(w) denotes the number of layers in w. Line 4 initializes the client buffer Buf c , which is used to store the host and type of received layers. Therefore, elements of Buf c are two tuples c, idx l , where c denotes a client and idx l denotes the index of a layer.Stage 1 (Lines 5-10): In Line 5, the client randomly selects n layers from its model. Lines 6-10 randomly send selected layers to activated clients. Line 7 randomly selects an activated client c from L a . Then, in Line 8, the client sends the i th layer in L layer to c. Line 9 removes the sent layer from layer buffers. Lines 11-14): When receiving a layer l j from a client c, Line 12 pushes the layer into the corresponding layer buffer and Line 13 pushes the client with the index of the received layer as a two-tuple c, j into Buf c . Lines 15-19): Line 15 gets an element c, j from Buf c . Line 16 randomly selects a layer l with index j from layer buffer Buf l [j]. Line 17 sends the selected layer l to c. Line 18 removes l from Buf l [j]. Lines 20-26): Lines 20-22 receive layers from activated clients. Note that such clients are the same as the clients selected in Line 7. Line 21 pushes the received layer into layer buffers. Lines 23-26 recombine layers in layer buffers to generate a new model. Note that, since the number and type of sent layers are equal to that of received layers, after Line 22, each list in Buf l has only one layer. Line 24 pulls the i th layer l from Buf l and Line 25 replaces the i th layer in w with l.", "figure_data": "", "figure_id": "fig_17", "figure_label": "41234", "figure_type": "figure" }, { "figure_caption": "3 : 4 :34Buf l ← [[w[1]], [w[2]], ..., [w[len(w)]]]; Buf c ← []; /* Stage 1 start */ 5:", "figure_data": "", "figure_id": "fig_18", "figure_label": "34", "figure_type": "figure" }, { "figure_caption": "while Receive layer l j from a client c do 21 : 23 :2123Buf l[j].append(l j ); for i = 1, 2, ..., len(w) do 24:l ← Buf l [i][0]; 25: w[i] ← l;Since clients received layers are encrypted by the public key from the cloud server, they cannot directly obtain the model parameters of other clients. Note that, only in Stage 2 of the first time of secure model recombination clients can distinguish the source of received layers. Since layers sent in Stage 3 are randomly selected, clients cannot distinguish the received layers in Stage 4 from senders or other clients. Therefore, even if there are multiple malicious clients colluding with the cloud, it can only distinguish the source of the n u layer at most for each client. Since models received by the cloud server are recombined, the cloud server cannot restore the original model of each client. Based on the above analysis, we suggest that each client set a small n u at the first time of secure recombination and conduct multiple times of secure recombination before uploading its model.B.2.2 Communication OverheadAt each time of secure model recombination, each client sends n ∈ [n l , n u ] layers to other clients in Stage 1 and receives n layers from other clients in Stage 4. Note that layers received in Stage 2 are from layers sent in Stage 1 and layers received in Stage 4 are from layers sent in Stage 3. Therefore the communication overhead of a time of secure model recombination can be presented as K k=1 2 n k j=1 size(l k,j ).", "figure_data": "", "figure_id": "fig_19", "figure_label": "2123", "figure_type": "figure" }, { "figure_caption": "Figure 13 -1315 compare the learning curves of FedMR with all the six baselines based on the CNN, ResNet-20, and VGG-16, respectively.", "figure_data": "", "figure_id": "fig_20", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 16 -1617 compare the learning curves of FedMR with all the six baselines with different numbers of activated clients based on the ResNet-20 model for CIFAR-10 dataset with IID scenario and non-IID scenario with α = 0.1, respectively.", "figure_data": "", "figure_id": "fig_21", "figure_label": "16", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Learning curves of different FL methods based on the CNN model.", "figure_data": "", "figure_id": "fig_22", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 17 :17Figure 17: Learning curves of different FL methods based on the ResNet-20 model for CIFAR-10 dataset with α = 0.1.", "figure_data": "", "figure_id": "fig_24", "figure_label": "17", "figure_type": "figure" }, { "figure_caption": "Local Model Training. Unlike conventional FL methods that conduct local training on clients starting from the same aggregated model, in each training round FedMR uses different recombined models (i.e., K models in the model list L m ) for the local training purpose. Note that, in the whole training phase, FedMR only uses K (K ≤ |S c |) models, since there are only K devices activated in each training round. Let w c r be the parameters of some model that is dispatched to the c th client in the r th training round. In the r th training round, we dispatch the i th model in L m to its corresponding client using w Based on the recombined model, FedMR conducts the local training on client L r [i] as follows:", "figure_data": "Lr[i] r= L m [i].Algorithm 1 details the implementation ofFedMR. Line 1 initializes the model list L m ,which includes K initial models. Lines 2-10performs rnd rounds of FedMR training. Ineach round, Line 3 selects K random clientsto participate the model training and createsa client list L r . Lines 4-7 conduct the localtraining on clients in parallel, where Line 5applies the local model L m [i] on client L r [i]for local training by using the function Clien-tUpdate, and Line 6 achieves a new local model after the local training. After the cloud server receives all the K local models, Line 8 uses the function ModelRcombine to recom-bine local models and generate K new local5: 6: 7: 8:v i r+1 ←ClientUpdate(Lm[i], Lr[i]) Lm[i] ← v i r+1 end for [w 1 r+1 , w 2 r+1 , ..., w K r+1 ] ←ModelRcombine(Lm)models, which are saved in L m as shown in Line 9. Finally, Lines 11-12 will report an op-timal global model that is generated based on L m . Note that the global model will be dis-patched by the cloud server to all the clients9: 10: end for Lm ← [w 1 r+1 , w 2 r+1 , ..., w K r+1 ] 11: w glb ← 1 K K i=1 w i rnd+1 12: return w glbfor the purpose of inference rather than local training. The following parts will detail the key compo-nents of FedMR. Since FedMR cannot adapt to the secure aggregation mechanism [27], to furtherprotect privacy, we present an extended secure recombination mechanism in Appendix B, whichenables the exchange of partial layers among clients before local training or model uploading toensure that the cloud server cannot directly obtain the gradients of each local model.", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Test accuracy comparison for both non-IID and IID scenarios using three DL models .12 ± 2.35 47.17 ± 1.65 49.12 ± 0.91 42.61 ± 2.65 49.27 ± 0.85 47.09 ± 0.97 54.22 ± 1.25 0.5 52.82 ± 0.91 53.59 ± 0.88 54.50 ± 0.44 53.56 ± 1.74 51.77 ± 0.73 54.00 ± 0.38 59.13 ± 0.65 1.0 54.78 ± 0.56 54.96 ± 0.60 56.75 ± 0.26 54.51 ± 1.24 55.38 ± 0.66 55.82 ± 0.73 61.10 ± 0.49 IID 57.64 ± 0.22 58.34 ± 0.15 59.98 ± 0.22 57.33 ± 0.30 58.71 ± 0.19 57.32 ± 0.21 62.07 ± 0.29 CF100 0.1 28.37 ± 1.10 28.11 ± 1.03 30.32 ± 1.05 28.15 ± 1.54 28.18 ± 0.58 28.63 ± 0.63 33.33 ± 0.87 0.5 30.01 ± 0.56 32.16 ± 0.50 33.49 ± 0.73 30.93 ± 0.49 29.55 ± 0.41 33.04 ± 0.41 36.96 ± 0.30 1.0 32.34 ± 0.65 32.78 ± 0.13 34.95 ± 0.58 31.46 ± 0.66 31.88 ± 0.65 32.92 ± 0.31 38.05 ± 0.24 IID 32.98 ± 0.20 33.39 ± 0.25 35.11 ± 0.23 32.39 ± 0.19 32.43 ± 0.20 34.97 ± 0.24 40.01 ± 0.11 FEM -81.67 ± 0.36 82.10 ± 0.61 81.65 ± 0.21 81.13 ± 0.39 81.95 ± 0.36 80.80 ± 0.40 82.73 ± 0.36 .11 ± 2.13 45.45 ± 3.42 50.46 ± 1.76 46.38 ± 2.66 42.71 ± 3.48 44.87 ± 1.65 62.09 ± 1.77 0.5 60.56 ± 0.95 59.52 ± 0.74 58.85 ± 0.85 60.47 ± 0.68 60.29 ± 0.68 59.55 ± 1.00 74.00 ± 0.32 1.0 62.99 ± 0.62 61.47 ± 0.66 61.63 ± 0.78 61.99 ± 0.68 63.81 ± 0.33 63.32 ± 0.71 76.92 ± 0.38 IID 67.12 ± 0.27 66.06 ± 0.22 65.20 ± 0.27 66.19 ± 0.22 65.89 ± 0.17 65.62 ± 0.23 77.94 ± 0.14 CF100 0.1 31.90 ± 1.16 33.00 ± 1.21 35.71 ± 0.62 32.91 ± 0.70 32.40 ± 1.45 34.34 ± 0.52 45.13 ± 1.05 0.5 42.45 ± 0.53 42.83 ± 0.54 42.33 ± 1.23 41.76 ± 0.22 42.72 ± 0.32 42.07 ± 0.39 54.73 ± 0.27 1.0 44.22 ± 0.36 44.35 ± 0.36 43.28 ± 0.61 42.92 ± 0.67 44.75 ± 0.57 43.29 ± 0.41 56.96 ± 0.31 IID 44.42 ± 0.18 45.16 ± 0.24 44.37 ± 0.19 46.13 ± 0.13 45.21 ± 0.19 43.59 ± 0.24 59.25 ± 0.35 FEM -78.47 ± 0.40 79.74 ± 0.54 76.14 ± 0.90 79.50 ± 0.46 79.56 ± 0.34 79.28 ± 0.42 81.27 ± 0.31 .79 ± 3.90 63.35 ± 4.31 64.18 ± 3.86 60.19 ± 3.73 66.52 ± 1.46 66.91 ± 1.83 74.38 ± 0.71 0.5 78.14 ± 0.67 77.70 ± 0.45 76.22 ± 1.37 77.41 ± 0.77 78.9 ± 0.39 78.82 ± 0.40 82.86 ± 0.37 1.0 78.55 ± 0.21 79.10 ± 0.28 76.99 ± 1.01 78.81 ± 0.41 79.75 ± 0.26 80.00 ± 0.37 84.45 ± 0.23 IID 80.02 ± 0.05 80.77 ± 0.22 78.80 ± 0.07 81.11 ± 0.12 80.00 ± 0.27 80.96 ± 0.12 85.87 ± 0.23 CF100 0.1 46.60 ± 1.45 45.88 ± 3.35 45.79 ± 1.77 42.74 ± 1.10 49.04 ± 0.63 48.04 ± 1.76 56.60 ± 0.83 0.5 55.86 ± 0.64 55.79 ± 0.56 55.30 ± 0.61 53.29 ± 0.79 56.40 ± 0.37 56.23 ± 0.34 65.04 ± 0.16 1.0 57.55 ± 0.51 57.40 ± 0.32 55.43 ± 0.45 54.67 ± 0.55 57.15 ± 0.27 57.95 ± 0.35 66.28 ± 0.34 IID 58.30 ± 0.23 58.49 ± 0.11 56.51 ± 0.08 57.39 ± 0.24 57.62 ± 0.18 58.14 ± 0.20 66.28 ± 0.11 FEM -84.22 ± 0.46 83.98 ± 0.48 82.65 ± 0.74 79.09 ± 0.42 84.69 ± 0.28 84.32 ± 0.36 85.36 ± 0.21", "figure_data": "Model Datas.Heter. Set.FedAvgFedProxSCAFFOLDTest Accuracy (%) MOONFedGenCluSampFedMRCNN 0.1 46ResNet CF10 CF10 CF10 0.1 45VGG 0.1 63", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Let K be the number of activated clients in each FL training round. Since the cloud server needs to send K recombined models to K clients and receive K trained local models from K clients in an FL round, the communication overhead of FedMR is 2K models in each round, which is the same as FedAvg, FedProx, and CluSamp. Since the cloud server needs to dispatch an extra global control variable to each client and clients also need to update and upload these global control variables to the cloud server, the communication overhead of SCAFFOLD is 2K models plus 2K global control variables in each FL training round. Unlike the other FL methods, the cloud server of FedGen needs to dispatch an additional built-in generator to the selected clients, the communication overhead of FedGen in each FL training round is 2K models plus K generators. Base on the above analysis, we can find that FedMR does not cause any extra communication overhead. Therefore, FedMR requires the least communication overhead among all the investigated FL methods in each FL training round. Note that, as shown in Figure8, although FedMR requires more rounds to achieve the highest test accuracy, to achieve the highest accuracy of other FL methods, FedMR generally requires less FL rounds. In other words, to achieve the same test accuracy, FedMR requires much fewer overall communication overhead.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Ming Hu; Zhihao Yue; Zhiwei Ling; Yihao Huang; Cheng Chen; Xian Wei; Yang Liu; Mingsong Chen
[ { "authors": "Brendan Mcmahan; Eider Moore; Daniel Ramage; Seth Hampson; Blaise Aguera Y Arcas", "journal": "", "ref_id": "b0", "title": "Communication-efficient learning of deep networks from decentralized data", "year": "2017" }, { "authors": "Xinqian Zhang; Ming Hu; Jun Xia; Tongquan Wei; Mingsong Chen; Shiyan Hu", "journal": "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems", "ref_id": "b1", "title": "Efficient federated learning for cloud-based aiot applications", "year": "2020" }, { "authors": "Moayad Haya Elayan; Mohsen Aloqaily; Guizani", "journal": "IEEE Internet of Things Journal", "ref_id": "b2", "title": "Sustainability of healthcare data analysis iot-based systems using deep federated learning", "year": "2021" }, { "authors": "Quande Liu; Cheng Chen; Jing Qin; Qi Dou; Pheng-Ann Heng", "journal": "", "ref_id": "b3", "title": "Feddg: Federated domain generalization on medical image segmentation via episodic learning in continuous frequency space", "year": "2021" }, { "authors": "Qian Yang; Jianyi Zhang; Weituo Hao; Gregory P Spell; Lawrence Carin", "journal": "", "ref_id": "b4", "title": "Flop: Federated learning on medical datasets using partial networks", "year": "2021" }, { "authors": "David Leroy; Alice Coucke; Thibaut Lavril; Thibault Gisselbrecht; Joseph Dureau", "journal": "", "ref_id": "b5", "title": "Federated learning for keyword spotting", "year": "2019" }, { "authors": "Naman Agarwal; Peter Kairouz; Ziyu Liu", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "The skellam mechanism for differentially private federated learning", "year": "2021" }, { "authors": "Bo Zhao; Peng Sun; Tao Wang; Keyu Jiang", "journal": "", "ref_id": "b7", "title": "Fedinv: Byzantine-robust federated learning by inversing local model updates", "year": "2022" }, { "authors": "Chendi Zhou; Ji Liu; Juncheng Jia; Jingbo Zhou; Yang Zhou; Huaiyu Dai; Dejing Dou", "journal": "", "ref_id": "b8", "title": "Efficient device scheduling with multi-job federated learning", "year": "2022" }, { "authors": "Peter Kairouz; Brendan Mcmahan; Brendan Avent; Aurélien Bellet; Mehdi Bennis; Nitin Arjun; Kallista Bhagoji; Zachary Bonawitz; Graham Charles; Rachel Cormode; Cummings", "journal": "Foundations and Trends® in Machine Learning", "ref_id": "b9", "title": "Advances and open problems in federated learning", "year": "2021" }, { "authors": "Hao Wang; Zakhary Kaplan; Di Niu; Baochun Li", "journal": "", "ref_id": "b10", "title": "Optimizing federated learning on non-iid data with reinforcement learning", "year": "2020" }, { "authors": "Durmus Alp; Emre Acar; Yue Zhao; Ramon Matas; Matthew Mattina; Paul Whatmough; Venkatesh Saligrama", "journal": "", "ref_id": "b11", "title": "Federated learning based on dynamic regularization", "year": "2020" }, { "authors": "Ming Xie; Guodong Long; Tao Shen; Tianyi Zhou; Xianzhi Wang; Jing Jiang; Chengqi Zhang", "journal": "", "ref_id": "b12", "title": "Multi-center federated learning", "year": "2020" }, { "authors": "Praneeth Sai; Satyen Karimireddy; Mehryar Kale; Sashank Mohri; Sebastian Reddi; Ananda Stich; Suresh Theertha", "journal": "", "ref_id": "b13", "title": "Scaffold: Stochastic controlled averaging for federated learning", "year": "2020" }, { "authors": "Yutao Huang; Lingyang Chu; Zirui Zhou; Lanjun Wang; Jiangchuan Liu; Jian Pei; Yong Zhang", "journal": "", "ref_id": "b14", "title": "Personalized cross-silo federated learning on non-iid data", "year": "2021" }, { "authors": "Tian Li; Anit Kumar Sahu; Manzil Zaheer; Maziar Sanjabi; Ameet Talwalkar; Virginia Smith", "journal": "Proceedings of Machine Learning and Systems", "ref_id": "b15", "title": "Federated optimization in heterogeneous networks", "year": "2020" }, { "authors": "Tao Lin; Lingjing Kong; Sebastian U Stich; Martin Jaggi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b16", "title": "Ensemble distillation for robust model fusion in federated learning", "year": "2020" }, { "authors": "Zhuangdi Zhu; Junyuan Hong; Jiayu Zhou", "journal": "", "ref_id": "b17", "title": "Data-free knowledge distillation for heterogeneous federated learning", "year": "2021" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Advances in neural information processing systems", "ref_id": "b18", "title": "Simplifying neural nets by discovering flat minima", "year": "1994" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural computation", "ref_id": "b19", "title": "Flat minima", "year": "1997" }, { "authors": "Lei Wu; Zhanxing Zhu", "journal": "", "ref_id": "b20", "title": "Towards understanding generalization of deep learning: Perspective of loss landscapes", "year": "2017" }, { "authors": "Jungmin Kwon; Jeongseop Kim; Hyunseo Park; In Kwon Choi", "journal": "PMLR", "ref_id": "b21", "title": "Asam: Adaptive sharpnessaware minimization for scale-invariant learning of deep neural networks", "year": "2021" }, { "authors": "Cheng Chen; Ziyi Chen; Yi Zhou; Bhavya Kailkhura", "journal": "", "ref_id": "b22", "title": "Fedcluster: Boosting the convergence of federated learning via cluster-cycling", "year": "2020" }, { "authors": "Yann Fraboni; Richard Vidal; Laetitia Kameni; Marco Lorenzi", "journal": "", "ref_id": "b23", "title": "Clustered sampling: Lowvariance and improved representativity for clients selection in federated learning", "year": "2021" }, { "authors": "Moritz Hardt; Ben Recht; Yoram Singer", "journal": "PMLR", "ref_id": "b24", "title": "Train faster, generalize better: Stability of stochastic gradient descent", "year": "2016" }, { "authors": "Dongxian Wu; Shu-Tao Xia; Yisen Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b25", "title": "Adversarial weight perturbation helps robust generalization", "year": "2020" }, { "authors": "Keith Bonawitz; Vladimir Ivanov; Ben Kreuter; Antonio Marcedone; Brendan Mcmahan; Sarvar Patel; Daniel Ramage; Aaron Segal; Karn Seth", "journal": "", "ref_id": "b26", "title": "Practical secure aggregation for privacy-preserving machine learning", "year": "2017" }, { "authors": "Xiang Li; Kaixuan Huang; Wenhao Yang; Shusen Wang; Zhihua Zhang", "journal": "", "ref_id": "b27", "title": "On the convergence of fedavg on non-iid data", "year": "2020" }, { "authors": "Qinbin Li; Bingsheng He; Dawn Song", "journal": "", "ref_id": "b28", "title": "Model-contrastive federated learning", "year": "2021" }, { "authors": "Alex Krizhevsky", "journal": "", "ref_id": "b29", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Sebastian Caldas; Sai Meher; Karthik Duddu; Peter Wu; Tian Li; Jakub Konečnỳ; Brendan Mcmahan; Virginia Smith; Ameet Talwalkar", "journal": "", "ref_id": "b30", "title": "Leaf: A benchmark for federated settings", "year": "2018" }, { "authors": "Tzu-Ming Harry Hsu; Hang Qi; Matthew Brown", "journal": "", "ref_id": "b31", "title": "Measuring the effects of non-identical data distribution for federated visual classification", "year": "2019" }, { "authors": " Torchvisionmodel", "journal": "", "ref_id": "b32", "title": "Models and pre-trained weight", "year": "2019" }, { "authors": "Sebastian Urban; Stich ", "journal": "", "ref_id": "b33", "title": "Local sgd converges fast and communicates little", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 120.92, 72.43, 284.28, 93.72 ], "formula_id": "formula_0", "formula_text": "… … ! ! \" ! # \" ! $ \" ! ! ! # ! $ 𝑙𝑖𝑠𝑡 ! 𝑙𝑖𝑠𝑡 \" 𝑙𝑖𝑠𝑡 # 𝑙𝑖𝑠𝑡 $(" }, { "formula_coordinates": [ 5, 127.24, 119.31, 362.35, 32.02 ], "formula_id": "formula_1", "formula_text": "Lr[i] r+1 = w Lr[i] r -η∇f Lr[i] (w Lr[i] r ), s.t., f Lr[i] (w Lr[i] r ) = 1 |D Lr[i] | |D Lr [i] | j=1 (w Lr[i] r ; x j ; y j )," }, { "formula_coordinates": [ 5, 138.58, 161.71, 366.67, 13.09 ], "formula_id": "formula_2", "formula_text": "Lr[i] r indicates parameters of the trained local model, D Lr[i] denotes the dataset of client L r [i]," }, { "formula_coordinates": [ 5, 261.52, 199.87, 99.52, 12.78 ], "formula_id": "formula_3", "formula_text": "L m using L m [i] = v Lr[i]" }, { "formula_coordinates": [ 5, 107.64, 587.14, 398.1, 54.05 ], "formula_id": "formula_4", "formula_text": "Assumption 4.1. For i ∈ {1, 2, • • • , K}, f i is L-smooth satisfying ||∇f i (x)-∇f i (y)|| ≤ L 2 ||x-y||. Assumption 4.2. For i ∈ {1, 2, • • • , K}, f i is µ-strongly convex satisfying ||∇f i (x) -∇f i (y)|| ≥ µ 2 ||x -y||, where µ ≥ 0. Assumption 4.3." }, { "formula_coordinates": [ 5, 108, 641.64, 397.24, 23.17 ], "formula_id": "formula_5", "formula_text": "-∇f k (w)|| 2 ≤ θ 2 , E||∇f k (w; ξ)|| 2 ≤ G 2 ," }, { "formula_coordinates": [ 6, 190.23, 102.13, 54.01, 30.55 ], "formula_id": "formula_6", "formula_text": "K k=1 v k r = K k=1" }, { "formula_coordinates": [ 6, 284.21, 102.13, 137.56, 30.55 ], "formula_id": "formula_7", "formula_text": "K k=1 ||v k r -x|| 2 = K k=1 ||w k r -x|| 2 ." }, { "formula_coordinates": [ 6, 220.95, 204.71, 170.1, 16.47 ], "formula_id": "formula_8", "formula_text": "E[f (w T )] -f ≤ L 2µ(T + γ) [ 4B µ + µ(γ + 1) 2 ∆1]," }, { "formula_coordinates": [ 6, 134.47, 226.41, 200, 12.72 ], "formula_id": "formula_9", "formula_text": "B = 10LΓ + 4(E -1) 2 G 2 , w T = k = 1 K w k T ." }, { "formula_coordinates": [ 13, 246.16, 160.38, 257.84, 12.69 ], "formula_id": "formula_10", "formula_text": "v k t+1 = w k t -η t ∇f k (w k t , ξ k t ),(1)" }, { "formula_coordinates": [ 13, 220.65, 186.08, 160.09, 23.97 ], "formula_id": "formula_11", "formula_text": "w k t+1 = v k t+1 , if E t + 1 RM (v k t+1 ), if E | t + 1" }, { "formula_coordinates": [ 13, 225.25, 288.23, 278.75, 30.55 ], "formula_id": "formula_12", "formula_text": "K k=1 v k t+1 = K k=1 RM (v k t+1 ) = K k=1 w k t+1 ,(2)" }, { "formula_coordinates": [ 13, 228.84, 326.06, 275.16, 30.55 ], "formula_id": "formula_13", "formula_text": "K k=1 ||v k t+1 -x|| 2 = K k=1 ||w k t+1 -x|| 2 ,(3)" }, { "formula_coordinates": [ 13, 238.75, 403.64, 265.25, 30.55 ], "formula_id": "formula_14", "formula_text": "v t = 1 K K k=1 v k t , w t = 1 K k k=1 w k t .(4)" }, { "formula_coordinates": [ 13, 314.95, 447.58, 78.06, 12.19 ], "formula_id": "formula_15", "formula_text": "g k t = ∇f k (w k t ; ξ k t )." }, { "formula_coordinates": [ 13, 267.64, 491, 235, 16.58 ], "formula_id": "formula_16", "formula_text": "v k t = L 1 ⊕L 2 ⊕...⊕L n . Let L i = [p v k t (i,0) , p v k t (i,1) , ..., p v k t (i,|Li|) ]" }, { "formula_coordinates": [ 13, 209.82, 535.65, 294.18, 81.43 ], "formula_id": "formula_17", "formula_text": "K k=1 ||v k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p v k t (i,j) -p x (i,j) || 2 (5) K k=1 ||w k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p w k t (i,j) -p x (i,j) || 2(6)" }, { "formula_coordinates": [ 13, 228.94, 660.53, 275.06, 30.55 ], "formula_id": "formula_18", "formula_text": "∀ i∈[1,n],j∈[1,|Li|] K k=1 p v k t (i,j) = K k=1 p w k t (i,j)(7)" }, { "formula_coordinates": [ 13, 209.19, 707.17, 294.81, 16.97 ], "formula_id": "formula_19", "formula_text": "∀ k∈[1,K],i∈[1,n],j∈[1,|Li|] ∃ q∈[1,K] {p v k t (i,j) = p w q t (i,j) } (8) ∀ k∈[1,K],i∈[1,n],j∈[1,|Li|] ∃ q∈[1,K] {p w k t (i,j) = p v q t (i,j) }(9)" }, { "formula_coordinates": [ 14, 210.93, 149.97, 293.07, 105.35 ], "formula_id": "formula_20", "formula_text": "K k=1 ||v k t -x|| 2 = K k=1 n i=1 |Li| j=1 ||p v k t (i,j) -p x (i,j) || 2 = K k=1 n i=1 |Li| j=1 ||p w k t (i,j) -p x (i,j) || 2 = K k=1 ||w k t -x|| 2(10)" }, { "formula_coordinates": [ 14, 128.17, 338.44, 339.05, 30.55 ], "formula_id": "formula_21", "formula_text": "E||v t+1 -w || 2 ≤ 1 K K k=1 (1 -µη t )||v k t -w || 2 + 1 K K k=1 ||w k t -w k t0 || 2 + 10η 2 t LΓ." }, { "formula_coordinates": [ 14, 161.62, 398.71, 342.38, 100.91 ], "formula_id": "formula_22", "formula_text": "||v t+1 -w || 2 ≤ 1 K K k=1 ||v k t+1 -w || 2 = 1 K K k=1 ||v k t -η t g k t -w || 2 = 1 K K k=1 (||v k t -w || 2 -2η t w k t -w , g k t + η 2 t ||g k t || 2 )(11)" }, { "formula_coordinates": [ 14, 123.77, 506.1, 380.23, 32.36 ], "formula_id": "formula_23", "formula_text": "B 1 = -2η t w k t -w , g k t and B 2 = η 2 t K k=1 ||g k t || 2 . According to Assumption 4.2, we have B 1 ≤ -2η t (f k (w k t ) -f k (w )) -µη t ||w k t -w || 2(12)" }, { "formula_coordinates": [ 14, 252.44, 560.46, 251.56, 12.69 ], "formula_id": "formula_24", "formula_text": "B 2 ≤ 2η 2 t L(f k (w k t ) -f k )(13)" }, { "formula_coordinates": [ 14, 108, 601.54, 396, 123.8 ], "formula_id": "formula_25", "formula_text": "||v t+1 -w || 2 ≤ 1 K K k=1 [(1-µη t )||v k t -w || 2 -2η t (f k (w k t )-f k (w ))+2η 2 t L(f k (w k t )-f k )] (14) Let C = 1 K K k=1 [-2η t (f k (w k t ) -f k (w )) + 2η 2 t L(f k (w k t ) -f k )]. We have C = -2η t K K k=1 (f k (w k t ) -f k (w )) + 2η 2 t L K K k=1 (f k (w k t ) -f k ) = - 2η t (1 -η t L) K K k=1 (f k (w k t ) -f ) + 2η 2 t L K K k=1 (f -f k )(15)" }, { "formula_coordinates": [ 15, 108, 72.19, 396, 48.08 ], "formula_id": "formula_26", "formula_text": "Let Γ = f -1 K K k=1 f k and φ = 2η t (1 -Lη t ). We have C = - φ K K k=1 (f k (w k t ) -f ) + 2η 2 t LΓ(16)" }, { "formula_coordinates": [ 15, 108, 123.24, 396, 48.08 ], "formula_id": "formula_27", "formula_text": "Let D = -1 K K k=1 (f k (w k t ) -f ), E | t 0 and t -t 0 ≤ E. We have D = - 1 K K k=1 (f k (w k t ) -f k (w k t0 ) + f k (w k t0 ) -f )(17)" }, { "formula_coordinates": [ 15, 144.63, 187.52, 359.37, 65.73 ], "formula_id": "formula_28", "formula_text": "D ≤ 1 2K K k=1 (η t ||∇f k (w k t0 )|| 2 + 1 η t ||w k t -w k t0 || 2 ) - 1 K K k=1 (f k (w k t0 ) -f ) ≤ 1 2K K k=1 [2η t L(f k (w k t0 ) -f k ) + 1 η t ||w k t -w k t0 || 2 ] - 1 K K k=1 (f k (w k t0 ) -f )(18)" }, { "formula_coordinates": [ 15, 119.2, 277.88, 384.8, 206.45 ], "formula_id": "formula_29", "formula_text": "C ≤ φ 2K K k=1 [2η t L(f k (w k t0 ) -f k ) + 1 η t ||w k t -w k t0 || 2 ] - φ K K k=1 (f k (w k t0 ) -f ) + η 2 t LΓ = φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + 2η 2 t L)Γ + φ K K k=1 (f -f k (w k t0 )) ≤ φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + 2η 2 t L)Γ + φ K K k=1 (f -f k ) ≤ φ 2η t K K k=1 ||w k t -w k t0 || 2 + (φη t L + φ + 2η 2 t L)Γ ≤ 1 K K k=1 ||w k t -w k t0 || 2 + (2η 2 t L + 2η t )Γ ≤ 1 K K k=1 ||w k t -w k t0 || 2 + 10η 2 t LΓ(19)" }, { "formula_coordinates": [ 15, 225.03, 550.52, 163.14, 30.55 ], "formula_id": "formula_30", "formula_text": "1 K K k=1 ||w k t -w k t0 || 2 ≤ 4η 2 t (E -1) 2 G 2 ." }, { "formula_coordinates": [ 15, 185.06, 606.82, 242.58, 118.71 ], "formula_id": "formula_31", "formula_text": "1 K K k=1 ||w k t -w k t0 || 2 = 1 K K k=1 || t0+E-1 t=t0 η t ∇f a1 (w a1 t ; ξ a1 t )|| 2 ≤ (t -t 0 ) t0+E-1 t=t0 η 2 t G 2 ≤ (E -1) t0+E-1 t=t0 η 2 t G 2 ≤ 4η 2 t (E -1) 2 G 2 . A.4 Proof of Theorem 1 Proof. Let ∆ t = ||w t -w || 2 and ∆ t = 1 K K k=1 ||w k t -w || 2 ." }, { "formula_coordinates": [ 16, 226.17, 157.43, 159.67, 12.89 ], "formula_id": "formula_32", "formula_text": "∆ t+1 ≤ ∆ t+1 ≤ (1 -µη t )∆ t + η 2 t B," }, { "formula_coordinates": [ 16, 249.08, 197.86, 113.84, 10.81 ], "formula_id": "formula_33", "formula_text": "B = 10LΓ + 4(E -1) 2 G 2 ." }, { "formula_coordinates": [ 16, 108, 219.82, 396, 77.11 ], "formula_id": "formula_34", "formula_text": "β > 1 µ , γ > 0 such that η t ≤ min{ 1 µ , 1 4L } = 1 4L and η t ≤ 2η t+E . Let ψ = max{ β 2 B µβ-1 , (γ + 1)∆ 1 }, we firstly proof ∆ t ≤ ψ t+γ . For t = 1, ∆ 1 = ∆ 1 = γ + 1 γ + 1 ∆ 1 ≤ ψ γ + 1(20)" }, { "formula_coordinates": [ 16, 159.67, 307.33, 344.33, 165.65 ], "formula_id": "formula_35", "formula_text": "∆ t ≤ ∆ t ≤ ψ γ+1 , ∆ t+1 ≤ ∆ t+1 ≤ (1 -µη t )∆ t + η 2 t B = (1 - µβ t + γ ) ψ t + γ + β 2 B (t + γ) 2 ≤ t + γ -1 (t + γ) 2 ψ + [ β 2 B (t + γ) 2 - µβ -1 (t + γ) 2 ψ] ≤ t + γ -1 (t + γ) 2 ψ + [ β 2 B (t + γ) 2 - µβ -1 (t + γ) 2 β 2 B µβ -1 ] ≤ ψ t + 1 + γ .(21)" }, { "formula_coordinates": [ 16, 280.13, 500.41, 223.87, 22.31 ], "formula_id": "formula_36", "formula_text": "∆ t ≤ ψ t + γ .(22)" }, { "formula_coordinates": [ 16, 234.18, 555.64, 269.82, 22.31 ], "formula_id": "formula_37", "formula_text": "E[f (w t )] -f ≤ L 2 ∆ t ≤ ψL 2(t + γ)(23)" }, { "formula_coordinates": [ 16, 148.91, 623.85, 355.09, 23.89 ], "formula_id": "formula_38", "formula_text": "ψ = max{ β 2 B µβ -1 , (γ + 1)∆ 1 } ≤ β 2 B µβ -1 + (γ + 1)∆ 1 ≤ 4B µ 2 + (γ + 1)∆ 1(24)" }, { "formula_coordinates": [ 16, 142.64, 675.72, 361.36, 22.31 ], "formula_id": "formula_39", "formula_text": "E[f (w t )] -f ≤ L 2(t + γ) [ 4B µ 2 + (γ + 1)∆ 1 ] = L 2µ(t + γ) [ 4B µ + µ(γ + 1) 2 ∆ 1 ](25)" }, { "formula_coordinates": [ 19, 116.76, 357.77, 387.24, 30.72 ], "formula_id": "formula_41", "formula_text": "E K k=1 2 n k j=1 size(l k,j ) = (n u + n l )K len(w) × size(w) ≤ 2n u K len(w) × size(w) ≤ 2K × size(w).(27)" } ]
10.18653/v1/S19-2007
2023-10-08
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b40", "b6", "b9", "b0", "b7", "b4", "b17", "b27", "b3", "b25", "b21" ], "table_ref": [], "text": "Interactive live streaming services such as Twitch1 and YouTube Live2 have emerged as one of the most popular and widely-used social platforms. Unfortunately, streamers on these platforms struggle with an increasing volume of toxic comments and norm-violating behavior. 3 While there has been extensive research on mitigating similar problems for online conversations across various platforms such as Twitter (Waseem and Hovy, 2016;Davidson et al., 2017;Founta et al., 2018;Basile et al., 2019; Test (2) relationships between chats are less clearly defined. Such differences make chats in the synchronous domain more difficult to be moderated by existing approaches. ElSherief et al., 2021), Reddit (Datta and Adar, 2019;Kumar et al., 2018;Park et al., 2021), Stackoverflow (Cheriyan et al., 2017) and Github (Miller et al., 2022), efforts that extend them to live streaming platforms have been absent. In this paper, we study unique characteristics of comments in livestreaming services and develop new datasets and models for appropriately using contextual information to automatically moderate toxic content and norm violations.\nConversations in online communities studied in previous work are asynchronous: utterances are grouped into threads that structurally establish conversational context, allowing users to respond to prior utterances without time constraints. The lack of time constraints allows users to formulate longer and better thought-out responses and more easily reference prior context.\nOn the other hand, conversations on live streaming platforms are synchronous, i.e. in real-time, as utterances are presented in temporal order without a thread-like structure. Context is mostly established by consecutive utterances (Li et al., 2021). The transient nature of live-stream utterances encourages fast responses, and encourages producing multiple short comments that may be more prone to typos (70% of comments are made up of < 4 words). Figure 1 shows an illustration of the contrasting temporal and length patterns between the asynchronous and synchronous platforms.\nOwing to these different characteristics, we find that previous approaches for detecting norm violations are ineffective for live-streaming platforms.\nTo address this limitation, we present the first NLP study of detecting norm violations in live-stream chats. We first establish norms of interest by collecting 329 rules from Twitch streamers' channels and define 15 different fine-grained norm categories through an iterative coding process. Next, we collect 4,583 moderated chats and their corresponding context from Twitch live streams and annotate them with these norm categories ( §2.1- §2.3). With our data, we explore the following research questions: (1) How are norm violations in live-stream chats, i.e. synchronous conversations, different from those in previous social media datasets, i.e. asynchronous conversations?;\n(2) Are existing norm violation or toxicity detection models robust to the distributional shift between the asynchronous and synchronous platforms? ( §3.1, §3.3); and (3) Which features (e.g., context and domain knowledge) are important for detecting norm violation in synchronous conversations? ( §3.2) From our explorations, we discover that (1) livestream chats have unique characteristics and norm violating behavior that diverges from those in previous toxicity and norm-violation literature; (2) existing models for moderation perform poorly on detecting norm violations in live-stream chats; and\n(3) additional information, such as chat and video context, are crucial features for identifying norm violations in live-stream chats. We show that incorporating such information increases inter-annotator agreement for categorizing moderated content and that selecting temporally proximal chat context is crucial for enhancing the performance of norm violation detection models in live-stream chats." }, { "figure_ref": [ "fig_1" ], "heading": "NormVio-RT", "publication_ref": [], "table_ref": [], "text": "To investigate norm-violations in live-stream chat, we first collect Norm Violations in Real-Time Conversations (NormVio-RT), which contains 4,583 norm-violating comments on Twitch that were moderated by channel moderators. 4 An overview of our data collection procedure is illustrated in Figure 2. 4 Please contact the authors for the anonymized study data. We first select 200 top Twitch streamers and collect moderated comments from their streamed sessions ( §2.1). To understand why these chats are moderated, we collect chat rules from these streamers and aggregate them to define coarse and fine-grained norm categories ( §2.2). We design a three-step annotation process to determine the impact of the chat history, video context, and external knowledge on labeling decisions ( §2.3). Lastly, we present analysis of the collected data ( §2.4)." }, { "figure_ref": [], "heading": "Data Collection", "publication_ref": [], "table_ref": [], "text": "We collected data using the Twitch API and IRC 5 from the streamers with videos that are available for download among the top 200 Twitch streamers as of June 2022 6 . We specifically looked for comments that triggered a moderation event during a live stream (e.g. user ban, user timeout), and collected the moderated comment and the corresponding video and chat logs up to two minutes prior to the moderation event. Logs of moderated events from August 22, 2022 to September 3, 2022 were collected. We excluded comments that were moderated within less than 1 second of being posted, as they are likely to have been moderated by bots rather than humans." }, { "figure_ref": [], "heading": "Norm Categorization", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Twitch streamers can set their own rules for their channels, and these channel-specific rules are essential for understanding why comments were moderated. We first collect 329 rules from the top 200 Twitch streamers' channels. Next, following Fiesler et al. ( 2018), we take an iterative coding process such that the authors of this paper individually code for rule types with certain categories, come together to determine differences and then repeat the coding process individually. With this process, we aggregated similar rules into 15 different finegrained level norm categories (e.g., controversial topics, begging) and cluster multiple fine-grained categories into 8 different coarse-grained norm categories (e.g., off-topic). To better understand the targets of offensive comments in the HIB (Harassment, Intimidation, Bullying) class, we added an additional dimension to consider whether the target is the broadcaster (streamer), participants in the channel (e.g., moderators and viewers), or someone not directly involved in the broadcast. We asked annotators to assign \"Incivility\" to cases where annotators do not believe that a specific pre-defined rule type has been violated although moderated. Examples of \"Incivility\" are provided in Appendix A.4.\nTable 1 shows the resulting norm categories and corresponding fine-grained norms with examples." }, { "figure_ref": [], "heading": "Violated Norm Type Annotation", "publication_ref": [], "table_ref": [], "text": "We recruited three annotators who are fluent in English and spend at least 10 hours a week on live streaming platforms to ensure that annotators understood live streaming content and conventions.\nTheir fluency was verified through several rounds of pilot annotation work. Internal auditors continuously conducted intermittent audits to ensure that annotators fully understood the guidelines.\nAnnotators were asked to annotate each moderation event (i.e. moderated comment) with the Lastly, to examine how much external knowledge matters in understanding comments on live streaming platforms, we asked annotators to (1) indicate whether external knowledge is necessary to understand why a comment triggered a moderation event and if so (2) describe what that knowledge is. We focus on two types of external knowledge: platform-and streamer-specific. Platform-specific knowledge includes the implicit meaning of particular emojis, emotes, and slang that are commonly " }, { "figure_ref": [], "heading": "Data Statistics and Analysis", "publication_ref": [ "b30", "b42", "b27" ], "table_ref": [ "tab_5", "tab_7", "tab_5", "tab_5" ], "text": "General Observations We identified three characteristics that distinguish real-time live-streaming chat from other domains. First, the majority of comments are very short; 70% of comments are made up of < 4 words. Additionally, they are often very noisy due to the real-time nature of communication, which leads to a high number of typos, abbreviations, acronyms, and slang in the comments. Lastly, some comments use unusual visual devices such as ASCII art and \"all caps\", to make them more noticeable. This is because each comment is visible only for a short time in popular streams (on average, there are around 316 chats per minute for the streamers in our data). The chat window in live streaming platforms can only display a limited number of comments, so viewers are incentivized to use visual devices to draw the streamer's attention in these fast-paced conditions.\nFalse positives in data. We find that the \"Incivility\" case contains many false positives, as they include cases that seem to have been moderated for no particular reason. We asked annotators to put all miscellaneous things into the \"Incivility\" category, and also to mark as \"Incivility\" if they could not identify any reason for the moderation. We found that many cases are not identifiable, as shown in Table 3. It is natural that many cases are non-identifiable in stage 1, as annotators are only given the moderated comment and no context. However, the 7.45% non-identifiable cases that remain even after stage 3 could be false positives, or they could be cases where the moderation event occurred more than two minutes after a problematic comment was made.\nContext improves inter-annotator agreement.\nInterestingly, providing context helps mitigate annotator bias, as shown by the increase in interannotator agreement from stage 1 to stages 2 and 3 in Table 4. Here, the exact match determines whether all three annotators have exactly the same rules; partial match determines whether there is at least one intersection rule between three annotators; and majority vote chooses the rule types that were selected by at least two people. Also, non-identifiable and disagreement cases drop significantly when the contexts are given as shown in Table 3. Similarly for determining rule types, context also helps annotators identify targets for HIB and reduces inconsistencies between annotators. Our observation emphasizes the importance of context in synchronous communication and differs from previous findings that context-sensitive toxic content is rare in asynchronous communication (Pavlopoulos et al., 2020;Xenos et al., 2021). Analysis details are in Appendix A.2.\nExternal knowledge helps annotations. To investigate the impact of external knowledge on annotators' labeling decisions, we compare annotations made with and without external knowledge provided. For examples with knowledge statements, we expect to see differences in annotation if external knowledge is necessary to comprehend why they were moderated. Statistics show that 296 examples (6.6%) require knowledge, with 183 examples requiring streamer knowledge and 187 examples requiring platform knowledge. Note that there are some examples require both. Details of statistics and examples are presented in Appendix A.3.\nNorm Category Distribution Table 3 shows the norm category distribution of streamers' rules and the moderated comments. While the categories are not directly comparable to the ones defined in Nor-mVio for Reddit (Park et al., 2021), we identified a few similar patterns. First, in both domains, Harassment and Incivility (i.e., Discrimination, HIB, Incivility) take up a significant portion of the entire set of norm violations. Also, the two domains show a similar pattern where rules for Off-Topic, Inappropriate Contents, and Privacy exist but are relatively less enforced in practice. However, we also found that the two domains differ in various ways. For example, Spam and Meta-Rules cover significantly higher portions of both rules and moderated comments on Twitch than on Reddit. On the other hand, there are fewer rules about content on Twitch, which implies that streamers are less concerned about the content of the comments than Reddit community moderators. As our data shows that norm-violating comments on live chats exhibit distinctive rules and patterns, it suggests that the existing norm violation detection systems may not perform well without domain adaptation to account for these distributional differences. We examine this hypothesis empirically in the following section and suggest appropriate modeling adjustments to better detect toxicity for real-time comments." }, { "figure_ref": [], "heading": "Toxicity Detection in Live-stream Chat", "publication_ref": [], "table_ref": [], "text": "In this section, we first check whether norm violation and toxicity detection models are robust to the distributional shift from asynchronous con- versations to synchronous conversations and vice versa, and identify how important the context or domain knowledge are for detecting toxicity and norm violation in synchronous conversations." }, { "figure_ref": [], "heading": "Performance of Existing Frameworks.", "publication_ref": [ "b20", "b22", "b12" ], "table_ref": [ "tab_18" ], "text": "To examine the difference in toxicity detection between asynchronous and synchronous communication, we investigate whether existing toxicity detection models are effective for synchronous communication. We evaluate the performance of four existing tools on NormVio-RT: Google's Perspective API (Lees et al., 2022) 7 , OpenAI content filter8 , OpenAI moderation (Markov et al., 2022) 9 , and a RoBERTa-large model fine-tuned on machine-generated toxicity dataset called Toxi-Gen (Hartvigsen et al., 2022). We only use examples from the discrimination and HIB categories in NormVio-RT, as they are most similar to the label space that the existing models are trained for (e.g., hateful content, sexual content, violence, self-harm, and harassment). Categories are determined based on the stage 1 consolidated labels, as we do not provide any context to the model. Additionally, we select an equal number of random chats from the collected stream to construct negative examples.\nTo ensure the quality of negative examples, we only select chats that are not within two minutes prior to any moderation event as they are less likely to contain norm violating chats. We also only select chats from users who have never been moderated in our data. To obtain the predictions from the models, we check whether toxicity score is greater than or equal to 0.5 for Perspective API, and for OpenAI, check the value of the \"flagged\" field which indicates whether OpenAI's content policy is violated. We use binary classification outputs for ToxiGen. existing models do not frequently produce false positives (high recall), they perform poorly in detecting toxic messages found in synchronous chats, with a detection rate of only around 55% at best (low precision). 12). Here, the labels are based on stage 3." }, { "figure_ref": [ "fig_3", "fig_4", "fig_5" ], "heading": "Norm Classification in", "publication_ref": [], "table_ref": [ "tab_10" ], "text": "To examine how context affects model performance, we experiment with four model variants with different input context: (1) Single user context is only the chat logs of the moderated user that took place up to two minutes before the moderation event;\n(2) Multi-user context (event) is N messages that directly precede the moderation event, regardless of whether it belongs to the moderated user; (3) Multi-user context (utterance) is N messages that directly precedes the single utterance, which is the moderated user's last message before the moderation event (i.e., chat 3 in Figure 3).; (4) Multi-user context (first) is the first N messages of the collected two-minute chat logs. The intuition for this selection is that the moderation event may have taken place much earlier than the moderation event. In all the Multi-user con- Experimental Results. Table 6 presents performance of norm classification for coarse-level norm categories. \"All\" refers to binary moderation detection, whether the message is moderated or not, and not the specific norm type. First, we can see that additional context improves the performance of \"All,\" but context does not consistently improve the performance of category-specific norm classifiers. For example, context reduces performance for categories where the issues are usually limited to the utterance itself (e.g., discrimination and privacy). In contrast, categories that rely on the relationships between utterances, such as HIB and incivility, show improved performance with context. Secondly, multi-user context performs quite well compared to the other contexts, indicating that a more global context that includes utterances from other users helps determine the toxicity of target utterances. Lastly, the strong performance of Multiuser context (first) suggests that earlier messages in the two-minute window are more important, meaning that the temporal distance between the moderation event and the actual offending utterance may be substantial in many cases. Thus, our results encourage future efforts on developing a more sophisticated approach for context selection.\nAvailability of Context. To compare human decisions with those of our models, we conduct experiments varying the context available to annotators and models. For example, we expect models trained with only single utterances to perform best when using stage 1 (utterance only) labels as ground-truth labels since humans are also not given any context at stage 1. Indeed, in Figure 4, using the stage 1 labels as the ground truth labels yields the best performance for a model trained without any context, while using the stage 2 (context) labels as the ground truth labels shows the best performance for a model trained with previous chat history. Since our experiments only handle text inputs, it is not surprising that using stage 3 (video) labels as ground-truth labels yields worse performance than using stage 2 labels. However, interestingly, the gap is not large, which indicates that gains from a multi-modal model that incorporates information from the video may be small and that single modality (text-only) models can be sufficient for the majority of moderation instances.\nContext Size. To understand how the amount of available context affects moderation performance, we compare the multi-user context configurations with various number of messages from one to 25. Figure 5 demonstrates that 15 to 20 messages prior to the moderated user's message helps with mod- eration performance the most (See utterance and first). However, increasing the number of messages that directly precede the moderation event actually lowers moderation performance (See event). It may be that most of this context serves as noise." }, { "figure_ref": [], "heading": "Distribution Shift in Norm Classification.", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "Existing tools often focus on identifying harmful speech, but NormVio (Park et al., 2021) also considers a wider range of norm-violating comments on Reddit, similar to NormVio-RT but in a different domain. We compare NormVio and NormVio-RT by evaluating the performance of a model finetuned on NormVio with NormVio-RT, and vice versa, to examine the impact of distribution shift between these domains. We choose six coarse-level categories that overlap between the two, as shown in Table 7. To measure with-context performance, we use the previous comment history for Reddit and multi-user context (utterance) for Twitch to simulate the most similar setup in both domains. Overall, experimental results show a pronounced distribution shift between Reddit (asynchronous) and Twitch (synchronous). Interestingly, models trained on Twitch are able to generalize better than models trained on Reddit despite having 6x less training data. Specifically, models trained using the out-of-domain Twitch+context data perform comparably on the Reddit test set to those trained using in-domain Reddit+context data." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b40", "b6", "b9", "b0", "b39", "b16", "b36", "b33", "b16", "b12", "b18", "b15", "b7", "b12", "b14", "b31", "b23", "b34", "b7", "b37", "b2", "b27", "b26", "b23", "b24", "b30", "b42", "b10", "b30", "b42", "b24", "b38", "b19", "b1", "b11", "b35", "b5" ], "table_ref": [], "text": "Toxicity Detection Most toxic language data consists of explicit hate speech consisting of hate lexicons (Waseem and Hovy, 2016;Davidson et al., 2017;Founta et al., 2018;Basile et al., 2019), group identifiers (Warner and Hirschberg, 2012;Kennedy et al., 2020), and hateful phrase (Silva et al., 2016) Twitter, Reddit). However, models trained on such data may have spurious correlations that result in many false positives (e.g., group identifiers) (Sap et al., 2019;Kennedy et al., 2020;Hartvigsen et al., 2022;Lee et al., 2022). To reduce such bias, implicit hate speech, toxic language use without any explicit hateful words or phrases, has been explored (Kennedy et al., 2018;ElSherief et al., 2021;Hartvigsen et al., 2022).\nBeyond Binary Toxicity Detection Treating toxicity detection as a binary task may not be enough to understand nuanced intents and people's reactions to toxic language use (Jurgens et al., 2019;Rossini, 2022). To holistically analyze toxicity, recent works take a more fine-grained and multidimensional approach: (1) Explainability explains why a particular chat is toxic with highlighted rationales (Mathew et al., 2021), free-text annotations of implied stereotype (Sap et al., 2020;ElSherief et al., 2021;Sridhar and Yang, 2022), or pre-defined violation norms (Chandrasekharan et al., 2018;Park et al., 2021). These explanations can be used not only to improve the performance of the toxicity detection model, but also to train models that generate explanations;\n(2) Target identification finds the targets of toxic speech, such as whether the target is an individual or a group, or the name of the group (e.g., race, religion, gender) (Ousidhoum et al., 2019;Mathew et al., 2021); (3) Context sensitivity determines toxicity by leveraging context, such as previous tweets (Menini et al., 2021), comments (Pavlopoulos et al., 2020;Xenos et al., 2021) or previous sentences and phrases within the comments (Gong et al., 2021). They show that context can alter labeling decisions by annotators, but that it does not largely impact model performance (Pavlopoulos et al., 2020;Xenos et al., 2021;Menini et al., 2021); (4) implication understands veiled toxicity that are implied in codewords and emojis (Taylor et al., 2017;Lees et al., 2021), and microaggressions that subtly expresses a prejudice attitude toward certain groups (Breitfeller et al., 2019;Han and Tsvetkov, 2020); and (5) Subjectivity measures annotation bias (Sap et al., 2022) and manage annotator subjectivity involved in labeling various types of toxicity, which arises from differences in social and cultural backgrounds (Davani et al., 2022). In this paper, we analyze the toxicity of synchronous conversations in terms of the aforementioned dimensions by identifying explanation of toxicity as a form of norm categories (explainability), finding targets of HIB words (target identification), leveraging context for both annotation and modeling (context sensitivity), asking annotators for implied knowledge statement (implication), and examining how human decisions align with machine decisions under different amounts of information (subjectivity)." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we analyzed messages flagged by human moderators on Twitch to understand the nature of norm violations in live-stream chats, a previously overlooked domain. We annotated 4,583 moderated chats from live streams with their norm violation category and contrasted them with those from asynchronous platforms. We shed light on the unique characteristics of live-stream chats and showed that models trained with existing data sets perform poorly in detecting toxic messages in our data, which motivates the development of specialized approaches for the synchronous setting. Our experiments established that selecting relevant context is an important feature for detecting norm violations in the synchronous domain. we hope our work will help develop tools that enable human moderators to efficiently moderate problematic comments in real-time synchronous settings and make the user-experience in these communities more pleasant.\nOur data, analysis, and findings have certain limitations. Our research is restricted to the English language and the Twitch platform, although the methods used to detect rule violations in live-stream chat and collect data can be adapted to other languages. Additionally, we recognize that our annotators were recruited from one country, which may result in a lack of diversity in perspectives and potential societal biases. Furthermore, we established a 2-minute context window for each moderated comment within the moderation event, but this may not capture all relevant context. Additionally, the small size of our humanannotated data may limit the generalizability of our findings to other situations. We recognize that our data set may not represent all instances of rule violations in real-world scenarios. This may be due to the biases of the moderators in choosing which users or comments to moderate or prioritizing certain types of violations over others. Also, the randomly sampled data we annotated may not be representative of the entire population and the imbalance of rule violation classes in our data set may not contain enough samples of rare categories to make definitive conclusions.\nOur experimental results indicate that models trained to detect norm violation using our data are far from perfect and may produce errors. When such models are used in real world applications, this can result in overlooking potentially problematic comments or incorrectly flagging nonproblematic comments. Therefore, we recommend using AI-based tools to assist human moderators rather than trying to fully replace them. Practitioners should also be aware that there may be users with malicious intent who try to bypass moderation by making their comments appear innocent. By employing moderation models, malicious users may be better able to craft toxic messages undetectable by existing models. As mentioned above, having a final step of human review or verification of the model output will be beneficial. Additionally, it may be necessary to continuously update the model and limit public access to it." }, { "figure_ref": [], "heading": "Ethical Considerations", "publication_ref": [], "table_ref": [], "text": "We took several steps to ensure that our data collection was ethical and legal. We set the hourly rate of compensation for workers at $16.15, which was well above the country's minimum wage at the time ($7.4). To ensure the safety and well-being of our workers, we maintained open communication channels, allowing them to voice any question, concerns, or feedback about the data annotation. This also helped to improve the quality of the collected data as we promptly addressed issues reported by workers throughout the process. We also give each annotation instance enough time so that we do not pressure annotators (40 days for 4,583 instances). We did not collect any personal information from annotators and we did not conduct any experiments with human subjects.\nWe confirm that we collected and used chats, also referred to as user content, in accordance with Twitch's Terms of Service and do not publicly release the data as it may be in violation of laws against unauthorized distribution of user content. However, we intend to make the platform-specific knowledge statements we compiled available to support future research on real-time chat in the livestreaming domain. During the collection process, we used the official Twitch API to monitor and retrieve chats.\nLastly, we want to emphasize that careful consideration must be given to user privacy when using moderation events to study norm violations. While users may be aware that their comments can be viewed by others in the chat room, researchers must also understand that users have the right to request not to be included in the data and establish a mechanism for users to contact researchers to have their data removed, and refrain from publicly releasing the data and instead share it on a need-to-know basis to control who has access to the data." }, { "figure_ref": [], "heading": "A Annotation Details", "publication_ref": [], "table_ref": [], "text": "We engage in active discussions with annotators and provide detailed feedback after multiple rounds of pilot study to ensure the data quality." }, { "figure_ref": [ "fig_6", "fig_7", "fig_8" ], "heading": "A.1 Annotation UI", "publication_ref": [], "table_ref": [], "text": "To make it easy for annotators to annotate with various types of contexts, we create an annotation tool. The annotation tool has three options and the user can select each option for each step annotation. Figure 6 shows the UI for step 1 which shows only the user's last chat (bad utterance) before the moderation event. Figure 7 shows chat logs up to two minutes ago based on the moderation events on multi user context panel. To make it easier for annotators to find previous chats from moderated users, we create single user context panel to only display chat logs of the moderated user in multi user context. Figure 8 shows both chat logs and video context. The video context shows 1-minute clipped video around the moderation event." }, { "figure_ref": [], "heading": "A.2 Annotation Consolidation", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To determine the final label for each moderated event, we aggregate the labels of annotators using a majority vote with heuristic rules. Each annotator a i identifies a list of violated rules L = {l 1 , l 2 , • • • , l k } for a moderated event e at each stage k = {1, 2, 3}. Here, we don't consider the target for HIB. We first evaluate the percentage agreement to measure inter-annotator agreement in each stage by exact match and partial match. The exact match determines whether all three annotators have exactly the same rules (L a1 = L a2 = L a3 ) and partial match determines whether there is at least one intersection rule between three annotators ((L a1 ∩ L a2 ∩ L a3 ) > 0). Table 4 shows the inter-annotator agreement percentage. We find that 98% of agreements from exact match are single label cases (i.e., 98% of exact matches have only one label) and many disagreements are resolved using the partial match method. 92% disagreements that persist even with the partial match method are the case where one or two annotators marking a comment as violating the \"Incivility\" rule while the others do not. Finally, to determine the gold label using the annotations from the three annotators, we apply a majority vote approach, choosing the rule types that were selected by at least two people. We discard approximately 3% of events that cannot be consolidated because all three annotators provided different labels. Target Agreement for HIB For cases consolidated as HIB with the majority vote, we further analyze the inter-annotator agreement of target labels among annotators who have marked them as HIB. In cases where the annotator was unable to identify the target, we asked them to mark the target as \"non-identifiable\". " }, { "figure_ref": [], "heading": "B Experimental Setup Details", "publication_ref": [ "b28", "b41" ], "table_ref": [], "text": "Each fine-tuned experiment uses 1 NVIDIA RTX A5000 GPU and uses FP16. We implement models using PyTorch (Paszke et al., 2019) and Huggingface Transformers (Wolf et al., 2019). We use the Adam optimizer with a maximum sequence length of 256 and a batch size of 4. We set 100 epochs and validate the performance every 100 steps. The stopping criteria is set to 10. For each data, we searched for the best learning rate for our model out of [1e-5, 2e-5, 5e-5, 1e-4, 3e-4]. Then, we report the average score of 3 runs by different random seeds (42,2023,5555). Each run takes 10 to 30 minutes. To determine the data distribution ratio between positives and negatives in the training data, we searched for the best distribution out of [1:1, 1:2, 1:5, Original] by random negative sampling. As shown in " }, { "figure_ref": [], "heading": "C Ablation Study C.1 Context Arrangement", "publication_ref": [ "b32", "b13", "b30" ], "table_ref": [ "tab_22" ], "text": "To understand how the context arrangement in the input affects the performance, we conduct experiments with multiple variants of context arrangement on moderation detection (See Table 15). First, the results show that randomly shuffled context consistently harm the performance. It indicates that context order matters, in contrast to the findings in dialog system study results (Sankar et al., 2019;He et al., 2021). Moreover, input as the sequential order of chats presented in the contextaware model (Pavlopoulos et al., 2020), or adding more contexts (e.g., broadcast category, rule text) degrade the performance. This indicates that the target text should always be placed first, and some contexts may not be helpful. " } ]
Toxic language, such as hate speech, can deter users from participating in online communities and enjoying popular platforms. Previous approaches to detecting toxic language and norm violations have been primarily concerned with conversations from online forums and social media, such as Reddit and Twitter. These approaches are less effective when applied to conversations on live-streaming platforms, such as Twitch and YouTube Live, as each comment is only visible for a limited time and lacks a thread structure that establishes its relationship with other comments. In this work, we share the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms. We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch. We articulate several facets of live-stream data that differ from other forums, and demonstrate that existing models perform poorly in this setting. By conducting a user study, we identify the informational context humans use in live-stream moderation, and train models leveraging context to identify norm violations. Our results show that appropriate contextual information can boost moderation performance by 35%.
Analyzing Norm Violations in Live-Stream Chat
[ { "figure_caption": "Figure 1 :1Figure 1: A Motivating Example. Chat in the synchronous domain has different characteristics than those in the asynchronous domain: (1) the temporal gap between chats and message length are much smaller; and(2) relationships between chats are less clearly defined. Such differences make chats in the synchronous domain more difficult to be moderated by existing approaches.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Data Construction. Norms are manually defined based on the chat rules of the top 200 streamers, and annotators annotate the violated norm of moderated event by three stages.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "NormVio-RT. To understand the model's ability to detect norm violations and how additional information can affect detection, we train binary classification models for each category with different types of context including conversation history, broadcast category, and rule description following Park et al. (2021).Experimental Setup. For each coarse-level category, we train a RoBERTa-base model with a binary cross entropy loss to determine whether the message is violating the certain norm or not.Following Park et al. (2021), we perform an 80-10-10 train/dev/test random split of moderated messages and add the same number of unmoderated messages in the same split. Next, for each binary classification, we consider the target category label as 1 and others as 0 and construct a balanced training data set. Appendix B (See Table", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Multi-user context is chat logs that occurred up to two minutes before the moderation event while single-user context is chat logs of moderated user in a multi-user context and single utterance is the moderated user's last message before the moderation event.texts, we use N = 5; (5) Broadcast category is the category that streamers have chosen for their broadcast. It usually is the title of a game or set to \"just chatting\"; and (6) Rule text is a representative rule example shown in Table 1. The rule text is only used for training examples because it is not possible to know which rule was violated for unseen examples and we use randomly selected rule text for unmoderated negative examples in training examples. All contexts are appended to the input text (single utterance) with a special token ([SEP]) added between the input text and the context. Chat logs for multi-user context and single-user context are placed sequentially with spaces between chats. Training details and data statistics are presented in Appendix B.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance (F1 Score) of moderation detection by different ground truth label for each context.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Performance (F1 Score) trend of moderation detection with varying context length.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Step 1. single utterance shows only the user's last chat before the moderation event.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Step 2. + chat context shows chat logs up to two minutes ago based on the moderation events (multi user context). single user context only shows the moderated user's messages within two minutes.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Step 3. + video context shows both chat logs and video context.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Twitch API & IRCMonitor Moderation Events(Ban, Timeout, Delete)Stage 1 Moderated MessageGather chat rulesStage 2Moderated Message + Chat ContextStage 3Moderated Message + Chat / Video Context+ Knowledge StatementMulti-class AnnotationNormRule ExamplesDefine NormsIncivilityBe NiceMap rulesDiscrimination No Racism……", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Live streaming norms. We map rules from top 200 Twitch streamers' channels to coarse and fine-grained level norms. Some rules specify targets (OOB: Others Outside of Broadcast, OIB: Others In Broadcast).", "figure_data": "SpamExcessive & Repetitive Advertisements--No walls of text. No self promotion unless authorized.Meta-Rules (Live streaming specific)Backseating & Tall order Mentioning other broadcasters Specific language only---Don't tell me what to do. Don't ask for mod. Don't talk down on other streamers. English only.Incivility (Miscellaneous)Incivility-Be nice, Be civil", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "A knowledge statement template.", "figure_data": "rule types it violates. To measure the importanceof context in determining types of norm violations,annotators were asked to provide labels for threedifferent scenarios with varying amounts of con-text: (1) Stage 1: annotate based on only the user'slast message before the moderation event (singleutterance); (2) Stage 2: annotate based on chatlogs up to two minutes prior to the moderation(+chat context); (3) Stage 3: annotate based onchat logs and their corresponding video clip of thesame duration (+video context). Since rules are notmutually exclusive (e.g., a message can violate bothdiscrimination & harassment), they are allowed tochoose multiple rule types if there are more thanone violated rule at each stage. All the annotationsare done with our internal annotation user interface(See Appendix A.1). To determine the final labelfor each moderated event, we aggregate the labelsof annotators using a majority vote with heuristicrules (See Appendix A.2).", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Data Statistics. # of rules indicates the number of streamers specifying the norm in their channels and # of violates indicates actual number of messages that violate corresponding norms.", "figure_data": "used on Twitch. Streamer-specific knowledge in-volves the streamer's personal background and pre-vious streaming sessions. As shown in Table 2, weprovide templates for each type that annotators caneasily fill out (More details in Appendix A.3).", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Inter", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Performance (Binary F1) of toxicity detection models on HIB and Discrimination data. Binary F1 refers to the results for the 'toxic' class.", "figure_data": "ModelPrecision RecallF1ToxiGen0.310.910.46Perspective API0.390.950.56OpenAI moderation0.110.940.20OpenAI content filter0.550.860.67", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "±0.01 0.03 ±0.02 0.18 ±0.05 0.05 ±0.00 0.67 ±0.02 0.58 ±0.04 0.28 ±0.06 Multi-user context (event) 0.75 ±0.04 0.03 ±0.00 0.44 ±0.02 0.01 ±0.00 0.14 ±0.12 0.05 ±0.01 0.66 ±0.00 0.60 ±0.03 0.17 ±0.00 Multi-user context (utterance) 0.91 ±0.01 0.04 ±0.00 0.61 ±0.05 0.00 ±0.00 0.09 ±0.03 0.10 ±0.04 0.66 ±0.01 0.65 ±0.04 0.24 ±0.12 Multi-user context (first) 0.95 ±0.00 0.05 ±0.01 0.61 ±0.01 0.01 ±0.00 0.11 ±0.02 0.08 ±0.03 0.70 ±0.03 0.62 ±0.02 0.45 ±0.03 Broadcast category 0.77 ±0.03 0.13 ±0.03 0.48 ±0.01 0.02 ±0.01 0.13 ±0.04 0.13 ±0.05 0.65 ±0.01 0.64 ±0.02 0.30 ±0.02 Rule text 0.75 ±0.01 0.05 ±0.08 0.11 ±0.17 0.00 ±0.00 0.12 ±0.06 0.29 ±0.18 0.58 ±0.04 0.38 ±0.02 0.13 ±0.03", "figure_data": "shows the results obtained from 2,102 ex-amples with 1,051 examples each for toxic and non-toxic messages. The results illustrate that while", "figure_id": "tab_9", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Performance on Norm Classification. macro F1 score for each coarse-level norm category. \"All\" refers to binary classification between moderated and unmoderated messages without considering norm category. Best models are bold and second best ones are underlined. Scores are average of 3 runs (3 random seeds).", "figure_data": "", "figure_id": "tab_10", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "±0.00 0.84 ±0.04 0.70 ±0.00 0.67 ±0.00 0.99 ±0.00 0.98 ±0.00 0.91 ±0.01 0.67 ±0.00 Incivility Incivility 0.67 ±0.00 0.16 ±0.09 0.28 ±0.04 0.09 ±0.03 0.74 ±0.00 0.56 ±0.14 0.24 ±0.12 0.09 ±0.03 Harassment HIB, Privacy 0.34 ±0.01 0.19 ±0.01 0.51 ±0.01 0.27 ±0.01 0.41 ±0.00 0.20 ±0.00 0.62 ±0.02 0.26 ±0.03 Spam Spam 0.47 ±0.02 0.22 ±0.02 0.63 ±0.01 0.28 ±0.01 0.53 ±0.01 0.27 ±0.01 0.66 ±0.01 0.28 ±0.01 Off Topic Off Topic 0.25 ±0.02 0.12 ±0.01 0.07 ±0.00 0.00 ±0.00 0.28 ±0.01 0.12 ±0.02 0.10 ±0.04 0.00 ±0.00 Hate Speech Discrimination 0.17 ±0.02 0.05 ±0.04 0.11 ±0.00 0.02 ±0.00 0.19 ±0.00 0.06 ±0.04 0.04 ±0.00 0.02 ±0.00 Content Inapt. Contents 0.30 ±0.06 0.08 ±0.03 0.12 ±0.01 0.00 ±0.00 0.37 ±0.01 0.05 ±0.02 0.09 ±0.03 0.00 ±0.00 Performance on distribution shift between norm violations in Reddit and Twitch. Macro F1 scores for each overlapped norm category. Scores are average of 3 runs (3 random seeds).", "figure_data": "CategoryWithout ContextWith ContextReddit (Normvio) Twitch (Normvio-RT)RT→RTR→TRT→RTR→TALLALL0.99", "figure_id": "tab_11", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Inter-annotator Percent Agreement for Targets of HIB. Presents the agreement percentage of HIB after majority vote. The numbers in parentheses indicate the absolute number of events.", "figure_data": "CategoryPlatform Streamer TotalDiscrimination224HIB7360133Privacy000Inappropriate Contents000Off Topic101Spam252550Meta-Rules172542Incivility6971140Total187183370", "figure_id": "tab_13", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Data statistics of knowledge statements.", "figure_data": "", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "League of Legends) game streamer, but he seems to quit and play gamble.", "figure_data": "shows that most", "figure_id": "tab_15", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Knowledge statement examples.", "figure_data": "", "figure_id": "tab_16", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Example cases of Incivility.", "figure_data": "Table 11 presents examples of chat moderation bystreamers where the underlying reason for modera-tion is not apparent. The cases highlight potentiallyuncomfortable situations that streamers may en-counter.ChatActionI'm 11 so I'm really sadBanMy mum said she wants to marry youBan", "figure_id": "tab_17", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "", "figure_data": ", we found that theevenly distribution (1:1) shows the most stable per-formance with the lowest standard deviation under", "figure_id": "tab_18", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "±0.01 0.20 ±0.03 0.41 ±0.01 0.00 ±0.00 0.04 ±0.00 0.06 ±0.02 0.06 ±0.11 0.00 ±0.00 0.05 ±0.01 0.09 ±0.09 0.12 ±0.20 0.00 ±0.00 HIB 0.52 ±0.01 0.52 ±0.03 0.46 ±0.01 0.14 ±0.25 0.61 ±0.05 0.64 ±0.02 0.00 ±0.00 0.20 ±0.35 0.61 ±0.01 0.48 ±0.26 0.40 ±0.35 0.20 ±0.36 Privacy 0.05 ±0.03 0.05 ±0.03 0.11 ±0.07 0.00 ±0.00 0.00 ±0.00 0.02 ±0.01 0.08 ±0.02 0.00 ±0.00 0.01 ±0.00 0.07 ±0.09 0.07 ±0.02 0.00 ±0.00 Inapt. Contents 0.12 ±0.01 0.58 ±0.07 0.33 ±0.06 0.42 ±0.36 0.09 ±0.03 0.36 ±0.10 0.62 ±0.03 0.36 ±0.33 0.11 ±0.02 0.28 ±0.01 0.64 ±0.03 0.38 ±0.34 Off Topic 0.07 ±0.00 0.19 ±0.12 0.28 ±0.03 0.00 ±0.00 0.10 ±0.04 0.13 ±0.06 0.18 ±0.16 0.00 ±0.00 0.08 ±0.03 0.15 ±0.08 0.37 ±0.07 0.00 ±0.00 Spam 0.63 ±0.01 0.68 ±0.00 0.67 ±0.04 0.64 ±0.04 0.66 ±0.01 0.72 ±0.01 0.71 ±0.03 0.69 ±0.04 0.70 ±0.03 0.74 ±0.01 0.73 ±0.03 0.75 ±0.04 Meta-Rules 0.65 ±0.01 0.71 ±0.02 0.48 ±0.42 0.69 ±0.04 0.65 ±0.04 0.74 ±0.01 0.00 ±0.00 0.00 ±0.00 0.62 ±0.02 0.68 ±0.04 0.00 ±0.00 0.24 ±0.42 Incivility 0.28 ±0.04 0.09 ±0.15 0.09 ±0.16 0.00 ±0.00 0.24 ±0.12 0.31 ±0.20 0.08 ±0.14 0.00 ±0.00 0.45 ±0.03 0.46 ±0.04 0.00 ±0.00 0.00 ±0.00 Experimental Results on different Training data distribution. macro F1 score for each coarse-level norm category, and scores are average of 3 runs (3 random seeds). Excluding models with an F1 score of 0, the model with the lowest standard deviation is bold for each category and its context setting.", "figure_data": "CategoryNo ContextMulti-user Context (utterance)Multi-user Context (first)1:11:21:5Original1:11:21:5Original1:11:21:5OriginalDiscrimination 0.11 CategoryTrainStage 1 DevelopmentTestTrainStage 2 DevelopmentTestTrainStage 3 DevelopmentTest101010101010101010Discrimination83831085698638181985798638484985710862HIB75275210676088784949949121745118 754996996124742124 748Privacy111118651871121218651871121218651871Inapt. Contents474738634868424238634868424238634868Off Topic707088588864787888589863777788589863Spam78978992774103 769935935114752114 758926926114752114 758Meta-Rules183183218452384976276291774947787627629277494778Incivility1,607 1,607 191675198 67471871888778917816786788478284788ALL3,542 3,499 432434434 438 3,577 3,464 435431440 432 3,577 3,464 453431440 432", "figure_id": "tab_19", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Train/Dev/Test Statistics of Normvio-RT.", "figure_data": "CategoryTrainDevelopmentTest101010Incivility1,7871,7872524,9622304,901Harassment5,0485,0486054,6095464,585Spam3,6493,6494184,7964174,714Off Topic3,009300932648883314800Hate Speech4,9304,9306074,6076674,464Content20,614 20,614 2,773 2,441 2,618 2,513", "figure_id": "tab_20", "figure_label": "13", "figure_type": "table" }, { "figure_caption": "Train/Dev/Test Statistics of Normvio (Park et al., 2021).", "figure_data": "ArrangmentContextF1 Scoretext-0.703single_user0.747text + [SEP] + contextmulti_user (event) multi_user (utterance)0.701 0.908multi_user (first)0.955single_user0.708text + [SEP] + RAN D(context)multi_user (event) multi_user (utterance)0.671 0.881multi_user (first)0.952single_user0.767context + [SEP] + textmulti_user (event)0.671(Pavlopoulos et al., 2020)multi_user (utterance)0.867multi_user (first)0.951single_user0.777text + [SEP] + context + [SEP] + broadcast cat.multi_user (event) multi_user (utterance)0.671 0.904multi_user (first)0.941single_user0.781text + [SEP] + context + [SEP] + rule textmulti_user (event) multi_user (utterance)0.671 0.895multi_user (first)0.953", "figure_id": "tab_21", "figure_label": "14", "figure_type": "table" }, { "figure_caption": "Performance on moderation detection by different context arrangement. macro F1 score for \"All\" in Table6. Best models for each context are bold.", "figure_data": "", "figure_id": "tab_22", "figure_label": "15", "figure_type": "table" } ]
Jihyung Moon; Dong-Ho Lee; Hyundong Cho; Woojeong Jin; Chan Young Park; Minwoo Kim; Jonathan May; Jay Pujara; Sungjoon Park
[ { "authors": "Cristina Valerio Basile; Elisabetta Bosco; Debora Fersini; Viviana Nozza; Francisco Patti; Manuel Rangel; Paolo Pardo; Manuela Rosso; Sanguinetti", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "SemEval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter", "year": "2019" }, { "authors": "Luke Breitfeller; Emily Ahn; David Jurgens; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Finding microaggressions in the wild: A case for locating elusive phenomena in social media posts", "year": "2019" }, { "authors": "Eshwar Chandrasekharan; Mattia Samory; Shagun Jhaver; Hunter Charvat; Amy Bruckman; Cliff Lampe; Jacob Eisenstein; Eric Gilbert", "journal": "Proceedings of the ACM on Human-Computer Interaction", "ref_id": "b2", "title": "The internet's hidden rules: An empirical study of reddit norm violations at micro, meso, and macro scales", "year": "2018" }, { "authors": "Jithin Cheriyan; Tony Roy Bastin; Stephen Savarimuthu; Cranefield", "journal": "Springer", "ref_id": "b3", "title": "Norm violation in online communities-a study of stack overflow comments", "year": "2017" }, { "authors": "Srayan Datta; Eytan Adar", "journal": "", "ref_id": "b4", "title": "Extracting intercommunity conflicts in reddit", "year": "2019" }, { "authors": "Aida Mostafazadeh Davani; Mark Díaz; Vinodkumar Prabhakaran", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b5", "title": "Dealing with Disagreements: Looking Beyond the Majority Vote in Subjective Annotations", "year": "2022" }, { "authors": "Thomas Davidson; Dana Warmsley; Michael Macy; Ingmar Weber", "journal": "", "ref_id": "b6", "title": "Automated hate speech detection and the problem of offensive language", "year": "2017" }, { "authors": "Mai Elsherief; Caleb Ziems; David Muchlinski; Vaishnavi Anupindi; Jordyn Seybolt; Munmun De Choudhury; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Latent hatred: A benchmark for understanding implicit hate speech", "year": "2021" }, { "authors": "Casey Fiesler; Joshua Mccann; Kyle Frye; Jed R Brubaker", "journal": "", "ref_id": "b8", "title": "Reddit rules! characterizing an ecosystem of governance", "year": "2018" }, { "authors": "Maria Antigoni; Constantinos Founta; Despoina Djouvas; Ilias Chatzakou; Jeremy Leontiadis; Gianluca Blackburn; Athena Stringhini; Michael Vakali; Nicolas Sirivianos; Kourtellis", "journal": "", "ref_id": "b9", "title": "Large scale crowdsourcing and characterization of twitter abusive behavior", "year": "2018" }, { "authors": "Hongyu Gong; Alberto Valido; Katherine M Ingram; Giulia Fanti; Suma Bhat; Dorothy L Espelage", "journal": "", "ref_id": "b10", "title": "Abusive language detection in heterogeneous contexts: Dataset collection and the role of supervised attention", "year": "2021" }, { "authors": "Xiaochuang Han; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Fortifying toxic speech detectors against veiled toxicity", "year": "2020" }, { "authors": "Thomas Hartvigsen; Saadia Gabriel; Hamid Palangi; Maarten Sap; Dipankar Ray; Ece Kamar", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "ToxiGen: A large-scale machine-generated dataset for adversarial and implicit hate speech detection", "year": "2022" }, { "authors": "Tianxing He; Jun Liu; Kyunghyun Cho; Myle Ott; Bing Liu; James Glass; Fuchun Peng", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "Analyzing the forgetting problem in pretrain-finetuning of opendomain dialogue response models", "year": "2021" }, { "authors": "David Jurgens; Libby Hemphill; Eshwar Chandrasekharan", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "A just and comprehensive strategy for using NLP to address online abuse", "year": "2019" }, { "authors": "Brendan Kennedy; Mohammad Atari; Aida Mostafazadeh Davani; Leigh Yeh; Ali Omrani; Yehsong Kim; Kris Coombs; Shreya Havaldar; Gwenyth Portillo-Wightman; Elaine Gonzalez", "journal": "", "ref_id": "b15", "title": "The gab hate corpus: A collection of 27k posts annotated for hate speech", "year": "2018" }, { "authors": "Brendan Kennedy; Xisen Jin; Aida Mostafazadeh Davani; Morteza Dehghani; Xiang Ren", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Contextualizing hate speech classifiers with post-hoc explanation", "year": "2020" }, { "authors": "Srijan Kumar; Jure William L Hamilton; Dan Leskovec; Jurafsky", "journal": "", "ref_id": "b17", "title": "Community interaction and conflict on the web", "year": "2018" }, { "authors": "Dong-Ho Lee; Akshen Kadakia; Brihi Joshi; Aaron Chan; Ziyi Liu; Kiran Narahari; Takashi Shibuya; Ryosuke Mitani; Toshiyuki Sekiya; Jay Pujara", "journal": "", "ref_id": "b18", "title": "Xmd: An end-to-end framework for interactive explanation-based debugging of nlp models", "year": "2022" }, { "authors": "Alyssa Lees; Daniel Borkan; Ian Kivlichan; Jorge Nario; Tesh Goyal", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Capturing covertly toxic speech via crowdsourcing", "year": "2021" }, { "authors": "Alyssa Lees; Yi Vinh Q Tran; Jeffrey Tay; Jai Sorensen; Donald Gupta; Lucy Metzler; Vasserman", "journal": "", "ref_id": "b20", "title": "A new generation of perspective api: Efficient multilingual character-level transformers", "year": "2022" }, { "authors": "Han Li; Robert E Kraut; Haiyi Zhu", "journal": "Journal of Computer-Mediated Communication", "ref_id": "b21", "title": "Technical features of asynchronous and synchronous community platforms and their effects on community cohesion: A comparative study of forum-based and chatbased online mental health communities", "year": "2021" }, { "authors": "Todor Markov; Chong Zhang; Sandhini Agarwal; Tyna Eloundou; Teddy Lee; Steven Adler; Angela Jiang; Lilian Weng", "journal": "", "ref_id": "b22", "title": "A holistic approach to undesired content detection in the real world", "year": "2022" }, { "authors": "Binny Mathew; Punyajoy Saha; Seid Muhie Yimam; Chris Biemann; Pawan Goyal; Animesh Mukherjee", "journal": "", "ref_id": "b23", "title": "Hatexplain: A benchmark dataset for explainable hate speech detection", "year": "2021" }, { "authors": "Stefano Menini; Alessio Palmero Aprosio; Sara Tonelli", "journal": "", "ref_id": "b24", "title": "Abuse is contextual, what about nlp? the role of context in abusive language annotation and detection", "year": "2021" }, { "authors": "Courtney Miller; Sophie Cohen; Daniel Klug; Bogdan Vasilescu; Christian Kästner", "journal": "", "ref_id": "b25", "title": "did you miss my comment or what?\" understanding toxicity in open source discussions", "year": "2022" }, { "authors": "Nedjma Ousidhoum; Zizheng Lin; Hongming Zhang; Yangqiu Song; Dit-Yan Yeung", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Multilingual and multi-aspect hate speech analysis", "year": "2019" }, { "authors": "Chan Young; Park ; Julia Mendelsohn; Karthik Radhakrishnan; Kinjal Jain; Tushar Kanakagiri; David Jurgens; Yulia Tsvetkov", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Detecting community sensitive norm violations in online conversations", "year": "2021" }, { "authors": "Adam Paszke; Sam Gross; Francisco Massa; Adam Lerer; James Bradbury; Gregory Chanan; Trevor Killeen; Zeming Lin; Natalia Gimelshein; Luca Antiga; Alban Desmaison; Andreas Kopf; Edward Yang; Zachary Devito; Martin Raison; Alykhan Tejani; Sasank Chilamkurthy; Benoit Steiner; Lu Fang; Junjie Bai; Soumith Chintala", "journal": "", "ref_id": "b28", "title": "Pytorch: An imperative style, high-performance deep learning library", "year": "2019" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b29", "title": "", "year": "" }, { "authors": "John Pavlopoulos; Jeffrey Sorensen; Lucas Dixon; Nithum Thain; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Toxicity detection: Does context really matter?", "year": "2020" }, { "authors": "Patricia Rossini", "journal": "Communication Research", "ref_id": "b31", "title": "Beyond incivility: Understanding patterns of uncivil and intolerant discourse in online political talk", "year": "2022" }, { "authors": "Chinnadhurai Sankar; Sandeep Subramanian; Chris Pal; Sarath Chandar; Yoshua Bengio", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "Do neural dialog systems use the conversation history effectively? an empirical study", "year": "2019" }, { "authors": "Maarten Sap; Dallas Card; Saadia Gabriel; Yejin Choi; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "The risk of racial bias in hate speech detection", "year": "2019" }, { "authors": "Maarten Sap; Saadia Gabriel; Lianhui Qin; Dan Jurafsky; Noah A Smith; Yejin Choi", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "Social bias frames: Reasoning about social and power implications of language", "year": "2020" }, { "authors": "Maarten Sap; Swabha Swayamdipta; Laura Vianna; Xuhui Zhou; Yejin Choi; Noah A Smith", "journal": "Association for Computational Linguistics", "ref_id": "b35", "title": "Annotators with attitudes: How annotator beliefs and identities bias toxic language detection", "year": "2022" }, { "authors": "Leandro Silva; Mainack Mondal; Denzil Correa; Fabrício Benevenuto; Ingmar Weber", "journal": "", "ref_id": "b36", "title": "Analyzing the targets of hate in online social media", "year": "2016" }, { "authors": "Rohit Sridhar; Diyi Yang", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Explaining toxic text via knowledge enhanced text generation", "year": "2022" }, { "authors": "Jherez Taylor; Melvyn Peignon; Yi-Shin Chen", "journal": "", "ref_id": "b38", "title": "Surfacing contextual hate speech words within social media", "year": "2017" }, { "authors": "William Warner; Julia Hirschberg", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "Detecting hate speech on the world wide web", "year": "2012" }, { "authors": "Zeerak Waseem; Dirk Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b40", "title": "Hateful symbols or hateful people? predictive features for hate speech detection on Twitter", "year": "2016" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz", "journal": "", "ref_id": "b41", "title": "Huggingface's transformers: State-ofthe-art natural language processing", "year": "2019" }, { "authors": "Alexandros Xenos; John Pavlopoulos; Ion Androutsopoulos", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Context sensitivity estimation in toxicity detection", "year": "2021" } ]
[]
2023-05-18
[ { "figure_ref": [], "heading": "", "publication_ref": [ "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22" ], "table_ref": [], "text": "transmit and analyze functional information between distant regions of the brain for improving the brain's information processing speed and accuracy [2]. Recent studies have shown that the abnormal topological changes of BEC can reflect the underlying pathology of brain disease [3], [4]. These changes are probably accompanied by alterations in the brain's microstructure [5], [6], such as those associated with Alzheimer's disease (AD) and its early-stage mild cognitive impairment (MCI). Investigation of the BEC is able to better understand the underlying mechanisms associated with neurodegenerative diseases and develop potential treatments or new drugs for rehabilitation [7], [8]. Therefore, constructing BEC from functional magnetic resonance imaging (fMRI) becomes a very promising way to analyze cognitive disease and identify possible biomarkers for MCI diagnosis.\nConstructing BEC involves learning a mapping network to predict directional connectivities from one neural unit to another by analyzing their functional signals. Since the fMRI has the advantages of being noninvasive and having high temporal resolution, it has drawn great attention in exploring the complex connectivity features for cognitive disease diagnosis [9], [10], [11]. The brain functional connectivity (BFC) gives the temporal correlation between any pair of neural units, while the BEC is an asymmetric matrix representing the directional information of neural transmission. The directed graph can be used to analyze BEC, including a set of vertices (named regions-of-interest, or ROIs) and a set of directed edges (effective connections). Previous researchers focused on BEC learning by using traditional learning methods, including the dynamic causal models (DCM) [12], the Bayesian network (BN) method [13], the correlation analysis method [14], and so on. These methods utilize shallow network structure or prior knowledge and are unable to extract complex connectivity features from fMRI, which may bring about inaccuracies in BECs and deduce low performance in disease analysis.\nRecently, deep learning methods have been widely applied in the exploration of BEC learning [15], [16]. It not only achieves quite a good performance on image recognition tasks in Euclidean space but also shows excellent results on brain network generation in non-Euclidean space. The primary characteristic is the strong ability to perform high-level and complex feature extraction. Increasingly new methods based on deep learning have been explored to construct BEC from functional MRI data [17], [18]. However, the current methods heavily rely on the software toolkit to preprocess the fMRI before extracting empirical time-series data. The main drawback is that the manual parameter settings of these preprocessing procedures may result in large errors by different researchers.\nAs the most popular and powerful generative model, the generative adversarial network (GAN) [19] implicitly characterizes the distribution of synthetic samples through a twoplayer adversarial game. It can generate high-quality samples with efficient computation while producing homogeneous samples because of training instability and mode collapse. An alternative way to solve this issue is the emergence of diffusion denoising probabilistic models (DDPM) [20] that have received great attention in generating tasks [21], [22], [23]. The main advantage is the great generating ability for high-quality and diverse samples, and the disadvantage lies in the expensive computation. Inspired by the above observations, we combine the advantages of the GAN and the DDPM to improve generation performance. The novel multi-resolution spatiotemporal enhanced transformer denoising (MSETD) model is proposed to learn brain effective connectivity from the 4D fMRI for mild cognitive impairment (MCI) analysis. Specifically, the 4D fMRI is first transformed to ROI time-series features (also called a rough sample) by introducing an anatomical segmentation mask, and the rough sample is then treated as conditioning to gradually denoise the Gaussian sample and generate clean samples. Next, by splitting the generation processes into a few steps, adversarial learning is more stable and gets rid of mode collapse. Besides, the designed generator can capture hierarchical spatial-temporal features and enhance clean sample generation. Finally, the estimated BEC from the generators reflects a more sophisticated causal relationship between brain regions and captures the prominent MCI-related features. To the best of our knowledge, the proposed MSETD is the first work to translate 4D fMRI into BEC in an endto-end manner using functional diffusive GANs. The main contributions of this work are summarized as follows:\n• The proposed functional MSETD model is the first work that generates BEC from 4D fMRI in an end-to-end manner. It separates the diffusion denoising process into several successive steps and leverages the generative adversarial strategy to gradually translate the noise and conditioning fMRI into effective connectivity, which makes the generation process high-quality, diverse, and efficient. • The hierarchical hybrid enhanced transformer generator is designed to denoise the fMRI by first paying attention to global spatial connectivity features and then focusing on local temporal characteristics. The multi-scale spatiotemporal features are significantly enhanced, and the denoise quality is greatly improved. • The multi-resolution diffusive transformer discriminator is devised to capture the temporal patterns of the denoised fMRI at different scales, which ensures the denoising process is stable and the generation result is diverse.\nThe rest of this paper is structured as follows: The related works are introduced in Section II. The overall architecture of the proposed MSETD model is presented in Section III. The experimental results, including generation evaluation and classification performance, are described in Section IV. The reliability of our results is discussed in Section V, and Sec-tions VI draw the main conclusions." }, { "figure_ref": [], "heading": "II. RELATED WORK A. BEC learning methods", "publication_ref": [ "b11", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30", "b31" ], "table_ref": [], "text": "Functional connectivity network analysis is usually used to identify abnormal connectivity patterns that are associated with different cognitive functions, such as memory, attention, and emotion. The BEC belongs to the functional connectivity network and can bridge causal connections between brain regions. Many studies have focused on exploring the BEC for better diagnostic performance and good interpretability. There are two categories of methods for learning BEC from functional data: shallow learning methods and deep learning methods.\nThe frequently used shallow method is the dynamic causal model (DCM) [12]. For example, Park et al. [24] employed the parametric empirical Bayes method to model the directed effects of sliding windows. And the Granger causality (GC) [25], [26] is the most commonly used shallow method. For example, DSouza et al. [27] utilize the multiple regression algorithm to process historical information from functional time series for causal interaction prediction. These methods cannot extract deep and complex connectivity features from fMRI.\nTo explore the deep features of fMRI, deep learning methods show great success in causal modeling between brain regions. The work in [28] employed nonlinear causal relationship estimation with an artificial neural network to predict causal relations between brain regions by analyzing both linear and nonlinear components of EEG data. Also, Abbasvandi et al. [29] combined the recurrent neural network and Granger causality to estimate effective connectivity from EEG data. They greatly improved the prediction accuracy of simulation data and the epileptic seizures dataset. Considering the great ability of GANs to characterize data distribution, Liu et al. [30] designed a GAN-based network to infer directed connections from fMRI data. To capture temporal features, they [31] employed recurrent generative adversarial networks for effective connectivity learning. Presently, Zou et al. [32] introduced the graph convolutional network (GCN) to mine both temporal and spatial topological relationships among distant brain regions for learning BECs. Although the above deep learning methods achieved promising prediction performance in BEC estimation, they heavily rely on the software toolkit to preprocess the fMRI for extracting empirical time-series data. That may result in large errors due to different manual parameter settings during preprocessing procedures." }, { "figure_ref": [], "heading": "B. Generative learning models", "publication_ref": [ "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39" ], "table_ref": [], "text": "Generative adversarial networks (GANs) have ruled generative approaches since they were first proposed by Goodfellow. The primary advantage is implicitly modeling the distribution of the generated data through a two-player game. Many variants of GANs have been proposed and applied to many generation tasks [33], [34], [35], such as generating super-resolution, synthesizing cross-modal data, segmenting images, and so on. To satisfy specific generating tasks, conditional GAN [36] and related variants also achieve quite good performance efficiently [37]. The problem of instability and mode collapse in training has not been completely addressed yet. This may lead to homogeneously generated data and hinder its wider applications. Recently, denoising diffusion probabilistic models (DDPM) have attracted much attention in image generation [38], [39], [40] because of their ability to generate high-quality and diverse results. DDPM aims to denoise the noisy Gaussian data gradually and recover the clean data. However, the denoising process requires a Gaussian distribution assumption with only a small denoising step, which leads to slow reverse diffusion in about thousands of steps to approach clean data. Based on the above observations, we try to combine the advantages of GAN and DDPM in generation, such as efficiency, high quality, and diversity. We propose the novel MSETD model to precisely learn BEC from 4D fMRI for MCI analysis." }, { "figure_ref": [], "heading": "III. METHOD", "publication_ref": [], "table_ref": [], "text": "The proposed MSETD model jointly leverages denoising diffusion probabilistic models (DDPM) and generative adversarial networks (GANs) to translate a 4-dimensional fMRI into a one-dimensional time series and construct brain effective connectivities (BEC) for MCI analysis. To be specific, the fMRI is mapped into a one-dimensional time series (called the rough sample X) of N ROIs by incorporating the anatomical atlas mask. The rough sample guides the diffusion model to translate the gaussian noise into a clean sample using conditional diffusion models. We first introduce conventional diffusion models and then describe functional diffusive GANs with transformer-based generators and discriminators. Finally, hybrid objectives are devised to ensure effective diffusion and generate high-quality samples efficiently." }, { "figure_ref": [], "heading": "A. Conventional Denoising Diffusion Model", "publication_ref": [], "table_ref": [], "text": "The basic principle of the diffusion model is to learn the information attenuation caused by noise and then use the learned patterns to generate denoised data. It is usually divided into the forward process and the inverse process. In the diffusion process, Gaussian noise is constantly added to the input sample F 0 for sufficiently large T (hundreds or thousands) steps. Under the rules of the Markov chain, the probability distribution of a noisy sample F T will approach the stationary distribution (such as the Gaussian distribution) at the T -th step. Here is the diffusion formula:\nq (F 1:T | F 0 ) = T t=1 q (F t | F t-1 )(1)\nq (F t | F t-1 ) = N F t ; 1 -β t F t-1 , β t I(2)\nwhere t = {1, 2, ..., T }. β t is the noise variance that is defined before the model's training. N is the assumed Gaussian distribution, and I is an identity matrix. The reverse process also follows the Markov chain to translate the noisy sample F T to a cleaned sample F 0 . The assumption of a large step T and a small β t can model the denoising probability in a Gaussian distribution:\np θ (F 0:T ) = p (F T ) T t=1 p θ (F t-1 | F t )(3)\np θ (F t-1 | F t ) = N F t-1 ; µ θ (F t , t) , σ 2 t I(4)\nwhere µ θ (F t , t) and σ 2 t are the mean and variance of the denoised sample F t-1 . θ indicates the network's parameters.\nWe use the deep learning model p θ (F t-1 |F t ) to approximate the true distribution q(F t-1 |F t ), and get the reparameterization of µ and σ 2 t in the following form:\nµ θ (F t , t) = 1 √ α t F t - β t √ 1 -ᾱt θ (F t , t)(5)\nσ 2 t = (1 -ᾱt-1 ) (1 -ᾱt ) β t(6)\nhere,\nα t = 1 -β t , ᾱt = t i=1 α i .\nBy applying a variational evidence lower bound (ELBO) constraint, the added noise θ (F t , t)) can be calculated by minimizing the MSE loss:\nE t,F0, -θ √ ᾱt F 0 + √ 1 -ᾱt , t 2(7)\nwhere, ∼ N (0, I). After the model's optimization, new samples can be derived from Gaussian noise F T ∼ N (0, I) by gradually reversing diffusing T steps based on Eq.( 3) and Eq.( 4)." }, { "figure_ref": [ "fig_0", "fig_1", "fig_3" ], "heading": "B. Functional diffusive GANs", "publication_ref": [], "table_ref": [], "text": "Conventional DDPM can generate high-quality samples but suffers from low efficiency in sampling because of the thousands of denoising steps. While the GAN can make up for this shortcoming by having fast-generating ability, Thus, we propose novel functional diffusive GANs by combining both of them for efficient and high-fidelity sampling generation. Apart from this, there are two other differences compared with the conventional DDPM: (a) A rough sample is treated as conditioning to guide the denoising process. Since our goal is to denoise the rough sample X into a clean subjectspecific sample F 0 , the unconditional diffusion process likely generates diverse samples that cannot reflect subject-specific disease information; (2) the step size is reduced by a factor of s (s >> 1), which speeds the generation process and keeps the sample in high quality.\n1) Functional diffusion process: The proposed diffusive GANs divide the denoising process into a few steps by using generative adversarial networks, where each step is conditioned by a rough sample to make the process easy to learn. It should be stressed that our model advantages DDPM in efficiency and high-fidelity denoising, diminishing the training instability and mode collapse encountered at GANs optimization. As shown in Fig. 1, it consists of two parts: the forward process from the empirical sample to the Gaussian noisy sample and the reverse process from the Gaussian noisy sample to the empirical sample conditioned on the rough sample.\nIn the forward direction, the empirical sample F 0 is transformed to the normal Gaussian noisy sample F T by gradually Fig. 1. The architecture of the proposed denoising model. In the forward process, the empirical sample F 0 is transformed to the normal Gaussian noisy sample F T by gradually adding noise. In the reverse process, the rough sample X is considered as conditioning input to guide the generator G θ in synthesizing the denoised sample F 0 from F T ; meanwhile, a discriminator D θ distinguishes the actual or synthesized sample for the denoising optimization. adding noise. The computation formula is the same as Eq.( 2). In the reverse direction, we should consider the condition X in the denoising procedure. First, the rough sample X is considered as conditioning input to guide the generator G θ predict the initial sample F0 from F T , then the posterior sampling is utilized to synthesize the denoised sample F t-s ; meanwhile, the multi-resolution diffusive transformer discriminator D θ distinguishes the actual (F t-s ) or synthetic (F t-s ) for the denoising process. Specifically, at the t step, we aim to predicte\nF t-s from F t . Firstly, a generator G θ (F t , X, t) is utilized to predict the initial sample F[t/s] 0 , then F t-s is sampled using the posterior distribution q(F t-s |F t , F[t/s] 0\n) by giving F t and F[t/s] 0 . Finally, after T /s steps, the ultimate denoised sample F 0 (equal to F[1] 0 ) sampled from the estimated distribution p θ (F 0 | F s , X). The denoising process can be expressed as follows: as:\np θ (F t-s | F t , X) := q F t-s | F t , F[t/s] 0 = G [t/s] θ (F t , X, t)(8)\nq F t-s | F t , F[t/s] 0 = q F t | F t-s , F[t/s] 0 q F t-s | F[t/s] 0 q F t | F[t/s] 0 (9) The sampling probability from p θ (F t-s | F t , X) is defined\nF t-s = √ α t (1 -ᾱt-s ) 1 -ᾱt F t + √ ᾱt-s β t 1 -ᾱt G [t/s] θ + β t (10)\n2) Hierarchical hybrid enhanced transformer generator: The aim of the hierarchical hybrid enhanced transformer generator (G θ ) is to remove the noise from noisy samples to obtain clean samples by conditioning guidance. Specifically, at t-the step, the input is the noisy sample F t and the rough sample X, and the output is the initial denoised sample F0 . All of them share the same size, N ×d. The generator consists of four modules, including the multi-channel adaptor (MCA), the spatial-enhanced temporal-enhanced (SeTe) blocks, the temporal down-and up-sampling (TDS and TUS), and the brain effective connectivity (BEC) estimator.\nThe MCA adaptively fuses the noisy sample F t and the rough sample X, Different from the traditional way of concatenating the two samples, we designed a cross-channel attention mechanism when fusing them. As shown in Fig. 2, first, the sample F t is passed through the feed-forward network (FFN) and self-attention network; then, it is projected on the X to compute weighting scores for itself. We denote the input of cross-channel attention as E t and X, the output of this fusion operation can be expressed as follows: The SeTe blocks are designed to extract both spatial and temporal features. The conventional transformed-based method focuses on the spatial correlation between pairs of ROIs while ignoring the temporal continuity. The SeTe benefits from multi-head attention (MSA), which enhances long-term dependence both spatially and temporally. As shown in Fig. 3, it is comprised of a spatial multi-head attention (SMA) and a temporal multi-head attention (TMA). The difference between these two attention networks is that the former is operated in the spatial direction, and the latter is operated in the temporal direction. The input is a tensor with the size C × N × d/2, and the output F SeT e t can be defined by: \nE t = softmax E t X T √ d E t(\nF SeT e t = SMA\nF SeT e t = {F SeT e(1) t , F SeT e(2) t , • • • , F SeT e(d) t } (14\n)\nFSeT e(i)\nt = Att F SeT e(i) t W Q h , F SeT e(i) t W K h , F SeT e(i) t W V h = Att(Q i h , K i h , V i h ) = softmax Q i h (K i h ) T C/H V i h (15) F SeT e t = { FSeT e(1) t , FSeT e(2) t , • • • , FSeT e(d) t }(16)\nwhere, F\nSeT e(i) t ∈ R N ×C , i = {1, 2, ..., d}, h = {1, 2, ..., H}. Att means the attention operation,\nW Q h , W K h , W V h ∈ R C×(C/H) project each part of FSeT e t\nonto the matrices of queries (Q), keys (K), and values (V) for the h-th head, respectively. The outputs of all attention mechanisms are concatenated to get the final result of this module in this layer. The TMA also has a similar structure and definitions as described above.\nThe TUS is the reverse of the TDS, where the dimension is doubled and the channel is halved. Taking the last TUS as an example, with the concatenated sequence F SeT e t ∈ R C×N ×d , we apply several 1D transposed convolutions to reduce the channel and increase the dimension. At last, the output, F T U S t has the same size as F M CA t . The BEC aims to generate a denoised sample F0 and estimate the causal direction between pairs of ROIs. As shown in Fig. 4, the inputs are the noisy samples: F M CA t and F T U S t . After the element adding operation, we can obtain the denoised sample at t/s step:\nF0 = F M CA t + F T U S t (17)\nhere, we separate the F0 into multiple rows, where each row represents the corresponding ROI's feature. To mine the causal relationship among ROIs, we introduce the structural equation model (SEM) to predict the direction from one region to another. The causal parameters of SEM can be estimated by:\nz i = N j=1 A ji z j + n i(18)\nwhere i = {1, 2, ..., N } A ji indicates the causal effect on ith brain region from j-th brain region. n i is the independent random noise. The matrix A ∈ R N ×N is asymmetric, representing the causal parameters of SEM. The diagonal elements of A are set to 0 because it is meaningless to consider the effective connectivity of the brain region itself. Therefore, the BEC matrix A can represent the effective connectivity between any pair of brain regions.\n3) Multi-resolution diffusive Discrimination: To constrain the generated denoised sample F t-s to be consistent with the real sample F t-s in distribution, we downsample the input sample into four different resolutions and devise four corresponding sub-discriminators to distinguish the synthetic and actual samples. Each discriminator has the conventional transformer structure: layer normalization, self-attention, a feed-forward layer, and the classification head. The output of each discriminator is a scalar in the range of 0 ∼ 1. Averaging all the discriminator outputs is the final output of a multiresolution diffusive transformer discriminator." }, { "figure_ref": [], "heading": "C. Hybrid Loss Functions", "publication_ref": [], "table_ref": [], "text": "To guarantee the model generates high-quality denoised samples, adversarial loss is introduced to optimize the generator's parameters. There are five kinds of loss functions: the spatial-temporal enhanced generative loss (L SEG ), the multi-resolution diffusive discriminative loss (L M DD ), the reconstruction loss (L REC ), the sparse connectivity penalty loss (L SCP ), and the classification loss (L CLS ). We treat the generator as a conditional GAN; when inputting a noisy sample and conditioning sample, the generator G θ (F t , X, t) outputs a synthetic sample F t-s . And the discriminator D θ discriminates whether the sample is from the generator or the forward diffusion. Here are the non-saturating generative and discriminative losses (adversarial diffusive losses):\nL SEG = E t,q(Ft|F0,X),p θ (Ft-s|Ft,X) -log D θ F t-s (19) L M DD =E t,q(Ft|F0,X) E q(Ft-s|Ft,X) [-log (D θ (F t-s ))] + E p θ (Ft-s|Ft,X) -log 1 -D θ F t-s }(20\n) after T steps, the final denoised sample F 0 should recover the clean sample F 0 for every element. The reconstruction loss is defined by:\nL REC = E p θ (F0|Fs,X) [||F 0 -F 0 || 1 ](21)\nmoreover, the sparse effectivity connection between brain regions can be interpreted in brain functional activities. We introduce a penalty on the obtained A for sparse constraints. Besides, the obtained A is sent to the classifier C to predict the disease label y. The losses are expressed by:\nL SCP = γ || N i=1,j=1 A i,j ||(22)\nL CLS = E p θ (A|Fs,X),C(y|A) (-log(y|A))" }, { "figure_ref": [], "heading": "IV. EXPERIMENTS A. Datasets", "publication_ref": [ "b40", "b42", "b41" ], "table_ref": [], "text": "In our study, we tested our model on the public Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset1 for classification evaluation. There are 210 subjects scanned with functional magnetic resonance imaging (fMRI), including 61 subjects with late mild cognitive impairment (LMCI), 68 subjects with early mild cognitive impairment (EMCI), and 81 normal controls (NCs). The average ages of LMCI, EMCI, and NC range from 74 to 76. The sex ratio between males and females is nearly balanced. The fMRI is scanned by Siemens with the following scanning parameters: TR = 3.0 s, field strength = 3.0 Tesla, turning angle =80.0 degrees, and the EPI sequence is 197 volumes. There are two ways to preprocess the fMRI. Both of them require the anatomical automatic labeling (AAL90) atlas [41] for ROI-based time series computing. One is the routine precedence using the GRETNA software to obtain the functional time series, which is treated as the ground truth F 0 in the proposed model. The detailed computing steps using GRETNA [43] are described in the work [42]. Another one adopted in this paper is using the standard atlas file aal.nii to split each volume of fMRI into 90 brain regions and average all the voxels of each brain region. We discard the first 10 volumes and obtain a matrix with a size of 90 × 187, which is the rough sample X input in the model." }, { "figure_ref": [], "heading": "B. Training Settings and Evaluation Metrics", "publication_ref": [ "b43" ], "table_ref": [], "text": "In the training process, the input of our model is the 4D functional MRI, and the output is the ROI-based time series and the BECs. We set the parameters as follows: T = 1000, s = 250. C = 2. N = 90, d = 187, L i = 2 (i = 0, 1, 2, 3, 4), γ = 1.9. The Pytorch framework is used to optimize the model's weightings under an Ubuntu 18.04 system. The batch size is 16, and the total epochs are 600. The learning rates for the generator and discriminator are 0.001 and 0.0002, respectively.\nWe adopt 5-fold cross-validation in our model's validation. Specifically, the subjects in each category are randomly divided into five parts. The model is trained on the four parts of them and tested on the rest. The final accuracy is computed by averaging the results from the five parts. After obtaining the BECs, we conduct three binary classification tasks (i.e., NC vs. EMCI, NC vs. LMCI, and EMCI vs. LMCI) for the model's performance evaluation. Two commonly used classifiers are adopted to evaluate the classification performance, including the support vector machine (SVM) and the BrainNetCNN [44]. The evaluation metrics are the area under the receiver operating characteristic curve (AUC), the prediction accuracy (ACC), the positive sensitivity (SEN), and the negative specificity (SPE)." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "C. Prediction Results", "publication_ref": [ "b24", "b29", "b31", "b41", "b44" ], "table_ref": [ "tab_0" ], "text": "The proposed model's output is the BEC, which is an asymmetric matrix. As shown in Fig. 5, the left matrix is an example calculated by the GRETNA with symmetric patterns. In order to show the superior performance of the generated BECs, we introduce four other methods to compare the classification performance. (1) the empirical method; (2) the Granger causal connectivity analysis (GCCA) [25]; (3) the effective connectivity based on generative adversarial networks (EC-GAN) [30]; (4) the spatiotemporal graph convolutional models (STGCM) [32]. The classification results are shown in Table I. Compared with the empirical method, the other four methods achieve better classification performance in both classifiers by generating BECs. This may indicate that effective connectivity contains the causal information that is correlated with MCI. Among the four BEC-based methods, our model achieves the best value for ACC, SEN, SPE, and AUC with 86.58%, 85.29%, 87.65%, and 87.58% for NC vs. EMCI, respectively. The best values of ACC, SEN, SPE, and AUC for the NC vs. LMCI task are 94.37%, 95.08%, 93.83%, and 95.95%. The values of 92.25%, 91.80%, 92.65%, and 93.44% are obtained using our model in the EMC vs. LMCI prediction task. Since different brain regions have diverse impacts on the cause of MCI, in the binary classification tasks, we investigate the influence of each ROI on the accuracy prediction. It is meaningful to find MCI-related ROIs for early MCI detection and treatment. Here, we adopt the shielding method [42] to compute the important score for each ROI. The important score is calculated as one minus the mean prediction results. After sorting the important scores, we can obtain the top 10 percent ROIs for each scenario and display them using the BrainNet viewer [45]. As shown in Fig. 6, the nine important brain regions of EMCI vs. NC are: inferior frontal gyrus orbital part, superior frontal gyrus medial orbital, amygdala, superior occipital gyrus, middle occipital gyrus, inferior occipital gyrus, inferior parietal supramarginal and angular gyri, and angular gyrus. From EMCI to LMCI, the nine identified ROIs are the inferior frontal gyrus orbital part, olfactory cortex, superior frontal gyrus medial orbital, amygdala, posterior cingulate gyrus, angular gyrus, and Heschl gyrus. The important ROIs between NC and LMCI are as follows: superior frontal gyrus orbital part, middle frontal gyrus orbital part, olfactory cortex, posterior cingulate gyrus, parahippocampal gyrus, amygdala, supramarginal gyrus, and Heschl gyrus." }, { "figure_ref": [ "fig_7", "fig_8" ], "heading": "D. Effective Connectivity Analysis", "publication_ref": [], "table_ref": [], "text": "In addition to the fact that brain regions play an important role in disease diagnosis, the causal relationship between them can uncover the underlying pathological mechanism of MCI.\nIn this section, we analyze the generated BECs and predict the abnormal directional connections for further study. To make the results statistically significant, we average all the BECs for each category and filter out the values that fall below the threshold of 0.1. The averaged BEC at three different stages is shown in Fig. 7 by modifying the circularGraph packages 2 . We compute the altered effective connectivity by subtracting the averaged BEC matrix at the early stage from the later stage. As a result, a total of six altered effective connectivity matrices are obtained, consisting of the enhanced and diminished connectivities from NC to EMCI, from NC to LMCI, and from EMCI to LMCI. The altered effective connectivities are shown in Fig. 8. The top row represents the enhanced connections, and the bottom row represents the diminished connections. Each matrix is asymmetric, and the element values range from -0.35 ∼ 0.35. The positive value means the directional connection strength is enhanced, while the negative value means the directional connection strength is diminished.\nThese altered effective connections probably contribute to the cause of MCI. To find the important effective connections during the MCI progression, we sort the altered effective " }, { "figure_ref": [ "fig_12" ], "heading": "E. Ablation Study", "publication_ref": [], "table_ref": [], "text": "In our experiment, the BEC is obtained by optimizing the generator and the discriminator. To investigate whether the proposed generator and discriminator are effective, we design three variants of the proposed model and repeat ten times the 5-fold cross validations. (1) MSETD without hierarchical transformer (MSETD w/o HT). We removed the TDS and TUS, and only kept one SeTe block in the generator. (2) MSETD without SeTe blocks (MSETD w/o SeTe). In this case, we replace the SeTe with conventional 1D convolution with a kerner size of 1 × 3. (3) MSETD without multiresolution diffusive transformer (MSETD w/o MDT). We remove the downsampling operation in the discriminator and keep the D 1 . For each variant, we compute the ACCs, AUCs, SENs, and SPEs. The results are shown in Fig. 12. It can be observed that removing the hierarchical structure greatly reduces the characteristics that are correlated with MCI." }, { "figure_ref": [ "fig_12", "fig_14" ], "heading": "V. DISCUSSION", "publication_ref": [ "b45", "b46", "b47", "b48", "b49", "b50", "b51", "b52", "b53", "b54" ], "table_ref": [], "text": "The proposed model can generate BEC from 4D fMRI in an end-to-end manner for MCI analysis. The ROI mask of the AAL90 atlas helps to parcellate the whole 3D volume into 90 ROIs at each time point. This operation contains many linear interpolations and ignores voxels at the boundaries of adjacent As the denoising process needs thousands of steps to approach clean data, we introduce the adversarial strategy to speed the denoising process. As displayed in Fig. 12, when removing the adversarial strategy (MSETD w/o MDT), the classification performance shows a significant decrease. Also, the hierarchical structure and the SeTe block in the generator both ensure good generation results because they focus on the multi-scale spatiotemporal features and thus enhance the denoising quality. The altered effective connections detected are important for the three scenarios. These connections are partly correlated, which is essential for discovering MCI-related biomarkers. We focus on the same altered connections between NC vs. EMCI and NC vs. LMCI. The top 12 MCI-related effective connections are shown in Table . II. The enhanced and diminished effective connection-related ROIs are identified in previous studies [46], [47], [48]. For example, the HIP has characteristics of decreasing volume and diminishing connection strength as the disease progresses [49], [50]. Also, the AMYG has been reported to process both sensory information and punishment/reward-related learning memory [51]. Disruption of AMYG-related connections can bring cognitive decline [52].\nThe ANG can correlate visual information with language expression. Patients with MCI lose ANG-related connections and cannot read the visual signals [53]. We display two examples for visualizing the effective connection-strength-changing process. As shown in Fig. 14, the effective connection from SOG.R to PCG.R is becoming weaker as NC progresses to LMCI. This perhaps results in the memory problem and is consistent with clinical works [54], [55]. The effective connection from HIP.L to MOG.R becomes progressively stronger during disease progression.\nThe proposed model still has two limitations, as follows: One is that this work ignores the causal dynamic connections between paired brain regions. The dynamic characteristics are indicative of cognitive and emotional brain activities, which play an important role in detecting the early stage of AD and understanding the pathological mechanisms. In the next work, we will explore the time delay properties of fMRI among brain regions to bridge the dynamic causal connections with biological interpretability. Another is that the input data only concentrates on the fMRI while ignoring other complementary information. Since diffusion tensor imaging (DTI) can characterize the microstructural information, it can enhance the BEC's construction performance and make biological analyses. In the future, we will utilize the GCN and combine it with fMRI to extract complementary information for effective connectivity construction." }, { "figure_ref": [], "heading": "VI. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a multi-resolution spatiotemporal enhanced transformer denoising (MSETD) network to estimate the brain's effective connectivity from 4-dimensional fMRI in an end-to-end manner. The denoising network is based on the conditional diffusion model, which translates noise into clean time series with blurred time series. It separates the diffusion denoising process into several successive steps and leverages the generative adversarial strategy to make the generation process more high-quality, diverse, and efficient. The spatial/temporal multi-head attention mechanisms in the generator capture the global and local connectivity features at different scales for better denoising quality improvement. The multi-resolution diffusive transformer discriminator captures the phase patterns at different scales and guides the generation process to approximate empirical samples. Results from the ADNI datasets prove the feasibility and efficiency of the proposed model. The proposed model not only achieves superior prediction performance compared with other shallow and deep-learning methods but also identifies MCI-related causal connections for better understanding pathological deterioration and discovering potential MCI biomarkers." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the British Heart Foundation Accelerator Award, UK (AA\\18\\3\\34220), in part by Medical Research Council Confidence in Concept Award, UK (MC PC 17171), in part by Biotechnology and Biological Sciences Research Council, UK (RM32G0178B8)." } ]
Effective connectivity can describe the causal patterns among brain regions. These patterns have the potential to reveal the pathological mechanism and promote early diagnosis and effective drug development for cognitive disease. However, the current studies mainly focus on using empirical functional time series to calculate effective connections, which may not comprehensively capture the complex causal relationships between brain regions. In this paper, a novel Multi-resolution Spatiotemporal Enhanced Transformer Denoising (MSETD) network with an adversarially functional diffusion model is proposed to map functional magnetic resonance imaging (fMRI) into effective connectivity for mild cognitive impairment (MCI) analysis. To be specific, the denoising framework leverages a conditional diffusion process that progressively translates the noise and conditioning fMRI to effective connectivity in an end-to-end manner. To ensure reverse diffusion quality and diversity, the multiresolution enhanced transformer generator is designed to extract local and global spatiotemporal features. Furthermore, a multiscale diffusive transformer discriminator is devised to capture the temporal patterns at different scales for generation stability. Evaluations of the ADNI datasets demonstrate the feasibility and efficacy of the proposed model. The proposed model not only achieves superior prediction performance compared with other competing methods but also identifies MCI-related causal connections that are consistent with clinical studies.
Multi-resolution Spatiotemporal Enhanced Transformer Denoising with Functional Diffusive GANs for Constructing Brain Effective Connectivity in MCI analysis
[ { "figure_caption": "Fig. 2 .2Fig. 2. The structure of the multi-channel adaptor. The inputs are the noisy sample Ft and the rough sample X, and the output is the fused sample. The three samples share the same dimension.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 .3Fig. 3. The detailed structure of the SeTe block. The input and output share the same dimension. It passes successively through the spatial multi-head attention module and the temporal multi-head attention module.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "11 )11Next, the 1×3 convolutional kernel is applied for each ROI to extract local features. Finally, FFN is used to adjust temporal features among different channels. The output of every subnetwork has the same size, N × d. The output of the MCA module is F M CA t . The TDS is used to halve the feature dimension and add the channels. Taking the first TDS as an example, for each ROI feature, we apply a 1D convolutional kernel with the size 1×3 to extract local temporal features, and through C convolutional kernels with a step size of 2, each ROI feature is translated to a sequence of vectors with the length d/2. The final output F T DS t has the size C × N × d/2.", "figure_data": "", "figure_id": "fig_2", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. The structure of the BEC estimator. The outputs are a denoised sample and an asymmetric brain connectivity matrix.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "size, C × N × d/2. Specifically, the SMA splits the input F SeT e t into several parallel parts and applies MSA to each of them. We denote H as the head number. The detailed computation of SMA is expressed below:", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Qualitative comparison of brain networks generated by different methods. The left matrix is calculated by the empirical method, and the right matrix is calculated by the proposed model.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The spatial distribution of the top nine ROIs in three scenarios using both axial and sagittal views.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The generated brain effective connectivities at NC, EMCI, and LMCI, respectively. The arcs with arrows are the effective connection, and the colors have no meaning. The circles on the outside represent the brain regions.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. The altered brain effective connectivities from NC to EMCI, from NC to LMCI, and from EMCI to LMCI, respectively. The top row represents the enhanced connections, and the bottom row represents the diminished connections.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. The top 10 enhanced and diminished effective connections from NC to EMCI.", "figure_data": "", "figure_id": "fig_9", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 10 .10Fig. 10. The top 10 enhanced and diminished effective connections from NC LMCI.", "figure_data": "", "figure_id": "fig_10", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "Fig. 11 .11Fig. 11. The top 10 enhanced and diminished effective connections from EMCI to LMCI.", "figure_data": "", "figure_id": "fig_11", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. The impact of the proposed generator and discriminator on the model's performance.", "figure_data": "", "figure_id": "fig_12", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 13 .13Fig. 13. The influence of SeTe structure on the model's classification performance.", "figure_data": "", "figure_id": "fig_13", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Fig. 14 .14Fig. 14. Visualization of ROI-based BOLD signals from two effective connections. The red arrow is the causal direction. The top row is the diminished connection, and the bottom row is the enhanced connection.", "figure_data": "", "figure_id": "fig_14", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "Classification results of the generated brain networks using different methods (%). .58 86.58 85.29 85.29 85.29 87.65 87.65 87.65 87.58 87.58 87.58 94.37 94.37 94.37 95.08 93.83 93.83 93.83 95.95 95.95 95.95 92.25 92.25 92.25 91.80 92.65 92.65 92.65 93.44 93.44 93.44", "figure_data": "MethodsClassifiersNC vs. EMCINC vs. LMCIEMCI vs. LMCIACCSENSPEAUCACCSENSPEAUCACCSENSPEAUCEmpirical69.13 73.53 65.43 72.0476.76 75.41 77.78 77.4376.74 75.41 77.94 74.78GCCA[25]75.17 76.47 74.07 76.6283.80 83.61 83.95 83.0482.17 83.61 80.88 82.91EC-GAN[30]SVM83.22 82.35 83.95 83.7990.14 90.16 90.12 90.4587.60 85.25 89.71 88.09STGCM[32]84.56 85.29 83.95 87.1392.25 91.80 92.59 92.0991.47 93.44 89.71 91.13Ours85.23 85.23 85.23 86.76 86.76 86.76 83.95 87.20 87.20 87.20 93.66 93.66 93.66 93.44 93.44 93.44 93.83 93.83 93.83 94.88 94.88 94.8891.47 91.80 91.18 91.18 91.18 92.53 92.53 92.53Empirical70.47 72.06 69.14 72.4078.17 78.69 77.78 78.8777.52 80.33 75.00 79.65GCCA[25]76.51 75.00 77.78 78.2383.10 83.61 82.72 85.0082.95 81.97 83.82 83.92EC-GAN[30]BrainnetCNN83.89 83.82 83.95 86.0990.85 90.16 91.36 89.6088.37 86.89 89.71 90.50STGCM[32]85.23 83.82 86.42 85.7893.66 95.08 92.59 94.2991.47 93.44 89.71 91.13Ours86.58 86", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" } ]
Qiankun Zuo; Chi-Man Pun; Yudong Zhang; Hongfei Wang; Jin Hong
[ { "authors": "A T Reid; D B Headley; R D Mill; R Sanchez-Romero; L Q Uddin; D Marinazzo; D J Lurie; P A Valdés-Sosa; S J Hanson; B B ", "journal": "Nature neuroscience", "ref_id": "b0", "title": "Advancing functional connectivity research from association to causation", "year": "2019" }, { "authors": "A Avena-Koenigsberger; B Misic; O Sporns", "journal": "Nature reviews neuroscience", "ref_id": "b1", "title": "Communication dynamics in complex brain networks", "year": "2018" }, { "authors": "S Rupprechter; L Romaniuk; P Series; Y Hirose; E Hawkins; A.-L Sandu; G D Waiter; C J Mcneil; X Shen; M A Harris", "journal": "Brain", "ref_id": "b2", "title": "Blunted medial prefrontal cortico-limbic reward-related effective connectivity and depression", "year": "2020" }, { "authors": "M Mijalkov; G Volpe; J B Pereira", "journal": "Cerebral cortex", "ref_id": "b3", "title": "Directed brain connectivity identifies widespread functional network abnormalities in parkinson's disease", "year": "2022" }, { "authors": "B M Hampstead; M Khoshnoodi; W Yan; G Deshpande; K Sathian", "journal": "Neuroimage", "ref_id": "b4", "title": "Patterns of effective connectivity during memory encoding and retrieval differ between patients with mild cognitive impairment and healthy older adults", "year": "2016" }, { "authors": "S Sami; N Williams; L E Hughes; T E Cope; T Rittman; I T Coyle-Gilchrist; R N Henson; J B Rowe", "journal": "Brain", "ref_id": "b5", "title": "Neurophysiological signatures of alzheimer's disease and frontotemporal lobar degeneration: pathology versus phenotype", "year": "2018" }, { "authors": "Y Shi; H.-I Suk; Y Gao; S.-W Lee; D Shen", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b6", "title": "Leveraging coupled interaction for multimodal alzheimer's disease diagnosis", "year": "2019" }, { "authors": "M Scherr; L Utz; M Tahmasian; L Pasquini; M J Grothe; J P Rauschecker; T Grimmer; A Drzezga; C Sorg; V Riedl", "journal": "Human brain mapping", "ref_id": "b7", "title": "Effective connectivity in the default mode network is distinctively disrupted in alzheimer's disease-a simultaneous resting-state fdg-pet/fmri study", "year": "2021" }, { "authors": "B Ibrahim; S Suppiah; N Ibrahim; M Mohamad; H A Hassan; N S Nasser; M I Saripan", "journal": "Human Brain Mapping", "ref_id": "b8", "title": "Diagnostic power of resting-state fmri for detection of network connectivity in alzheimer's disease and mild cognitive impairment: A systematic review", "year": "2021" }, { "authors": "W Yin; L Li; F.-X Wu", "journal": "Neurocomputing", "ref_id": "b9", "title": "Deep learning for brain disorder diagnosis based on fmri images", "year": "2022" }, { "authors": "Q Zuo; L Lu; L Wang; J Zuo; T Ouyang", "journal": "Frontiers in Neuroscience", "ref_id": "b10", "title": "Constructing brain functional network by adversarial temporal-spatial aligned transformer for early ad analysis", "year": "2022" }, { "authors": "K J Friston; J Kahan; B Biswal; A Razi", "journal": "Neuroimage", "ref_id": "b11", "title": "A dcm for resting state fmri", "year": "2014" }, { "authors": "J Liu; J Ji; X Jia; A Zhang", "journal": "IEEE journal of biomedical and health informatics", "ref_id": "b12", "title": "Learning brain effective connectivity network structure using ant colony optimization combining with voxel activation information", "year": "2019" }, { "authors": "N Xu; R N Spreng; P C Doerschuk", "journal": "Frontiers in neuroscience", "ref_id": "b13", "title": "Initial validation for the estimation of resting-state fmri effective connectivity by a generalization of the correlation approach", "year": "2017" }, { "authors": "A Al-Ezzi; N Yahya; N Kamel; I Faye; K Alsaih; E Gunaseli", "journal": "IEEE Access", "ref_id": "b14", "title": "Severity assessment of social anxiety disorder using deep learning models on brain effective connectivity", "year": "2021" }, { "authors": "S Bagherzadeh; M S Shahabi; A Shalbaf", "journal": "Computers in Biology and Medicine", "ref_id": "b15", "title": "Detection of schizophrenia using hybrid of deep learning and brain effective connectivity image from electroencephalogram signal", "year": "2022" }, { "authors": "J Liu; J Ji; G Xun; A Zhang", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b16", "title": "Inferring effective connectivity networks from fmri time series with a temporal entropy-score", "year": "2021" }, { "authors": "L Zhang; G Huang; Z Liang; L Li; Z Zhang", "journal": "Neurocomputing", "ref_id": "b17", "title": "Estimating scalefree dynamic effective connectivity networks from fmri using group-wise spatial-temporal regularizations", "year": "2022" }, { "authors": "I Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A Courville; Y Bengio", "journal": "Communications of the ACM", "ref_id": "b18", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b19", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Z Dorjsembe; S Odonchimed; F Xiao", "journal": "", "ref_id": "b20", "title": "Three-dimensional medical image synthesis with denoising diffusion probabilistic models", "year": "2022" }, { "authors": "Y Xie; Q Li", "journal": "Springer", "ref_id": "b21", "title": "Measurement-conditioned denoising diffusion probabilistic model for under-sampled medical image reconstruction", "year": "2022" }, { "authors": "C Saharia; J Ho; W Chan; T Salimans; D J Fleet; M Norouzi", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b22", "title": "Image super-resolution via iterative refinement", "year": "2022" }, { "authors": "H.-J Park; K J Friston; C Pae; B Park; A Razi", "journal": "NeuroImage", "ref_id": "b23", "title": "Dynamic effective connectivity in resting state fmri", "year": "2018" }, { "authors": "A K Seth", "journal": "Journal of neuroscience methods", "ref_id": "b24", "title": "A matlab toolbox for granger causal connectivity analysis", "year": "2010" }, { "authors": "A K Seth; A B Barrett; L Barnett", "journal": "Journal of Neuroscience", "ref_id": "b25", "title": "Granger causality analysis in neuroscience and neuroimaging", "year": "2015" }, { "authors": "A M Dsouza; A Z Abidin; L Leistritz; A Wismüller", "journal": "Journal of neuroscience methods", "ref_id": "b26", "title": "Exploring connectivity with large-scale granger causality on resting-state functional mri", "year": "2017" }, { "authors": "N Talebi; A M Nasrabadi; I Mohammad-Rezazadeh; R Coben", "journal": "IEEE transactions on medical imaging", "ref_id": "b27", "title": "Ncreann: Nonlinear causal relationship estimation by artificial neural network; applied for autism connectivity study", "year": "2019" }, { "authors": "Z Abbasvandi; A M Nasrabadi", "journal": "Computers in biology and medicine", "ref_id": "b28", "title": "A self-organized recurrent neural network for estimating the effective connectivity and its application to eeg data", "year": "2019" }, { "authors": "J Liu; J Ji; G Xun; L Yao; M Huai; A Zhang", "journal": "", "ref_id": "b29", "title": "Ec-gan: inferring brain effective connectivity via generative adversarial networks", "year": "2020" }, { "authors": "J Ji; J Liu; L Han; F Wang", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b30", "title": "Estimating effective connectivity by recurrent generative adversarial networks", "year": "2021" }, { "authors": "A Zou; J Ji; M Lei; J Liu; Y Song", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b31", "title": "Exploring brain effective connectivity networks through spatiotemporal graph convolutional models", "year": "2022" }, { "authors": "S You; B Lei; S Wang; C K Chui; A C Cheung; Y Liu; M Gan; G Wu; Y Shen", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b32", "title": "Fine perceptive gans for brain mr image superresolution in wavelet domain", "year": "2022" }, { "authors": "S Hu; B Lei; S Wang; Y Wang; Z Feng; Y Shen", "journal": "IEEE Transactions on Medical Imaging", "ref_id": "b33", "title": "Bidirectional mapping generative adversarial networks for brain mr to pet synthesis", "year": "2021" }, { "authors": "J Hong; Y.-D Zhang; W Chen", "journal": "Knowledge-Based Systems", "ref_id": "b34", "title": "Source-free unsupervised domain adaptation for cross-modality abdominal multi-organ segmentation", "year": "2022" }, { "authors": "M Mirza; S Osindero", "journal": "", "ref_id": "b35", "title": "Conditional generative adversarial nets", "year": "2014" }, { "authors": "Y Wang; B Yu; L Wang; C Zu; D S Lalush; W Lin; X Wu; J Zhou; D Shen; L Zhou", "journal": "Neuroimage", "ref_id": "b36", "title": "3d conditional generative adversarial networks for high-quality pet image estimation at low dose", "year": "2018" }, { "authors": "A Lugmayr; M Danelljan; A Romero; F Yu; R Timofte; L Van Gool", "journal": "", "ref_id": "b37", "title": "Repaint: Inpainting using denoising diffusion probabilistic models", "year": "2022" }, { "authors": "N G Nair; K Mei; V M Patel", "journal": "", "ref_id": "b38", "title": "At-ddpm: Restoring faces degraded by atmospheric turbulence using denoising diffusion probabilistic models", "year": "2023" }, { "authors": "O Özdenizci; R Legenstein", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b39", "title": "Restoring vision in adverse weather conditions with patch-based denoising diffusion models", "year": "2023" }, { "authors": "N Tzourio-Mazoyer; B Landeau; D Papathanassiou; F Crivello; O Etard; N Delcroix; B Mazoyer; M Joliot", "journal": "Neuroimage", "ref_id": "b40", "title": "Automated anatomical labeling of activations in spm using a macroscopic anatomical parcellation of the mni mri single-subject brain", "year": "2002" }, { "authors": "B Lei; Y Zhu; S Yu; H Hu; Y Xu; G Yue; T Wang; C Zhao; S Chen; P Yang", "journal": "Pattern Recognition", "ref_id": "b41", "title": "Multi-scale enhanced graph convolutional network for mild cognitive impairment detection", "year": "2023" }, { "authors": "J Wang; X Wang; M Xia; X Liao; A Evans; Y He", "journal": "Frontiers in human neuroscience", "ref_id": "b42", "title": "Gretna: a graph theoretical network analysis toolbox for imaging connectomics", "year": "2015" }, { "authors": "J Kawahara; C J Brown; S P Miller; B G Booth; V Chau; R E Grunau; J G Zwicker; G Hamarneh", "journal": "NeuroImage", "ref_id": "b43", "title": "Brainnetcnn: Convolutional neural networks for brain networks; towards predicting neurodevelopment", "year": "2017" }, { "authors": "M Xia; J Wang; Y He", "journal": "PloS one", "ref_id": "b44", "title": "Brainnet viewer: a network visualization tool for human brain connectomics", "year": "2013" }, { "authors": "S.-Y Lin; C.-P Lin; T.-J Hsieh; C.-F Lin; S.-H Chen; Y.-P Chao; Y.-S Chen; C.-C Hsu; L.-W Kuo", "journal": "NeuroImage: Clinical", "ref_id": "b45", "title": "Multiparametric graph theoretical analysis reveals altered structural and functional network topology in alzheimer's disease", "year": "2019" }, { "authors": "B Lei; N Cheng; A F Frangi; E.-L Tan; J Cao; P Yang; A Elazab; J Du; Y Xu; T Wang", "journal": "Medical image analysis", "ref_id": "b46", "title": "Self-calibrated brain network estimation and joint non-convex multi-task learning for identification of early alzheimer's disease", "year": "2020" }, { "authors": "B Chen", "journal": "Aging Clinical and Experimental Research", "ref_id": "b47", "title": "Abnormal cortical regions and subsystems in whole brain functional connectivity of mild cognitive impairment and alzheimer's disease: a preliminary study", "year": "2021" }, { "authors": "N Schuff; N Woerner; L Boreta; T Kornfield; L Shaw; J Trojanowski; P Thompson; C Jack; M Weiner; A D N Initiative", "journal": "Brain", "ref_id": "b48", "title": "Mri of hippocampal volume loss in early alzheimer's disease in relation to apoe genotype and biomarkers", "year": "2009" }, { "authors": "M Tahmasian; L Pasquini; M Scherr; C Meng; S Förster; S M Bratec; K Shi; I Yakushev; M Schwaiger; T Grimmer", "journal": "Neurology", "ref_id": "b49", "title": "The lower hippocampus global connectivity, the higher its local metabolism in alzheimer disease", "year": "2015" }, { "authors": "T Yang; K Yu; X Zhang; X Xiao; X Chen; Y Fu; B Li", "journal": "Nature", "ref_id": "b50", "title": "Plastic and stimulus-specific coding of salient events in the central amygdala", "year": "2023" }, { "authors": "M Ortner; L Pasquini; M Barat; P Alexopoulos; T Grimmer; S Förster; J Diehl-Schmid; A Kurz; H Förstl; C Zimmer", "journal": "Frontiers in neurology", "ref_id": "b51", "title": "Progressively disrupted intrinsic functional connectivity of basolateral amygdala in very early alzheimer's disease", "year": "2016" }, { "authors": "E.-S Lee; K Yoo; Y.-B Lee; J Chung; J.-E Lim; B Yoon; Y Jeong", "journal": "Alzheimer Disease & Associated Disorders", "ref_id": "b52", "title": "Default mode network functional connectivity in early and late mild cognitive impairment", "year": "2016" }, { "authors": "J Xue; H Guo; Y Gao; X Wang; H Cui; Z Chen; B Wang; J Xiang", "journal": "Frontiers in Aging Neuroscience", "ref_id": "b53", "title": "Altered directed functional connectivity of the hippocampus in mild cognitive impairment and alzheimer's disease: a resting-state fmri study", "year": "2019" }, { "authors": "D Berron; D Van Westen; R Ossenkoppele; O Strandberg; O Hansson", "journal": "Brain", "ref_id": "b54", "title": "Medial temporal lobe connectivity and its associations with cognition in early alzheimer's disease", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 106.45, 604.72, 193.58, 30.2 ], "formula_id": "formula_0", "formula_text": "q (F 1:T | F 0 ) = T t=1 q (F t | F t-1 )(1)" }, { "formula_coordinates": [ 3, 84.06, 646.4, 215.96, 9.68 ], "formula_id": "formula_1", "formula_text": "q (F t | F t-1 ) = N F t ; 1 -β t F t-1 , β t I(2)" }, { "formula_coordinates": [ 3, 359.95, 65.95, 203.09, 30.2 ], "formula_id": "formula_2", "formula_text": "p θ (F 0:T ) = p (F T ) T t=1 p θ (F t-1 | F t )(3)" }, { "formula_coordinates": [ 3, 348.21, 103.28, 214.82, 12.69 ], "formula_id": "formula_3", "formula_text": "p θ (F t-1 | F t ) = N F t-1 ; µ θ (F t , t) , σ 2 t I(4)" }, { "formula_coordinates": [ 3, 343.51, 189.89, 219.53, 23.23 ], "formula_id": "formula_4", "formula_text": "µ θ (F t , t) = 1 √ α t F t - β t √ 1 -ᾱt θ (F t , t)(5)" }, { "formula_coordinates": [ 3, 397.35, 221.82, 165.69, 22.31 ], "formula_id": "formula_5", "formula_text": "σ 2 t = (1 -ᾱt-1 ) (1 -ᾱt ) β t(6)" }, { "formula_coordinates": [ 3, 335.61, 251.45, 118.19, 14.11 ], "formula_id": "formula_6", "formula_text": "α t = 1 -β t , ᾱt = t i=1 α i ." }, { "formula_coordinates": [ 3, 346.42, 294.89, 216.61, 17.63 ], "formula_id": "formula_7", "formula_text": "E t,F0, -θ √ ᾱt F 0 + √ 1 -ᾱt , t 2(7)" }, { "formula_coordinates": [ 4, 48.96, 531.94, 251.06, 36.8 ], "formula_id": "formula_8", "formula_text": "F t-s from F t . Firstly, a generator G θ (F t , X, t) is utilized to predict the initial sample F[t/s] 0 , then F t-s is sampled using the posterior distribution q(F t-s |F t , F[t/s] 0" }, { "formula_coordinates": [ 4, 48.96, 650.9, 251.06, 25.36 ], "formula_id": "formula_9", "formula_text": "p θ (F t-s | F t , X) := q F t-s | F t , F[t/s] 0 = G [t/s] θ (F t , X, t)(8)" }, { "formula_coordinates": [ 4, 48.96, 690.58, 251.3, 58.15 ], "formula_id": "formula_10", "formula_text": "q F t-s | F t , F[t/s] 0 = q F t | F t-s , F[t/s] 0 q F t-s | F[t/s] 0 q F t | F[t/s] 0 (9) The sampling probability from p θ (F t-s | F t , X) is defined" }, { "formula_coordinates": [ 4, 316.96, 601.78, 246.08, 29.02 ], "formula_id": "formula_11", "formula_text": "F t-s = √ α t (1 -ᾱt-s ) 1 -ᾱt F t + √ ᾱt-s β t 1 -ᾱt G [t/s] θ + β t (10)" }, { "formula_coordinates": [ 5, 116.46, 426.68, 171.12, 25.24 ], "formula_id": "formula_12", "formula_text": "E t = softmax E t X T √ d E t(" }, { "formula_coordinates": [ 5, 350.79, 208.85, 64.04, 12.69 ], "formula_id": "formula_13", "formula_text": "F SeT e t = SMA" }, { "formula_coordinates": [ 5, 335.45, 325.08, 223.44, 13.75 ], "formula_id": "formula_14", "formula_text": "F SeT e t = {F SeT e(1) t , F SeT e(2) t , • • • , F SeT e(d) t } (14" }, { "formula_coordinates": [ 5, 558.89, 328.53, 4.15, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 319.98, 351.96, 243.06, 90.48 ], "formula_id": "formula_16", "formula_text": "t = Att F SeT e(i) t W Q h , F SeT e(i) t W K h , F SeT e(i) t W V h = Att(Q i h , K i h , V i h ) = softmax Q i h (K i h ) T C/H V i h (15) F SeT e t = { FSeT e(1) t , FSeT e(2) t , • • • , FSeT e(d) t }(16)" }, { "formula_coordinates": [ 5, 311.98, 463.84, 251.06, 25.7 ], "formula_id": "formula_17", "formula_text": "W Q h , W K h , W V h ∈ R C×(C/H) project each part of FSeT e t" }, { "formula_coordinates": [ 5, 393.09, 690.95, 169.95, 13.06 ], "formula_id": "formula_18", "formula_text": "F0 = F M CA t + F T U S t (17)" }, { "formula_coordinates": [ 6, 132.86, 85.3, 167.16, 30.32 ], "formula_id": "formula_19", "formula_text": "z i = N j=1 A ji z j + n i(18)" }, { "formula_coordinates": [ 6, 55.5, 537.99, 244.53, 56.68 ], "formula_id": "formula_20", "formula_text": "L SEG = E t,q(Ft|F0,X),p θ (Ft-s|Ft,X) -log D θ F t-s (19) L M DD =E t,q(Ft|F0,X) E q(Ft-s|Ft,X) [-log (D θ (F t-s ))] + E p θ (Ft-s|Ft,X) -log 1 -D θ F t-s }(20" }, { "formula_coordinates": [ 6, 100.67, 639.26, 199.35, 10.66 ], "formula_id": "formula_21", "formula_text": "L REC = E p θ (F0|Fs,X) [||F 0 -F 0 || 1 ](21)" }, { "formula_coordinates": [ 6, 120.06, 720.54, 179.97, 30.32 ], "formula_id": "formula_22", "formula_text": "L SCP = γ || N i=1,j=1 A i,j ||(22)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b11", "b17", "b31", "b19", "b47", "b20", "b41", "b44", "b7", "b6", "b14", "b45", "b43", "b45", "b8", "b10", "b43", "b45", "b8", "b43", "b13", "b45", "b16" ], "table_ref": [], "text": "In many real-world applications, including social networks, chemical molecules, and citation networks, data can be naturally modeled as graphs. Recently, the emerging Graph Neural Networks (GNNs) (Hamilton, Ying, and Leskovec 2017;Kipf and Welling 2016;Veličković et al. 2017;Wu et al. 2021bWu et al. , 2022b;;Liu, Gao, and Ji 2020;Zhou et al. 2020;Liu et al. 2022;Xia et al. 2022) have demonstrated their powerful capability to handle various graph-related tasks (Zhang and Chen 2018;Fan et al. 2019;Errica et al. 2019;Wu et al. 2021a). However, practical deployments of GNNs in the industry are still less popular due to inference efficiency and scalability challenges incurred by data dependency (Jia et al. 2020;Zhang et al. 2021). In other words, GNNs generally rely on message passing to aggregate features from the neighborhood, but fetching and aggregating these nodes during inference can burden latency-sensitive applications. In contrast, Multi-Layer Perceptrons (MLPs) involve no data dependence between pairs of nodes and infer much faster than GNNs, but often with less competitive performance. Motivated by these complementary strengths and weaknesses, one solution to reduce their gaps is to perform GNN-to-MLP knowledge distillation (Yang, Liu, and Shi 2021;Zhang et al. 2021;Ghorbani et al. 2021;Gou et al. 2021), which extracts knowledge from a well-trained teacher GNN and then distills it into a student MLP with the same network architecture (e.g., layer number and layer size).\nMost of the existing GNN-to-MLP distillation methods (Yang, Liu, and Shi 2021;Zhang et al. 2021;Ghorbani et al. 2021) focus on special designs on either student MLPs or teacher GNNs, but default to distill knowledge in a node-tonode fashion. For example, CPF (Yang, Liu, and Shi 2021) combines Label Propagation (LP) (Iscen et al. 2019) into the student MLPs to improve classification performance and thus still suffers from the neighborhood-fetching latency caused by label propagation, which defeats the original intention of MLPs to be inference-efficient. In contrast, GLNN (Zhang et al. 2021) directly distills knowledge from arbitrary GNNs to vanilla MLPs with the same network architecture. While the distilled MLPs of GLNN can be greatly improved by employing more powerful teacher GNNs, the process of distillation usually entails a loss of information (Kim et al. 2021), which may lead to sub-optimal student MLPs. In this paper, we look away from specific instantiations of teacher GNNs and student MLPs, but rather explore two fundamental questions: (1) Can existing GNN-to-MLP distillation ensure that sufficient knowledge is distilled from teacher GNNs to student MLPs? If not, (2) Which knowledge patterns of GNNs are more likely to be distilled into student MLPs?\nPresent Work. In this paper, we identify a potential information drowning problem for existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation. To illustrate this, we first factorize GNN knowledge into low-and high-frequency components using graph signal processing theory in the spectral domain and then derive their correspondence in the spatial domain. Moreover, we conduct a comprehensive investigation of the arXiv:2305.10758v2 [cs.LG] 4 Jun 2023 roles played by low-and high-frequency components in the distillation process and describe in detail what information drowning represents, how it arises, what impact it has, and how to deal with it. Extensive experiments have shown that high-frequency and low-frequency knowledge are complementary to each other, and they can further improve performance on top of each other. In this paper, we propose a novel Full-Frequency GNN-to-MLP (FF-G2M) distillation framework, which extracts both low-and high-frequency knowledge from teacher GNNs and injects it into student MLPs." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b5", "b5", "b17", "b11", "b31", "b19", "b40", "b47", "b18", "b25", "b15", "b46", "b42", "b43", "b45", "b39" ], "table_ref": [], "text": "Graph Neural Networks (GNNs). The early GNNs define graph convolution kernels in the spectral domain (Bruna et al. 2013;Defferrard, Bresson, and Vandergheynst 2016) based on the graph signal processing theory, known as ChebyNet (Defferrard, Bresson, and Vandergheynst 2016) and Graph Convolutional Networks (GCN) (Kipf and Welling 2016). The later GNNs directly define updating rules in the spatial space and focus on the design of neighborhood aggregation functions. For instance, GraphSAGE (Hamilton, Ying, and Leskovec 2017) employs a generalized induction framework to generate embeddings for previously unseen nodes by aggregating known node features. Moreover, GAT (Veličković et al. 2017) introduces the selfattention mechanism to assign different importance scores to neighbors for better information aggregation. We refer interested readers to the surveys (Liu, Gao, and Ji 2020;Wu et al. 2020;Zhou et al. 2020) for more GNN architectures. Graph Knowledge Distillation. Despite the great progress, most existing GNNs share the de facto design that relies on message passing to aggregate features from neighborhoods, which may be one major source of latency in GNN inference. To address this problem, there are previous works that attempt to distill knowledge from large teacher GNNs to smaller student GNNs, termed as GNN-to-GNN (Lassance et al. 2020;Wu et al. 2022a;Ren et al. 2021;Joshi et al. 2021). For example, the student model in RDD (Zhang et al. 2020) and TinyGNN (Yan et al. 2020) is a GNN with fewer parameters but not necessarily fewer layers than the teacher GNN, which makes both designs still suffer from the neighborhood-fetching latency caused by data dependency.\nTo enjoy the low-latency of MLPs and high-accuracy of GNNs, the other branch of graph knowledge distillation is to directly distill from large teacher GNNs to student MLPs, termed as GNN-to-MLP. The existing work on GNN-to-MLP distillation can be mainly divided into two branches: student MLPs-focused and teacher GNNs-focused. The former branch, such as CPF (Yang, Liu, and Shi 2021), directly improves student MLPs by adopting deeper and wider network architectures or incorporating label propagation, both of which burden the inference latency. The other branch, such as GLNN (Zhang et al. 2021), distills knowledge from teacher GNNs to vanilla MLPs with the same network architectures but without other computing-consuming operations; while the performance of their distilled MLPs can be indirectly improved by employing more powerful GNNs, they still cannot match their corresponding teacher GNNs. More-over, PGKD (Wu et al. 2023) proposes a Prototype-Guided Knowledge Distillation (PGKD) method, which does not require graph edges yet learns structure-aware MLPs. In this paper, we aim to develop a model-agnostic GNN-to-MLP distillation that is applicable to various GNN architectures." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b12", "b43", "b45", "b8", "b30", "b4", "b27", "b26", "b33" ], "table_ref": [], "text": "Notions. Let G = (V, E, X) be an attributed graph, where V is the set of N nodes with features X = [x 1 , x 2 , • • • , x N ] ∈ R N ×d and E denotes the edge set. Each node v i ∈ V is associated with a d-dimensional features vector x i , and each edge e i,j ∈ E denotes a connection between node v i and v j . The graph structure is denoted by an adjacency matrix A ∈ [0, 1] N ×N with A i,j = 1 if e i,j ∈ E and A i,j = 0 if e i,j / ∈ E. Consider a semi-supervised node classification task where only a subset of node V L with labels Y L are known, we denote the labeled set as D L = (V L , Y L ) and unlabeled set as D U = (V U , Y U ), where V U = V\\V L . The node classification aims to learn a mapping Φ : V → Y with labeled data, so that it can be used to infer the labels Y U . Graph Neural Networks (GNNs). Most existing GNNs rely on message passing to aggregate features from the neighborhood. A general GNN framework consists of two key computations for each node v i : (1) AGGREGATE: aggregating messages from neighborhood N i ; (2) UPDATE: updating node representation from its representation in the previous layer and aggregated messages. Considering a Llayer GNN, the formulation of the l-th layer is as follows\nm (l) i = AGGREGATE (l) h (l-1) j : vj ∈ Ni h (l) i = UPDATE (l) h (l-1) i , m (l) i (1)\nwhere 1 ≤ l ≤ L, h (0) i = x i is the input feature, and h (l) i is the representation of node v i in the l-th layer. Multi-Layer Perceptrons (MLPs). To achieve efficient inference, the vanilla MLPs (with the same network architecture as the teacher GNNs) are used as the student model by default in this paper. For a L-layer MLP, the l-th layer is composed of a linear transformation, an activation function σ = ReLu(•), and a dropout function Dropout(•), as\nz (l) i = Dropout σ z (l-1) i W (l-1) , z (0) i = x i (2)\nwhere W (0) ∈ R d×F and W (l) ∈ R F ×F (1 ≤ l < L) are weight matrices with the hidden dimension F . In this paper, the network architecture of MLPs, such as the layer number L and layer size F , is set the same as that of teacher GNNs. GNN-to-MLP Knowledge Distillation. The knowledge distillation is first introduced in (Hinton et al. 2015) to handle mainly image data, where knowledge is transferred from a cumbersome teacher model to a simpler student model. The later works on GNN-to-MLP distillation (Yang, Liu, and Shi 2021;Zhang et al. 2021;Ghorbani et al. 2021) extend it to the graph domain by imposing KL-divergence constraint D KL (•, •) between the softmax label distributions generated by teacher GNNs and student MLPs and directly optimizing the objective function as follows\nLKD = 1 |V| i∈V DKL softmax z (L) i , softmax h (L) i (3)\nKnowledge Factorization from the Perspective of Spectral and Spatial Domain\nIn this section, we first theoretically factorize the knowledge learned by GNNs into low-and high-frequency components in the spectral domain based on graph signal processing theory (Shuman et al. 2013). The normalized graph Laplacian matrix of graph G is defined as\nL = I N -D -1 2 A D -1 2 , where A = A + I N ∈ R N ×N is an adjacency matrix with self-loop, D ∈ R N ×N is a diagonal degree matrix with D i,i = j A i,j\n, and I N denotes the identity matrix. Since L is a real symmetric matrix, it can be eigendecomposed as (Chung and Graham 1997). According to graph signal processing theory, we can directly take the eigenvector {u l } N l=1 as bases. Given signal x ∈ R d , the graph Fourier transform and inverse Fourier transform (Sandryhaila and Moura 2013;Ricaud et al. 2019) are defined as x = U ⊤ x and x = U x. Thus, the convolutional * G between the signal x and convolution kernel F can be defined as follows\nL = UΛU ⊤ , where Λ = diag ([λ 1 , λ 2 , • • • , λ N ]) with each eigenvalue λ l ∈ [0, 2] corresponding to an eigenvectors u l in U\nF * G x = U U ⊤ F ⊙ U ⊤ x = Ug θ U ⊤ x (4)\nwhere ⊙ denotes the element-wise product and g θ is a parameterized diagonal matrix. Most of the existing GNNs architectures can be regarded as a special instantiation on the convolutional kernel F (i.e., the matrix g θ ). For example, GCN-Cheby parameterizes g θ with a polynomial expansion\ng θ = K-1 k=0 α k Λ k\n, and GCN defines the convolutional kernel as g θ = I N -Λ. Considering a special convolution kernel\nF A = I N , we have F A * G x = UI N U ⊤ x = U x = x, i.e.\n, this is an identity mapping, where all information can be preserved. Next, we decompose the graph knowledge into lowfrequency and high-frequency components (Bo et al. 2021a;Wu et al. 2019) by factorizing F A = I N as follows\nFA = IN = 1 2 IN + D -1 2 A D -1 2 Low-Pass Filter F M + IN -D -1 2 A D -1 2\nHigh-pass Filter F H For a given signal x ∈ R d , e.g., node feature, we have\nF A * G x = 1 2 (F M + F H ) * G x = 1 2 (F M * G x + F H * G x) =\nx, which means that any signal x can be decomposed into the average of two components F M * G x and F H * G x." }, { "figure_ref": [], "heading": "Analysis on the Spectral Domain. The Proposition 1 states what the two components", "publication_ref": [ "b24", "b48", "b3", "b17", "b11", "b31" ], "table_ref": [], "text": "F M * G x and F H * G x represent.\nProposition 1 The convolution kernel F M works as a lowpass filter, which filters out high-frequency information, and F M * G x represents low-frequency knowledge; F H works as a high-pass filter, which filters out low-frequency information, and F H * G x represents high-frequency knowledge.\nProof 1 For a L-layer GNN, the signal x is filtered by the L-order convolution kernel\nF L M = (I N + D -1 2 A D -1 2 ) L = (2I N -L) L to output F L M * G x = U(2I N -Λ) L U ⊤ x with g L θ (λ i ) = (2 -λ i ) L .\nAs shown in Fig. 1, g L θ (λ i ) decreases monotonically in the range λ i ∈ [0, 2] and reaches g L θ (λ i = 2) = 0 at λ i = 2, which mainly amplifies the low-frequency information and filters out the high-frequency information. Similarly, the L-order convolution kernel\nF L H has g L θ (λ i ) = λ L i .\nAs shown in Fig. 1, g L θ (λ i ) increases monotonically in the of range λ i ∈ [0, 2] and reaches g L θ (λ i = 0) = 0 at λ i = 0, which mainly filters out the low-frequency information but in turn amplifies the high-frequency information.\n1 0 1 2 3 4 Eigenvalue 0.0 2.5 5.0 7.5 g ( ) 2 (2 ) 2 2 (2 ) 3 2 H M 2 M 3 M Figure 1: Eigenvalues vs. Amplitudes\nCorrespondence on the Spatial Domain. We have derived that F M * G x and F H * G x represent mainly the low-and high-frequency components of signal x, and we next can derived their correspondences in the spatial domain, as follows\nF M * G x i → x (low) i = x i + j∈Ni x j |N i ||N j | F H * G x i → x (high) i = x i - j∈Ni x j |N i ||N j | (5)\nBy the derivation in Eq. ( 5), the low-frequency knowledge F M * G x is the sum of node feature and its neighborhood features in the spatial domain. On the other hand, the highfrequency knowledge F H * G x represents the differences between the target node feature with its neighborhood features. There have recently been some novel GNN models (Bo et al. 2021b;Pei et al. 2020;Zhu et al. 2020;Chien et al. 2021) that can capture both low-and high-frequency information simultaneously or adaptively. However, in this paper, we focus on the design of distillation objective functions and do not consider indirect performance improvements by employing these more powerful but complex GNNs. Instead, we consider the most commonly used GNNs, such as GCN (Kipf and Welling 2016), GraphSAGE (Hamilton, Ying, and Leskovec 2017), and GAT (Veličković et al. 2017), all of which rely on multi-layer message passing to aggregate features of neighboring nodes that are multiple hops away, i.e., they essentially work as a low-pass filter F L M or its variants." }, { "figure_ref": [], "heading": "Roles Played by Low-and High-Frequency Knowledge during Distillation", "publication_ref": [], "table_ref": [], "text": "Rethinking the Core of Knowledge Distillation\nWe rethink the core of knowledge distillation from three shallow-to-deep perspectives to highlight our motivations.\n• Firstly, knowledge distillation enables the representations of MLPs to \"mimic\" those of GNNs as closely as possible by imposing KL-divergence constraints between their softmax distribution probabilities. However, such a Epoch mimicking (or fitting) process is inevitably accompanied by a loss of information, especially high-frequency information, which explains why the performance of student MLPs is always hard to match with that of teacher GNNs.\n• Secondly, for a neural network framework, any change in the final representations is achieved indirectly by optimizing the mapping function, i.e., the network parameters. In this sense, knowledge distillation essentially optimizes the parameter matrices {W (l) } L-1 l=0 of the student MLPs to make it functionally approximates the convolution kernel of the teacher GNNs, which makes the student MLPs also serve as a low-pass filter F L M for graph data. • Finally, the low-pass filter in the spectral domain is equivalent to neighborhood aggregation in the spatial domain as derived in Eq. ( 5), which in essence can be considered as a special use of the graph topology.\nTo explore the roles played by graph topology during GNN-to-MLP distillation, we plot the mean cosine similarity of nodes with their first-order neighbors for vanilla GCNs, vanilla MLPs, and Distilled MLPs (GLNN) on the Cora dataset in Fig. 2(a), from which we observe that the mean similarity of GCNs and GLNN gradually increases with training, while that of vanilla MLPs gradually decreases, which indicates that knowledge distillation has introduced graph topology as an inductive bias (as GCNs has done), while vanilla MLPs do not. As a result, the distilled MLPs can enjoy the benefits of topology-awareness in training but without neighborhood-fetching latency in inference." }, { "figure_ref": [], "heading": "High-Frequency Information Drowning", "publication_ref": [], "table_ref": [], "text": "Next, we discuss a potential high-frequency information drowning problem from both spectral and spatial domains, i.e., the high-frequency information of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during the process of GNN-to-MLP knowledge distillation.\nHow information drowning arises? From the perspective of spectral domain, the knowledge distillation optimizes the network parameters of the student MLPs to make it functionally approximate the convolution kernel of the teacher GNNs, i.e., F L M ≈ F L M . The information loss induced by such approximation may be inconsequential for high-amplitude low-frequency information but can be catastrophic for those high-frequency information with very low amplitude, as shown in Fig. 2(c). As a result, compared to low-frequency information, high-frequency information is more likely to be drowned by these optimization errors.\nWhat impact does information drowning have? From the perspective of spatial domain, the information drowning may lead to distilled MLPs that, despite preserving neighborhood smoothing well, can easily neglect differences between nodes, such as pairwisde distances. To illustrate this, we consider a target node v i and its two neighboring nodes v j and v k in Fig. 2(d), where they are mapped closely by GNNs. In the process of knowledge distillation, the representations of these three nodes may be mapped around the representations of teacher GNNs, i.e., they are still mapped closely with most of their low-frequency information preserved; however, the relative distances between nodes, i.e., their high-frequency information, may be drowned dramatically. For example, node v i is adjacent to node v j but far from node v k in the representation space of teacher GNNs. However, in the representation space of student MLPs, node v i becomes closer to node v k and farther from node v j .\nThe curves of the pairwise distance differences between the teacher GCNs and the student MLP in Fig. 2(b) show that common knowledge distillation (e.g., GLNN) is not good at capturing high frequency information, compared to our proposed FF-G2M. Moreover, extensive qualitative and quantitative experiments have been provided to demonstrate the harmfulness of the identified high-frequency information drowning problem in the experimental section. The detailed experimental settings, including hyperparameters and evaluation metric definitions, are available in Appendix B&E." }, { "figure_ref": [ "fig_2" ], "heading": "Full-Frequency GNN-to-MLP (FF-G2M) Knowledge Distillation", "publication_ref": [], "table_ref": [], "text": "The above discussions reached two important insights: (1) the inductive bias of graph topology plays an important role, and (2) it is mainly the low-frequency knowledge of graph data that has been distilled from the teacher GNNs to the student MLPs. Inspired by these two insights, we propose Low-Frequency Distillation (LFD) and Hign-Frequency Distillation (HFD) to fully capture the low-frequency and highfrequency knowledge learned by GNNs, respectively. An high-level overview of the proposed Full-Frequency GNNto-MLP (FF-G2M) framework is shown in Fig. 3. Full-Frequency GNN-to-MLP Distillation " }, { "figure_ref": [], "heading": "Low-Frequency (LFD)", "publication_ref": [], "table_ref": [], "text": "representations of teacher GNNs are generated by explicit message passing, so it mainly captures the lowfrequency information of the graph data as analyzed earlier. Unlike aggregating features from neighborhoods as in GNNs, we directly distill (diffuse) knowledge from teacher GNNs into the neighborhoods of student MLPs in order to better utilize topological information and low-frequency knowledge captured by GNNs, formulated as follows\nL LFD = 1 |E| i∈V j∈Ni∪i D KL σ(z (L) j /τ 1 ), σ(h (L) i /τ 1 ) (6)\nwhere τ 1 is the low-frequency distillation temperature, and σ = softmax(•) denotes an activation function." }, { "figure_ref": [], "heading": "High-Frequency Distillation (HFD)", "publication_ref": [], "table_ref": [], "text": "As derived in Eq. ( 5), the high-frequency components in the spectral domain represent the differences between node feature and its neighborhood features in the spatial domain. Inspired by this, we propose High-Frequency Distillation (HFD), a GNN knowledge objective that trains student MLPs to preserve the neighborhood pairwise differences from the representation space of teacher GNNs. The neighborhood pairwise differences around node v i are defined as the differences between the target node feature s i and its neighborhood features {s j | j ∈ N i }, which can be computed by the kernel K (s i , s j ) = |s i -s j |, where | • | denotes the element-wise absolute values. The high-frequency distillation trains the student model to mimic the neighborhood pairwise differences from the teacher GNNs via KLdivergence constraints, which can be defined as follows\nL HFD = 1 |E| i∈V j∈Ni D KL σ K z (L) i , z (L) j /τ 2 , σ K h (L) i , h (L) j /τ 2 (7)\nwhere K (•, •) denotes the element-wise absolute values, and τ 2 is the high-frequency distillation temperature." }, { "figure_ref": [], "heading": "Training Strategy", "publication_ref": [], "table_ref": [], "text": "The pseudo-code of the FF-G2M framework is summarized in Appendix C. To achieve GNN-to-MLP knowledge dis-tillation, we first pre-train the teacher GNNs with the classification loss\nL label = 1 |V L | i∈V L H y i , σ(h (L) i )\n, where H(•) denotes the cross-entropy loss and y i is the groundtruth label of node v i . Finally, the total objective function to distill the low-and high-frequency knowledge from the teacher GNNs into the student MLPs is defined as follows\nL total = λ |V L | i∈V L H y i , σ(z (L) i ) + 1-λ L LFD +L HFD\nwhere λ is the weights to balance the influence of the classification loss and two knowledge distillation losses. The time complexity analysis of FF-G2M is available in Appendix D." }, { "figure_ref": [ "fig_2" ], "heading": "Discussion and Comparision", "publication_ref": [], "table_ref": [], "text": "In this subsection, we compare the proposed FF-G2M framework with the commonly used node-to-node distillation (e.g., GLNN) in Fig. 3. While the node-to-node distillation can map neighboring nodes closely in the representation space of MLPs, i.e., preserving low-frequency knowledge, it completely confounds the relative distance between node pairs, i.e., high-frequency knowledge is drowned, leading to a different (incorrect) class-boundary with the teacher GNNs. In terms of the proposed FF-G2M framework, the Low-Frequency Distillation distills (diffuses) the features aggregated from the neighborhood in the teacher GNNs back into their neighborhood of the student MLPs to better utilize the extracted low-frequency knowledge. Besides, High-Frequency Distillation directly distills the neighborhood pairwise differences from teacher GNNs into the student MLPs to better capture the high-frequency knowledge patterns, i.e., the relative positions between pairs of nodes." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b28", "b9", "b29", "b17", "b19" ], "table_ref": [], "text": "Datasets. The effectiveness of the FF-G2M framework is evaluated on six public real-world datasets, including Cora (Sen et al. 2008), Citeseer (Giles, Bollacker, andLawrence 1998), Pubmed (McCallum et al. 2000), Coauthor-CS, Coauthor-Physics, and Amazon-Photo (Shchur et al. 2018). For each dataset, following the data splitting settings of (Kipf and Welling 2016;Liu, Gao, and Ji 2020), we select 20 nodes per class to construct a training set, 500 nodes for validation, and 1000 nodes for testing. A statistical overview Table 1: Classificatiom accuracy ± std (%) on six real-world datasets, where we consider three different GNN architectures (GCN, GraphSAGE, and GAT) as the teacher and pure MLPs as the student model. The best metrics are marked by bold. " }, { "figure_ref": [], "heading": "GraphSAGE", "publication_ref": [ "b11", "b31", "b45", "b32" ], "table_ref": [], "text": "Vanilla SAGE 82.5 ± 0.6 70.9 ± 0.6 77.9 ± 0.4 92.0. ± 0.6 89.7 ± 1.0 92.2 ± 0.9 -Vanilla MLP 59.7 ± 1.0 60.7 ± 0.5 71.5 ± 0.5 77.4 ± 1.2 87.5 ± 1.4 89.2 ± 0.9 -GLNN 82.7 ± 0.8 70.5 ± 0.5 79.7 ± 0.6 91.0 ± 0.9 92.5 ± 1.0 93.0 ± 0.7 -FF-G2M 84.5 ± 0.8 73.1 ± 1.2 80.9 ± 0.5 94.1 ± 0.5 93.6 ± 0.5 94. Baselines. Three basic components in knowledge distillation are (1) teacher model, (2) student model, and (3) distillation loss. As a model-agnostic general framework, FF-G2M can be combined with any teacher GNN architecture. In this paper, we consider three types of teacher GNNs, including GCN (Kipf and Welling 2016), Graph-SAGE (Hamilton, Ying, and Leskovec 2017), and GAT (Veličković et al. 2017). As for the student model, we default to using pure MLPs (with the same network architecture as the teacher GNNs) as the student model for a fair comparison. Finally, the focus of this paper is on designing distillation objectives rather than powerful teacher and student models. Therefore, we only take GLNN (Zhang et al. 2021) as an important baseline to compare FF-G2M with the conventional node-to-node distillation approach. The experiments of all baselines and FF-G2M are based on the standard implementation in the DGL library (Wang et al. 2019) using PyTorch 1.6.0 library on NVIDIA V100 GPU. Each set of experiments is run five times with different random seeds, and the average are reported as metrics." }, { "figure_ref": [], "heading": "Classification Performance Comparison", "publication_ref": [], "table_ref": [], "text": "This paper aims to explore which knowledge patterns should and how to be distilled into student MLPs, rather than designing more powerful teacher GNNs. Therefore, we consider three classical GNNs, including GCN, GraphSAGE, and GAT, as the teacher models and distill their knowledge into MLPs with the same network architecture. The experimental results on six datasets are reported in Table . 1, from which we can make the following observations: (1) In general, more powerful teacher GNNs can lead to student MLPs with better classification performance. However, such improvements are usually very limited and do not work for all datasets and GNN architectures. For example, on the Citeseer dataset, the performance of GLNN drops over the vanilla implementation of teacher GNNs by 0.4% (Graph-SAGE) and 0.6% (GAT), respectively. (2) The proposed FF-G2M framework can consistently improve the performance of student MLPs across three GNN architectures on all six datasets. For example, FF-G2M can outperform vanilla teacher GNNs by 2.63% (GCN), 2.58% (GraphSAGE), and 2.55% (GAT) averaged over six datasets, respectively." }, { "figure_ref": [ "fig_3" ], "heading": "Qualitative and Quantitative Analysis", "publication_ref": [], "table_ref": [], "text": "Extensive qualitative and quantitative experiments are conducted to explore the existence and harmfulness of the information drowning problem and how to solve it by FF-G2M.\nQualitative Analysis on Visualizations. We consider GCNs as the teacher model and compare its visualization with that of vanilla MLPs, GLNN, and FF-G2M on the Cora dataset (due to space limitations, more results can be found in Appendix F). We select a target node (id 27 for Cora) and analyze its relative position relationship with its neighbors in Fig. 4, from which we observe that: (1) The vanilla MLPs map neighboring nodes apart, which indicates that it is not even good at capturing low-frequency information.\n(2) GLNN fails to capture the relative positions between the target node and its neighbors, i.e., highfrequency information.\n(3) FF-G2M well preserves the relative positions between nodes while mapping neighboring nodes closely, which suggests that it is good at capturing both low-and high-frequency information. For example, on the Cora dataset, the target node (id 27) is the closest to node (id 1810) and the farthest from node (id 2678) in the visualizations of both teacher GCNs and FF-G2M's student MLPs." }, { "figure_ref": [], "heading": "Quantitative Analysis on Evaluation Metrics.", "publication_ref": [], "table_ref": [], "text": "To study what knowledge patterns of GNNs are actually distilled into MLPs during GNN-to-MLP distillation, we consider both (1) low-frequency knowledge, measured by the mean cosine similarity of nodes with their first-order neighbors, and (2) " }, { "figure_ref": [], "heading": "Roles of Low-and High-frequency Knowledge", "publication_ref": [], "table_ref": [], "text": "To evaluate the roles played by low-and high-frequency knowledge in GNN-to-MLP distillation, we consider distil-lation with only L LFD and L HFD , in addition to the full FF-G2M model. The experiments (with GCNs as the teacher model) on six datasets are reported in Table . 2, from which we observe that: (1) The proposed low-frequency distillation L LFD makes fuller use of the graph topology and the low-frequency information from GNNs, and in turn outperforms GLNN that adopts node-to-node distillation on all six datasets. (2) While both low-and high-frequency distillation can work alone to improve the performance of vanilla MLPs, the former plays a primary role and the latter a secondary (auxiliary) role. More importantly, these two distillations are complementary to each other and can further improve performance on top of each other.\n(3) The FF-G2M (full model) considers both low-and high-frequency distillation and is capable of capturing full-frequency knowledge, and thus can far outperform GLNN on all six datasets." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we factorize the knowledge learned by GNNs into low-and high-frequency components in the spectral and spatial domains and then conduct a comprehensive investigation on their roles played in GNN-to-MLP distillation. Our key finding is existing GNN-to-MLP distillation may suffer from a potential information drowning problem, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the low-frequency knowledge during distillation. Therefore, we propose a novel Full-Frequency GNN-to-MLP (FF-G2M) knowledge distillation framework, which extracts both low-and high-frequency knowledge from GNNs and injects it into MLPs. As a simple but general framework, FF-G2M outperforms other leading methods across various GNN architectures and graph datasets. Limitations still exist; for example, this paper pays little attention to the special designs on teacher GNNs, and designing more expressive teachers to directly capture fullfrequency knowledge may be another promising direction. " }, { "figure_ref": [], "heading": "C. Pseudo-code", "publication_ref": [], "table_ref": [], "text": "The pseudo-code of the proposed FF-G2M framework is summarized in Algorithm. 1." }, { "figure_ref": [], "heading": "D. Complexity Analysis", "publication_ref": [], "table_ref": [], "text": "The gation of loss L total . 10: end for 11: Predicted labels Y U for those unlabeled nodes V U . 12: return Predicted labels Y U and the network parameters of student MLPs {W l } L-1 l=0 ." }, { "figure_ref": [], "heading": "E. Definitions on Evaluation Metrics", "publication_ref": [], "table_ref": [], "text": "As derived in Eq. ( 5), the low-frequency knowledge can be represented by the sum of the target node feature and its neighborhood features in the spatial domain, which essentially encourages neighborhood smoothing. In this paper, the low-frequency knowledge is measured by the mean cosine similarity of nodes with their 1-order neighbors, as follows\nmean cosine similarit = 1 |E| i∈V j∈Ni s i • s j |s i | |s j |\nwhere s i and s j are the representations of node v i and v j , and we set s i = h (L) i for teacher GNNs and s i = z (L) i for student MLPs. On the other hand, the high-frequency knowledge can be represented as the differences between the target node feature with its neighborhood features in the spatial domain. In this paper, the low-frequency knowledge is measured by the KL-divergence between the pairwise distances " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work is supported in part by Ministry of Science and Technology of the People's Republic of China (No. 2021YFA1301603) and National Natural Science Foundation of China (No. U21A20427)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "https://github.com/LirongWu/FF-G2M." }, { "figure_ref": [], "heading": "Appendix A. Dataset Statistics", "publication_ref": [ "b17", "b45", "b21" ], "table_ref": [], "text": "Six publicly available graph datasets are used to evaluate the FF-G2M framework. An overview summary of the statistical characteristics of datasets is given in Tab. A1. For the three small-scale datasets, namely Cora, Citeseer, and Pubmed, we follow the data splitting strategy in (Kipf and Welling 2016). For the three large-scale datasets, namely Coauthor-CS, Coauthor-Phy, and Amazon-Photo, we follow (Zhang et al. 2021;Luo et al. 2021) to randomly split the data, and each random seed corresponds to a different splitting." }, { "figure_ref": [], "heading": "B. Hyperparameters and Search Space", "publication_ref": [ "b32" ], "table_ref": [], "text": "All baselines and our approach are implemented based on the standard implementation in the DGL library (Wang et al. 2019) using the PyTorch 1.6.0 library with Intel(R) Xeon(R) Gold 6240R @ 2.40GHz CPU and NVIDIA V100 GPU. The following hyperparameters are set for all datasets: learning rate lr = 0.01 (lr = 0.001 for Amazon-Photo); weight decay decay = 5e-4; Maximum Epoch E = 500; Layer number L = 2 (L = 3 for Cora and Amazon-Photo); distillation temperature τ 1 = 1.0. The other dataset-specific hyperparameters are determined by a hyperparameter search tool -NNI for each dataset, including hidden dimension F , loss weight λ, dropout ratio R, and distillation temperature τ 2 . The hyperparameter search space is shown in Tab. A2, and the model with the highest validation accuracy is selected for testing. The best hyperparameter choices for each dataset and GNN architecture are available in the supplementary materials. of teacher GNNs and student MLPs, which is defined as\nF. More Qualitative Visualizations.\nWe consider GCNs as the teacher model and compare its visualization with that of vanilla MLPs, GLNN, and FF-G2M on the Citeseer dataset. We select a target node (id 557 for Citeseer) and analyze its relative position relationship with its neighbors in Fig. A1. The analyses presented in the paper for the Cora dataset still hold true for the Citeseer dataset." } ]
Recent years have witnessed the great success of Graph Neural Networks (GNNs) in handling graph-related tasks. However, MLPs remain the primary workhorse for practical industrial applications due to their desirable inference efficiency and scalability. To reduce their gaps, one can directly distill knowledge from a well-designed teacher GNN to a student MLP, which is termed as GNN-to-MLP distillation. However, the process of distillation usually entails a loss of information, and "which knowledge patterns of GNNs are more likely to be left and distilled into MLPs?" becomes an important question. In this paper, we first factorize the knowledge learned by GNNs into low-and high-frequency components in the spectral domain and then derive their correspondence in the spatial domain. Furthermore, we identified a potential information drowning problem for existing GNN-to-MLP distillation, i.e., the high-frequency knowledge of the pre-trained GNNs may be overwhelmed by the lowfrequency knowledge during distillation; we have described in detail what it represents, how it arises, what impact it has, and how to deal with it. In this paper, we propose an efficient Full-Frequency GNN-to-MLP (FF-G2M) distillation framework, which extracts both low-frequency and high-frequency knowledge from GNNs and injects it into MLPs. Extensive experiments show that FF-G2M improves over the vanilla MLPs by 12.6% and outperforms its corresponding teacher GNNs by 2.6% averaged over six graph datasets and three common GNN architectures.
Extracting Low-/High-Frequency Knowledge from Graph Neural Networks and Injecting it into MLPs: An Effective GNN-to-MLP Distillation Framework
[ { "figure_caption": "Figure 2: (a) Mean cosine similarity (the higher, the better) between nodes with their first-order neighbors on Cora. (b) Pairwise distance differences (the lower, the better) between teacher GCNs and student MLPs on Cora. (c)(d) Illustrations of how the high-frequency information drowning arises and what potential impact it has in the spectral and spatial domains, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure3: Illustration of the Full-Frequency GNN-to-MLP (FF-G2M) distillation framework, where the dotted red lines denote the predicted class-boundary, the solid black lines denote feature aggregation from the neighborhood, and the dashed black lines denote the distillation of knowledge (neighborhood features and pairwise distances) from teacher GNNs to student MLPs.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Representation 2D-Visualizations (by UMAP (McInnes, Healy, and Melville 2018)) of the teacher model and three student models on Cora. Each node is colored by its ground-truth label, and the numbers around the nodes denote the node ids.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: (a) Curves of mean cosine similarity (the higher, the better) between nodes with their first-order neighbors. (b) Curves of pairwise distance differences (the lower, the better) between the teacher GNNs and student MLPs.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Classificatiom accuracy ± std (%) on six real-world datasets. The best metrics are marked by bold.", "figure_data": "MethodCoraCiteseerPubmedAmazon-Photo Coauthor-CS Coauthor-PhyVanilla GCN82.2 ± 0.5 71.6 ± 0.4 79.3 ± 0.391.8 ± 0.689.9 ± 0.791.9 ± 1.2Vanilla MLP59.7 ± 1.0 60.7 ± 0.5 71.5 ± 0.577.4 ± 1.287.5 ± 1.489.2 ± 0.9GLNN (Zhang et al. 2021)82.8 ± 0.5 72.7 ± 0.4 80.2 ± 0.691.4 ± 1.092.7 ± 1.093.2 ± 0.5Low-Frequency KD w/ L LFD83.4 ± 0.9 73.7 ± 0.6 81.0 ± 0.592.1 ± 0.893.2 ± 0.893.7 ± 0.8High-Frequency KD w/ L HFD 68.5 ± 0.8 63.2 ± 0.7 74.4 ± 0.482.5 ± 1.389.3 ± 1.791.0 ± 1.6FF-G2M (full model)84.3 ± 0.4 74.0 ± 0.5 81.8 ± 0.494.2 ± 0.493.8 ± 0.594.4 ± 0.9high-frequency knowledge, measured by KL-divergence be-tween the pairwise distances of teacher GNNs and studentMLPs, respectively. The detailed mathematical definitionsof these two evaluation metrics are available in AppendixE. From the experimental results on the Cora dataset re-ported in Fig. 5, we make four observations: (1) The vanillaMLP does not consider the inductive bias of the graphtopology at all and thus fails to capture the low-and high-frequency knowledge in the graph data. (2) GLNN is capableof successfully capturing low-frequency information, i.e.,neighborhood smoothing, but is not good at capturing high-frequency knowledge, i.e., difference information betweenpairs of nodes. (3) The proposed low-and high-frequencydistillation has an advantage over GLNN in capturing onetype of individual frequency but lags behind in another fre-quency. (4) The proposed FF-G2M combines both the twodistillation and is better at capturing both low-and high-frequency knowledge than GLNN, especially the latter.0.9Cosine Similarity0.3 0.4 0.5 0.6 0.7 0.82550Epoch 75 100 125 Vanilla MLPs Vanilla GCNs GLNN Low-Frequency KD High-Frequency KD FF-G2M", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyperparameter search space.", "figure_data": "HyperparametersSearch SpaceHidden Dimension F [128, 256, 512, 1024, 2048]Loss Weight λ[0.0, 0.1, 0.2, 0.3, 0.4, 0.5]Dropout Ratio R[0.3, 0.4, 0.5, 0.6]Temperature τ 2[1.0, 1.1, 1.2, 1.3, 1.4, 1.5]", "figure_id": "tab_3", "figure_label": "A2", "figure_type": "table" }, { "figure_caption": "training time complexity of FF-G2M mainly comes from two parts: (1) GNN training O(|V|dF + |E|F ) and (2) knowledge distillation O(|E|F ), where d and F are the dimensions of input and hidden spaces. The total time complexity O(|V|dF + |E|F ) is linear w.r.t the number of nodes |V| and edges |E|, which is in the same order as GCN and GLNN. Besides, the inference time of FF-G2M can be reduced from O(|V|dF + |E|F ) to O(|V|dF ), which is as fast as MLP, due to the removal of neighborhood aggregation. Algorithm 1 Algorithm for the Full-Frequency GNN-to-MLP knowledge distillation framework Input: Feature Matrix: X; Edge Set: E; # Epochs: E. Output: Predicted Labels Y U and network parameters of student MLPs {W l } L-1 l=0 . 1: Randomly initialize the parameters of GNNs and MLPs. 2: Compute GNN representations and pre-train the GNNs until convergence by L label . 3: for epoch ∈ {0, 1, • • • , E -1} do", "figure_data": "4:Compute GNN representations {h (L) i } N i=1 from the5:pre-trained GNNs and freeze it;6:Compute MLP representations {z(L) i } N i=1 and calcu-7: 8:late the total loss L total ; Update model parameters {W l } L-1 l=0 by back propa-", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistical information of the datasets.", "figure_data": "DatasetCora Citeseer Pubmed Amazon-Photo Coauthor-CS Coauthor-Phy# Nodes270833271971776501833334493# Edges527846144432411908181894247962# Features 1433370350074568058415# Classes7638155Label Rate 5.2%3.6%0.3%2.1%1.6%0.3%", "figure_id": "tab_5", "figure_label": "A1", "figure_type": "table" } ]
Lirong Wu; Haitao Lin; Yufei Huang; Tianyu Fan; Stan Z Li
[ { "authors": "D Bo; X Wang; C Shi; H Shen", "journal": "", "ref_id": "b0", "title": "Beyond Low-frequency Information in Graph Convolutional Networks", "year": "2021" }, { "authors": "D Bo; X Wang; C Shi; H Shen", "journal": "", "ref_id": "b1", "title": "Beyond Low-frequency Information in Graph Convolutional Networks", "year": "2021" }, { "authors": "J Bruna; W Zaremba; A Szlam; Y Lecun", "journal": "", "ref_id": "b2", "title": "Spectral networks and locally connected networks on graphs", "year": "2013" }, { "authors": "E Chien; J Peng; P Li; O Milenkovic", "journal": "", "ref_id": "b3", "title": "Adaptive Universal Generalized PageRank Graph Neural Network", "year": "2021" }, { "authors": "F R Chung; F C Graham", "journal": "American Mathematical Soc", "ref_id": "b4", "title": "Spectral graph theory", "year": "1997" }, { "authors": "M Defferrard; X Bresson; P Vandergheynst", "journal": "", "ref_id": "b5", "title": "Convolutional neural networks on graphs with fast localized spectral filtering", "year": "2016" }, { "authors": "F Errica; M Podda; D Bacciu; A Micheli", "journal": "", "ref_id": "b6", "title": "A fair comparison of graph neural networks for graph classification", "year": "2019" }, { "authors": "W Fan; Y Ma; Q Li; Y He; E Zhao; J Tang; D Yin", "journal": "", "ref_id": "b7", "title": "Graph neural networks for social recommendation", "year": "2019" }, { "authors": "M Ghorbani; M Bahrami; A Kazi; M Soleymani Baghshah; H R Rabiee; N Navab", "journal": "Springer", "ref_id": "b8", "title": "GKD: Semi-supervised Graph Knowledge Distillation for Graph-Independent Inference", "year": "2021" }, { "authors": "C L Giles; K D Bollacker; S Lawrence", "journal": "", "ref_id": "b9", "title": "Cite-Seer: An automatic citation indexing system", "year": "1998" }, { "authors": "J Gou; B Yu; S J Maybank; D Tao", "journal": "International Journal of Computer Vision", "ref_id": "b10", "title": "Knowledge distillation: A survey", "year": "2021" }, { "authors": "W Hamilton; Z Ying; J Leskovec", "journal": "", "ref_id": "b11", "title": "Inductive representation learning on large graphs", "year": "2017" }, { "authors": "G Hinton; O Vinyals; J Dean", "journal": "", "ref_id": "b12", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "A Iscen; G Tolias; Y Avrithis; O Chum", "journal": "", "ref_id": "b13", "title": "Label propagation for deep semi-supervised learning", "year": "2019" }, { "authors": "Z Jia; S Lin; R Ying; J You; J Leskovec; A Aiken", "journal": "", "ref_id": "b14", "title": "Redundancy-Free Computation for Graph Neural Networks", "year": "2020" }, { "authors": "C K Joshi; F Liu; X Xun; J Lin; C.-S Foo", "journal": "", "ref_id": "b15", "title": "On Representation Knowledge Distillation for Graph Neural Networks", "year": "2021" }, { "authors": "T Kim; J Oh; N Kim; S Cho; S.-Y Yun", "journal": "", "ref_id": "b16", "title": "Comparing kullback-leibler divergence and mean squared error loss in knowledge distillation", "year": "2021" }, { "authors": "T N Kipf; M Welling", "journal": "", "ref_id": "b17", "title": "Semi-supervised classification with graph convolutional networks", "year": "2016" }, { "authors": "C Lassance; M Bontonou; G B Hacene; V Gripon; J Tang; A Ortega", "journal": "IEEE", "ref_id": "b18", "title": "Deep geometric knowledge distillation with graphs", "year": "2020" }, { "authors": "M Liu; H Gao; S Ji", "journal": "", "ref_id": "b19", "title": "Towards deeper graph neural networks", "year": "2020" }, { "authors": "Z Liu; Y Luo; L Wu; S Li; Z Liu; S Z Li", "journal": "", "ref_id": "b20", "title": "Are Gradients on Graph Structure Reliable in Gray-box Attacks?", "year": "2022" }, { "authors": "Y Luo; A Chen; K Yan; L Tian", "journal": "", "ref_id": "b21", "title": "Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages", "year": "2021" }, { "authors": "A K Mccallum; K Nigam; J Rennie; K Seymore", "journal": "Information Retrieval", "ref_id": "b22", "title": "Automating the construction of internet portals with machine learning", "year": "2000" }, { "authors": "L Mcinnes; J Healy; J Melville", "journal": "", "ref_id": "b23", "title": "Umap: Uniform manifold approximation and projection for dimension reduction", "year": "2018" }, { "authors": "H Pei; B Wei; K C Chang; -C Lei; Y Yang; B ", "journal": "", "ref_id": "b24", "title": "Geom-gcn: Geometric graph convolutional networks", "year": "2020" }, { "authors": "Y Ren; J Ji; L Niu; M Lei", "journal": "", "ref_id": "b25", "title": "Multi-task Self-distillation for Graph-based Semi-Supervised Learning", "year": "2021" }, { "authors": "B Ricaud; P Borgnat; N Tremblay; P Gonc ¸alves; P Vandergheynst", "journal": "Comptes Rendus Physique", "ref_id": "b26", "title": "Fourier could be a data scientist: From graph Fourier transform to signal processing on graphs", "year": "2019" }, { "authors": "A Sandryhaila; J M Moura", "journal": "IEEE", "ref_id": "b27", "title": "Discrete signal processing on graphs: Graph fourier transform", "year": "2013" }, { "authors": "P Sen; G Namata; M Bilgic; L Getoor; B Galligher; T Eliassi-Rad", "journal": "AI magazine", "ref_id": "b28", "title": "Collective classification in network data", "year": "2008" }, { "authors": "O Shchur; M Mumme; A Bojchevski; S Günnemann", "journal": "", "ref_id": "b29", "title": "Pitfalls of graph neural network evaluation", "year": "2018" }, { "authors": "D I Shuman; S K Narang; P Frossard; A Ortega; P Vandergheynst", "journal": "IEEE signal processing magazine", "ref_id": "b30", "title": "The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains", "year": "2013" }, { "authors": "P Veličković; G Cucurull; A Casanova; A Romero; P Lio; Y Bengio", "journal": "", "ref_id": "b31", "title": "Graph attention networks", "year": "2017" }, { "authors": "M Wang; D Zheng; Z Ye; Q Gan; M Li; X Song; J Zhou; C Ma; L Yu; Y Gai; T He; G Karypis; J Li; Z Zhang", "journal": "", "ref_id": "b32", "title": "Deep Graph Library: A Graph-Centric, Highly-Performant Package for Graph Neural Networks", "year": "2019" }, { "authors": "F Wu; A Souza; T Zhang; C Fifty; T Yu; K Weinberger", "journal": "", "ref_id": "b33", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": " Pmlr", "journal": "", "ref_id": "b34", "title": "", "year": "" }, { "authors": "L Wu; H Lin; Z Gao; C Tan; S Li", "journal": "", "ref_id": "b35", "title": "GraphMixup: Improving Class-Imbalanced Node Classification on Graphs by Self-supervised Context Prediction", "year": "2021" }, { "authors": "L Wu; H Lin; Z Gao; C Tan; S Li", "journal": "", "ref_id": "b36", "title": "Selfsupervised on Graphs: Contrastive, Generative, or Predictive", "year": "2021" }, { "authors": "L Wu; H Lin; Y Huang; S Z Li", "journal": "", "ref_id": "b37", "title": "Knowledge Distillation Improves Graph Structure Augmentation for Graph Neural Networks", "year": "2022" }, { "authors": "L Wu; H Lin; J Xia; C Tan; S Z Li", "journal": "Neural Computing and Applications", "ref_id": "b38", "title": "Multilevel disentanglement graph neural network", "year": "2022" }, { "authors": "T Wu; Z Zhao; J Wang; X Bai; L Wang; N Wong; Y Yang", "journal": "", "ref_id": "b39", "title": "Edge-free but Structure-aware: Prototype-Guided Knowledge Distillation from GNNs to MLPs", "year": "2023" }, { "authors": "Z Wu; S Pan; F Chen; G Long; C Zhang; S Y Philip", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b40", "title": "A comprehensive survey on graph neural networks", "year": "2020" }, { "authors": "J Xia; L Wu; J Chen; B Hu; S Z Li", "journal": "", "ref_id": "b41", "title": "SimGRACE: A Simple Framework for Graph Contrastive Learning without Data Augmentation", "year": "2022" }, { "authors": "B Yan; C Wang; G Guo; Y Lou", "journal": "", "ref_id": "b42", "title": "TinyGNN: Learning Efficient Graph Neural Networks", "year": "2020" }, { "authors": "C Yang; J Liu; C Shi", "journal": "", "ref_id": "b43", "title": "Extract the Knowledge of Graph Neural Networks and Go Beyond it: An Effective Knowledge Distillation Framework", "year": "2021" }, { "authors": "M Zhang; Y Chen", "journal": "", "ref_id": "b44", "title": "Link prediction based on graph neural networks", "year": "2018" }, { "authors": "S Zhang; Y Liu; Y Sun; N Shah", "journal": "", "ref_id": "b45", "title": "Graph-less Neural Networks: Teaching Old MLPs New Tricks via Distillation", "year": "2021" }, { "authors": "W Zhang; X Miao; Y Shao; J Jiang; L Chen; O Ruas; B Cui", "journal": "", "ref_id": "b46", "title": "Reliable data distillation on graph convolutional network", "year": "2020" }, { "authors": "J Zhou; G Cui; S Hu; Z Zhang; C Yang; Z Liu; L Wang; C Li; M Sun", "journal": "AI Open", "ref_id": "b47", "title": "Graph neural networks: A review of methods and applications", "year": "2020" }, { "authors": "J Zhu; Y Yan; L Zhao; M Heimann; L Akoglu; D Koutra", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b48", "title": "Beyond Homophily in Graph Neural Networks: Current Limitations and Effective Designs", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 347.16, 369.42, 210.84, 32.34 ], "formula_id": "formula_0", "formula_text": "m (l) i = AGGREGATE (l) h (l-1) j : vj ∈ Ni h (l) i = UPDATE (l) h (l-1) i , m (l) i (1)" }, { "formula_coordinates": [ 2, 333.33, 504.49, 224.67, 14.07 ], "formula_id": "formula_1", "formula_text": "z (l) i = Dropout σ z (l-1) i W (l-1) , z (0) i = x i (2)" }, { "formula_coordinates": [ 2, 326.39, 683.02, 231.61, 23.59 ], "formula_id": "formula_2", "formula_text": "LKD = 1 |V| i∈V DKL softmax z (L) i , softmax h (L) i (3)" }, { "formula_coordinates": [ 3, 54, 128.14, 238.5, 52.71 ], "formula_id": "formula_3", "formula_text": "L = I N -D -1 2 A D -1 2 , where A = A + I N ∈ R N ×N is an adjacency matrix with self-loop, D ∈ R N ×N is a diagonal degree matrix with D i,i = j A i,j" }, { "formula_coordinates": [ 3, 54, 191.26, 238.5, 32.45 ], "formula_id": "formula_4", "formula_text": "L = UΛU ⊤ , where Λ = diag ([λ 1 , λ 2 , • • • , λ N ]) with each eigenvalue λ l ∈ [0, 2] corresponding to an eigenvectors u l in U" }, { "formula_coordinates": [ 3, 71.03, 298.46, 221.47, 11.72 ], "formula_id": "formula_5", "formula_text": "F * G x = U U ⊤ F ⊙ U ⊤ x = Ug θ U ⊤ x (4)" }, { "formula_coordinates": [ 3, 54, 372.39, 73.43, 14.11 ], "formula_id": "formula_6", "formula_text": "g θ = K-1 k=0 α k Λ k" }, { "formula_coordinates": [ 3, 54, 395.7, 218.88, 11.23 ], "formula_id": "formula_7", "formula_text": "F A = I N , we have F A * G x = UI N U ⊤ x = U x = x, i.e." }, { "formula_coordinates": [ 3, 54.3, 459.25, 223.66, 27.92 ], "formula_id": "formula_8", "formula_text": "FA = IN = 1 2 IN + D -1 2 A D -1 2 Low-Pass Filter F M + IN -D -1 2 A D -1 2" }, { "formula_coordinates": [ 3, 54, 496.96, 238, 22.55 ], "formula_id": "formula_9", "formula_text": "F A * G x = 1 2 (F M + F H ) * G x = 1 2 (F M * G x + F H * G x) =" }, { "formula_coordinates": [ 3, 158.15, 559.76, 134.35, 9.68 ], "formula_id": "formula_10", "formula_text": "F M * G x and F H * G x represent." }, { "formula_coordinates": [ 3, 54, 644.84, 238.5, 38.17 ], "formula_id": "formula_11", "formula_text": "F L M = (I N + D -1 2 A D -1 2 ) L = (2I N -L) L to output F L M * G x = U(2I N -Λ) L U ⊤ x with g L θ (λ i ) = (2 -λ i ) L ." }, { "formula_coordinates": [ 3, 319.5, 66.54, 238.5, 24.52 ], "formula_id": "formula_12", "formula_text": "F L H has g L θ (λ i ) = λ L i ." }, { "formula_coordinates": [ 3, 347.81, 141.93, 177.75, 121.04 ], "formula_id": "formula_13", "formula_text": "1 0 1 2 3 4 Eigenvalue 0.0 2.5 5.0 7.5 g ( ) 2 (2 ) 2 2 (2 ) 3 2 H M 2 M 3 M Figure 1: Eigenvalues vs. Amplitudes" }, { "formula_coordinates": [ 3, 334.33, 318.37, 223.67, 56.24 ], "formula_id": "formula_14", "formula_text": "F M * G x i → x (low) i = x i + j∈Ni x j |N i ||N j | F H * G x i → x (high) i = x i - j∈Ni x j |N i ||N j | (5)" }, { "formula_coordinates": [ 5, 55.29, 333.31, 237.21, 26.8 ], "formula_id": "formula_15", "formula_text": "L LFD = 1 |E| i∈V j∈Ni∪i D KL σ(z (L) j /τ 1 ), σ(h (L) i /τ 1 ) (6)" }, { "formula_coordinates": [ 5, 67.78, 581.82, 224.72, 47.37 ], "formula_id": "formula_16", "formula_text": "L HFD = 1 |E| i∈V j∈Ni D KL σ K z (L) i , z (L) j /τ 2 , σ K h (L) i , h (L) j /τ 2 (7)" }, { "formula_coordinates": [ 5, 377.13, 238.07, 146.68, 15.34 ], "formula_id": "formula_17", "formula_text": "L label = 1 |V L | i∈V L H y i , σ(h (L) i )" }, { "formula_coordinates": [ 5, 319.5, 302.79, 235.28, 27.42 ], "formula_id": "formula_18", "formula_text": "L total = λ |V L | i∈V L H y i , σ(z (L) i ) + 1-λ L LFD +L HFD" }, { "formula_coordinates": [ 10, 347.28, 478.28, 181.74, 26.83 ], "formula_id": "formula_20", "formula_text": "mean cosine similarit = 1 |E| i∈V j∈Ni s i • s j |s i | |s j |" } ]
2023-06-16
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b52", "b52", "b56", "b21", "b25", "b42", "b60", "b32", "b6", "b17", "b26", "b13", "b75", "b38", "b46", "b74", "b81", "b83", "b74", "b7", "b11" ], "table_ref": [], "text": "3D shape understanding has recently garnered a surge of interest driven by the growing demands in real-world applications, such as augmented/virtual reality, autonomous driving, and robotics. Despite significant advancements in 3D recognition and analysis, existing data-driven approaches are still greatly limited by the scale of 3D training datasets and tend to exhibit poor generalization when facing unseen shape categories, hindering the deployment of existing models in real-world applications.\nNote that 3D shapes and 2D images can be easily linked through rendering, and the dataset scale issue of 2D images has been remarkably addressed, as shown in recent works such as CLIP [53]. Therefore, many recent studies aim to utilize pre-trained 2D image-language models [53,57] to assist 3D tasks, such as 3D generation [22,26,43,61,33,7] and 3D scene-level segmentation [18,27,14,76,39,47]. Regarding 3D shape-level understanding, a straightforward idea is to project 3D data to the 2D + + + + Figure 1: Left: Zero-shot shape classification on the Objaverse-LVIS (1,156 categories) and Model-Net40 datasets. OpenShape outperforms previous methods by a large margin. We exclude shapes in Objaverse-LVIS during training, and we also retrain ULIP [75] on our ensembled training shapes for fair comparison. Right: Our shape representations encode a broad range of semantic and visual concepts. We input two 3D shapes and use their shape embeddings to retrieve the top three shapes whose embeddings are simultaneously closest to both inputs. See Section. 4.4 for more details.\ndomain through rendering and use CLIP to analyze the 2D images, thereby enabling zero-shot 3D shape classification [82,84]. However, these methods suffer from occlusion and information loss during projection, and unnecessary latency due to point cloud rendering and multiple CLIP inferences.\nTo overcome the limitations caused by projection, it is necessary to train a 3D-native model by distilling knowledge from pretrained 2D models. However, training a 3D-native model requires a set of 3D shapes, and the amount of knowledge that can be distilled is determined by the size of the 3D dataset. For example, ULIP [75] aims to learn a joint representation space between language, 2D images, and 3D shapes, but uses a small-scale 3D dataset ShapeNetCore [8] for knowledge distillation. Specifically, ULIP fixes the 2D CLIP text and image encoders and trains a dedicated 3D-native point cloud encoder to extract 3D shape representations. The 3D encoder strives to align the 3D shape embedding space with the CLIP image and language embedding spaces by utilizing contrastive learning across all three modalities. However, since ULIP is only trained on 52K shapes of 55 object categories, it still struggles with out-of-distribution shape categories and fails to demonstrate an impressive open-world understanding of 3D shapes.\nIn this work, we propose a novel method called OpenShape, which follows a similar paradigm as ULIP but aims to achieve a more generalized and scalable joint representation space encompassing language, 2D images, and 3D shapes. Our focus mainly lies on scaling up representation learning and addressing corresponding challenges. In OpenShape, we emphasize four key factors during the training process: (a) data scale: we significantly increase the scale of 3D training data by combining four public 3D shape datasets, resulting in 876k 3D shapes covering much more diverse categories; (b) text quality: the 3D shapes from our main dataset, Objaverse [12], is dominated with inaccurate or uninformative text descriptions. Given the data scale, we propose three strategies to automatically filter and enrich the text descriptions; (c) 3D backbone scaling: since most existing 3D backbones target small datasets, we find that it's important but non-trivial to scale up the 3D backbones; and (d) data resampling: since the ensembled dataset is highly unbalanced, we utilize hard negative mining to improve the model's discriminative ability.\nWe first evaluate OpenShape on the zero-shot 3D shape classification task. As shown in Figure 1, OpenShape outperforms previous zero-shot approaches on the ModelNet40 dataset by at least 20%. Moreover, OpenShape excels at handling long-tail categories. On the challenging Objaverse-LVIS dataset, which contains 1,156 categories, OpenShape achieves a 46.8% accuracy, significantly surpassing previous methods. Notably, this performance gap remains even when ULIP is retrained on our ensembled datasets, highlighting the superiority of our text enrichment and training strategies. Besides zero-shot classification, we present demos that showcase the wide range of visual and semantic concepts learned by OpenShape. For example, in Figure 1-right, we take two 3D shapes as input and use their OpenShape embeddings to retrieve the top three shapes whose embeddings are simultaneously closest to both inputs from our ensembled dataset. The retrieved shapes exhibit an interesting combination of the semantic and geometric elements from both input shapes. Furthermore, since we align our 3D shape embedding space with the CLIP language and image embedding space, we demonstrate that OpenShape embeddings can be easily integrated with other CLIP-based models to perform cross-modality tasks such as point cloud captioning and point cloud-conditioned image generation.\n2 Related Work" }, { "figure_ref": [], "heading": "CLIP for 3D Learning", "publication_ref": [ "b52", "b28", "b34", "b79", "b3", "b53", "b58", "b21", "b25", "b42", "b60", "b32", "b6", "b31", "b4", "b27", "b73", "b37", "b17", "b26", "b13", "b75", "b38", "b46", "b78", "b22", "b57", "b80", "b30", "b8", "b20", "b81", "b83", "b50", "b74", "b18", "b36", "b0", "b81", "b83", "b23" ], "table_ref": [], "text": "Image-language models like CLIP have achieved remarkable performance through large-scale imagetext pretraining [53,29,35,80,4,54,59]. As these models excel at capturing rich visual concepts and possess impressive zero-shot capabilities, they have been applied to various 3D vision tasks. For instance, numerous recent works utilize CLIP to facilitate zero-shot text-to-3D generation [22,26,43,61,33,7,32,5,28,74,38], typically through CLIP-guided per-scene optimization. From a recognition perspective, some works focus on scene-level representation, aiming to leverage CLIP priors for zero-shot 3D segmentation or detection in both indoor [18,27,14,76,39,47,79,23,58,81,31] and outdoor scenes [9,21]. Meanwhile, another line of work focuses on shape-level understanding, targeting zero-shot shape classification [82,84,51,75,19] and part segmentation [37,1]. There are two primary working paradigms for these methods. The first [82,84,24] involves using images as a medium representation, projecting 3D point clouds into 2D and employing 2D CLIP for inference. However, these methods typically suffer from occlusion and information loss during projection, along with unnecessary latency due to point cloud rendering and multiple 2D CLIP inferences. The second paradigm involves training a 3D-native encoder attempting to distill or fuse CLIP features into 3D representations. Our paper follows this paradigm." }, { "figure_ref": [], "heading": "3D Shape Representation Learning", "publication_ref": [ "b14", "b65", "b47", "b1", "b63", "b54", "b12", "b2", "b68", "b45", "b76", "b19", "b61", "b41", "b64", "b54", "b82", "b59", "b72", "b50", "b74", "b18", "b74", "b50" ], "table_ref": [], "text": "Various works have studied self-supervised pretraining for point clouds by designing pretext tasks [15,66,48,2,64] such as self-reconstruction [55,13,3,69], masked auto-encoding [46,77,20], distortion reconstruction [62,42,65], normal estimation [55], and contrastive learning [83,60,73]. These tasks enhance models' shape representations and improve their performance on downstream applications, although they do not involve multimodal semantic alignments during pretraining.\nRecently, some works [51,75,19], exemplified by ULIP [75], have explored learning multimodal joint representations for 3D shapes. They train 3D-native shape encoders by aligning 3D shape embeddings with CLIP's language and/or image embeddings through multimodal contrastive learning. Works like ReCon [51] further combines cross-modal contrastive learning with masked auto-encoding for added enhancement. While these methods allow for zero-shot 3D classification through the computation of 3D-text similarity, the amount of distilled knowledge and their model capability are heavily limited by the small-scale training datasets used. Our work follows this paradigm but aims to learn more generalizable and scalable representations to enable open-world 3D shape understanding." }, { "figure_ref": [ "fig_0" ], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "We propose a novel method, OpenShape, for learning generalizable and scalable multi-modal joint representation between language, 2D images, and 3D shapes, as shown in Figure 2. We first introduce the multi-modal contrastive learning framework we used for aligning representations of three modalities in Section 3.1. We then elaborate how we create our training sets and enrich our text data in Sections 3.2 and 3.3. In Section 3.4, we present how we scale up our 3D backbone models. Finally, we propose a hard negative mining strategy to enhance contrastive learning in Section 3.5." }, { "figure_ref": [ "fig_0" ], "heading": "Multi-Modal Representation Alignment", "publication_ref": [ "b50", "b74", "b18", "b74", "b24" ], "table_ref": [], "text": "We aim to learn 3D shape representations that are aligned with pretrained CLIP embedding spaces of language and image. As shown in Figure 2 (c), we train a 3D native encoder f P that takes a 3D point cloud as input and extracts 3D shape feature. Following previous works [51,75,19], such as ULIP [75], we utilize multi-modal contrastive learning for representation alignment. Since CLIP is pretrained on a much larger scale data, we freeze both its text encoder f T and its image encoder f I during feature alignment to preserve CLIP's feature priors and avoid model collapse. Specifically, given a sampled batch of triplets {(P i , T i , I i )}, where P i denotes a point cloud of a 3D shape, T i and I i denote corresponding text and image, the contrastive loss is calculated as:\n- 1 4n i log exp(h P i • h T i /τ ) j exp(h P i • h T j /τ ) + log exp(h T i • h P i /τ ) j exp(h T i • h P j /τ ) + log exp(h P i • h I i /τ ) j exp(h P i • h I j /τ ) + log exp(h I i • h P i /τ ) j exp(h I i • h P j /τ ) (1\n)\nwhere n is the number of shapes in a batch; τ is a learnable temperature;\nh P i = f P (P i )/|f P (P i )|, h T i = g T (f T (T i ))/|g T (f T (T i ))|,and\nh I i = g I (f I (I i ))/|g I (f I (I i ))\n| denote normalized projected features of P i , T i , and I i , where g T and g I are two learnable linear projections. Since f T and f I are frozen, we extract all f T (T i ) and f I (I i ) before training and cache them for acceleration. In most of our experiments, we utilize OpenCLIP ViT-bigG-14 [25] as the pretrained CLIP model." }, { "figure_ref": [ "fig_0" ], "heading": "Ensembling 3D Datasets", "publication_ref": [ "b7", "b15", "b10", "b11", "b74" ], "table_ref": [], "text": "Since the scale and diversity of training triplets play a crucial role in learning scalable shape representations, we ensemble four currently-largest public 3D datasets for training as shown in Figure 2 (a), resulting in 876k training shapes. Among these four datasets, ShapeNetCore [8], 3D-FUTURE [16] and ABO [11] are three popular datasets used by prior works. They contain human-verified high-quality 3D shapes, but only cover a limited number of shapes and dozens of categories. The Objaverse [12] dataset is a more recent dataset, containing many more 3D shapes and covering significantly more diverse categories. However, shapes in Objaverse are mainly uploaded by web users and not verified by experts, and thus have uneven quality and exhibit highly unbalanced distributions, necessitating further processing.\nTo create triplets for training, for each shape, we sample 10,000 points from the mesh surface and interpolate the point colors according to the mesh textures. We also render 12 color images from the preset camera poses that uniformly cover the whole shape. For datasets providing thumbnails, we include them as part of image candidates, since they typically capture the shape from a better camera view. For the Objaverse dataset, we use the model name as the raw text for each shape. For other datasets, we utilize provided metadata to create raw texts (see supplementary for details). During each pretraining iteration, we randomly sample one rendered image or thumbnail for each shape, and apply standard augmentation to the point clouds [75]." }, { "figure_ref": [ "fig_0", "fig_2", "fig_2", "fig_2" ], "heading": "Text Filtering and Enrichment", "publication_ref": [ "b35", "b66", "b11", "b44", "b33", "b62", "b5", "b74" ], "table_ref": [], "text": "We find that only applying contrastive learning between 3D shapes and 2D images is insufficient to fuel zero-shot 3D classification, even when training on large-scale datasets. We conjecture that this is caused by the inherent domain gap in CLIP's language and image embedding spaces, which is also observed by previous studies [36,67]. Consequently, 3D-text alignment is not guaranteed even if we obtain good 3D-image alignments via contrastive learning. Therefore, we need to explicitly align 3D shapes with text. Along this process, to facilitate better 3D-text alignment, we introduce 3 techniques to improve the text quality: filtering, captioning, and image retrieval, as shown in Figure 2 (b).\nFiltering. As shown in Figure 3, the 3D shapes from our main dataset, Objaverse, is dominated with noisy text descriptions (\"names\") uploaded by web users. Many of the problematic texts can be identified from the text itself without seeing the corresponding 3D shape. We thus leverage a powerful Figure 4: Accuracy on Objaverse-LVIS [12] when scaling up the parameters of different models.\nlarge language model, GPT-4 [45], to filter out inaccurate or uninformative text descriptions. We find that GPT-4 excels at recognizing irrelevant contents, such as timestamps, pure model numbers, incomprehensible descriptions, random filenames (e.g., new project), and random characters. Through GPT-4, we filter out about 30% of raw user texts. Note that we only filter the texts, and still keep all shapes for training. More details, such as the prompts we used, are presented in the supplementary.\nCaptioning. We utilize BLIP [34] and the Azure cognition services to caption the 2D thumbnails (if present, or images rendered from a fixed frontal view) of the 3D models, obtaining two texts for each shape. As shown in Figure 3, the captioning models can usually produce meaningful and descriptive captions that either enhance user-uploaded texts or replace low-quality ones. We also notice that the two caption models complement each other, leading to better performance.\nImage Retrieval. In addition to image captioning, we also perform image retrieval to obtain additional descriptions of 3D models. We retrieve k-NN images of shape renderings from the LAION-5B dataset [63] using the CLIP ViT-L retrieval index [6]. We then take the captions of the k-NN images as the retrieved texts for our 3D models. Compared with captioning model generations, retrieved texts cover a wider range of text styles. They can also include more fine-grained semantics than both the user texts and the generated captions (e.g., \"Labrador\" in Figure 3).\nIn each iteration of pretraining, for each shape, we first randomly sample a text source category among the raw text (if unfiltered), the captions, and the retrieved texts. We then select a text candidate from the selected category. We also apply the template-based prompt engineering technique used in ULIP [75] to both training texts and test-time category names. Specifically, we extend a word or a phrase to a collection of templated simple sentences and take their average embedding." }, { "figure_ref": [], "heading": "Scaling Up 3D Point Cloud Backbones", "publication_ref": [ "b71", "b77", "b9" ], "table_ref": [ "tab_0" ], "text": "Previous works on 3D point cloud learning have primarily focused on smaller-scale datasets like ShapeNet. These techniques may not be directly applicable to our larger-scale ensembled dataset and need to be scaled up accordingly. We find that different 3D backbones may exhibit distinct behavior and scalability when trained on datasets with varying sizes. Specifically, we compare six popular backbones trained on ShapeNet or our ensembled dataset by evaluating their zero-shot classification performance on ModelNet40 [72] and Objaverse-LVIS datasets (for now, these backbones are trained with their original configurations and without scaling up model sizes). Objaverse-LVIS is a subset of Objaverse dataset with human-verified category labels. With 1,156 categories, it serves as a suitable dataset for evaluating zero-shot long-tail classification, and we exclude all shapes of Objaverse-LVIS from this experiment. Results are shown in Table 1. We find that when trained on ShapeNet, all backbones share similar performances. However, when trained on our ensembled dataset, the performance gap between backbones increases significantly. This suggests that while the original versions of these backbones share a similar number of parameters, some may have been saturated when trained on small datasets, while others do not.\nWe also explore the performance and scalability of these backbones when scaling up the model sizes and training on our ensembled dataset. Please refer to the supplementary for details on how we scale up each model. As shown in Figure 4, we observe that all 3D backbones benefit significantly from model scaling. However, traditional backbones without a shrinking hierarchical structure, such as DGCNN and PointNet, require operating completely on dense points or modeling the relationships (e.g., through kNN) between dense points. As a result, they become more time-consuming and memory-intensive when scaled up compared to more modern backbones. We therefore select PointBERT [78] (Transformer-based) and SparseConv [10] (convolution-based) as our 3D backbones for the remaining experiments, as they exhibit strong performance and scalability." }, { "figure_ref": [], "heading": "Hard Negative Mining", "publication_ref": [ "b55", "b29" ], "table_ref": [], "text": "Our ensembled dataset exhibits a high degree of class imbalance. Certain common categories, such as building, may occupy tens of thousands of shapes, while many other categories, such as walrus and wallet, are underrepresented with only a few dozen or even fewer shapes. Consequently, when randomly constructing batches, it is unlikely that shapes from two confusing categories (e.g., apples and cherries) will be contrasted within the same batch. Inspired by some previous works [56,30], we propose an offline hard negative mining strategy for improving the training efficiency and performance. Specifically, in the first round of training, we train our model with random batches until it is about to converge. We then compute the kNN for each shape in the learned 3D embedding space. In the second round of training, for each iteration, we randomly select s seed shapes and then obtain m neighbors from the kNN results of each seed shape, resulting s × m shapes per batch. In this way, confusing pairs are more likely to be selected in a single batch. However, this may also introduce false negative pairs (e.g., two apples) into contrastive learning. To mitigate this issue, we leverage image and text embeddings to filter out pairs sharing similar texts when calculating the contrastive loss. Specifically, for two shapes i and j selected from the same seed shape, if\nh T j • h I i + δ > h T i • h I i\n, where h T and h I are text and image embeddings, and δ is a small threshold, we believe that the text embeddings of i and j are very close to each other, and we remove j from i's negative examples when calculating contrastive loss. By employing this strategy to construct batches, we observe faster and better model learning." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Zero-Shot Shape Classification", "publication_ref": [ "b71", "b67", "b11", "b11", "b16", "b81", "b83", "b50", "b18", "b23", "b74", "b81", "b83", "b11", "b71", "b66" ], "table_ref": [], "text": "We evaluate the zero-shot classification performances of our models on three benchmarks: the traditional ModelNet40 [72] and ScanObjectNN [68], as well as a new benchmark, Objaverse-LVIS [12]. ModelNet40 and ScanObjacetNN consist of 40 and 15 common categories, respectively. Objaverse-LVIS is an annotated subset of Objaverse [12] and comprises 46,832 shapes among 1,156 LVIS [17] categories. With a much larger base of classes than other benchmarks, Objaverse-LVIS presents a challenging long-tailed distribution, making it a better reflection on models' performance in open-world scenarios. We compare OpenShape with existing zero-shot approaches, including PointCLIP [82], PointCLIPv2 [84], ReCon [51], CG3D [19], CLIP2Point [24], and ULIP [75]. Among them, PointCLIP [82] and PointCLIPv2 [84] project point clouds into 2D images and directly utilize 2D CLIP for inference, while other methods leverage the CLIP embedding spaces for Table 2: Zero-shot classification on Objaverse-LVIS [12], ModelNet40 [72], and ScanObjectNN [67]." }, { "figure_ref": [], "heading": "Method training shape", "publication_ref": [ "b11", "b71", "b67", "b11", "b71", "b66", "b74", "b7", "b71", "b40", "b70" ], "table_ref": [], "text": "Objaverse-LVIS [12] ModelNet40 [72] ScanObjectNN [68] source Top1 Top3 Top5 Top1 Top3 Top5 Top1 Top3 Top5\nPointCLIP [ Figure 5: Few-shot linear probing on Objaverse-LVIS [12], ModelNet40 [72], and ScanOb-jectNN [67]. We report the average performance over 10 random seeds. alignment and require 3D shapes for training. We report results on these baselines using their released checkpoints. To better analyze the source of our performance gains, we also retrain the baseline ULIP [75] on our ensembled shape dataset, but we use the original texts in the four constituent datasets along with the official codebase without backbone scaling. We train OpenShape and ULIP on three different sets of training shapes: \"Ensembled\" denotes using all shapes from the four datasets; \"Ensembled (no LVIS)\" is the same but excludes all shapes from the Objavserse-LVIS subset; \"ShapeNet\" only includes shapes from the ShapeNet [8] dataset. Note that even when LVIS shapes are included in the training shapes (i.e., the \"Ensembled\" dataset), their test-time category labels are probably not included in their training texts. Please refer to the supplementary for more training and evaluation details.\nTable 2 shows the results. We observe that OpenShape consistently outperforms prior approaches, even when trained only on ShapeNet. When models are trained on our larger-scale ensembled dataset, they receive a significant performance boost. In this case, OpenShape still surpasses retrained ULIP by a significant margin, demonstrating the advantages of our text enrichment, backbone scaling, and other training strategies. Specifically, OpenShape greatly improves the classification accuracy on the long tail categories in Objaverse-LVIS from a dull < 10% to 46.8%, outperforming the retrained ULIP by about 20 points and reaching a decent top-5 accuracy of 77.0%. These results demonstrate OpenShape's capability to recognize open-world objects effectively. As for ModelNet40, OpenShape achieves a 85.3% accuracy, surpassing previous methods by a substantial margin of at least 20 percent. OpenShape also achieves impressive top-3 and top-5 accuracies of 96.5% and 98.0%. To the best of our knowledge, this is the first time zero-shot methods have matched the performance of a fullysupervised 3D learning method on ModelNet40, where OpenShape outperforms fully-supervised 3D ShapeNets [72] and VoxNet [41]. In addition, on ScanObjectNN, which contains challenging real scans with noise and occlusion, OpenShape exhibits decent sim-to-real transfer capabilities. To contextualize, OpenShape-SparseConv achieves 56.7% zero-shot accuracy on ScanObjectNN without specific sim-to-real training, which surpasses 52.7% reported by SKPConv [71], a recent method specially designed for sim-to-real transfer in point cloud classification tasks. " }, { "figure_ref": [], "heading": "Few-Shot Linear Probing", "publication_ref": [ "b11", "b71", "b67", "b74", "b83" ], "table_ref": [], "text": "In the literature, linear probing is a common way to assess the representation learning capabilities of a model. To perform linear probing, we gather and freeze the representation vectors from all samples in a dataset. Subsequently, we train a linear classifier using these fixed vectors and few-shot class labels. We evaluate the accuracy of the linear classifier on three benchmarks: Objaverse-LVIS [12], ModelNet40 [72], and ScanObjectNN [68]. Figure 5 summarizes the performance of OpenShape in comparison with ULIP [75] (official release and our retrained versions) and PointCLIPv2 [84].\nOn the most challenging Objaverse-LVIS benchmark, OpenShape outperforms all other methods by a large margin. Notably, zero-shot OpenShape beats few-shot linear probes of other methods. On ModelNet40 and ScanObjectNN, we do not see a large performance margin between OpenShape and retrained ULIP. We hypothesize that for few-shot ModelNet40, the error is dominated by in-category sample bias rather than the representation quality; while for ScanObjectNN, the domain gap plays a major role. Since both OpenShape and retrained ULIP are exposed to the same source domain of training objects, their few-shot out-of-domain generalization performances tend to be similar." }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Ablation Study", "publication_ref": [ "b9", "b11", "b71" ], "table_ref": [ "tab_2" ], "text": "We perform various ablations by training a scaled version of SparseConv [10] on the ensembled dataset and then evaluate it on the Objaverse-LVIS [12] and ModelNet40 [72] zero-shot classification benchmarks, unless otherwise specified. The results are shown in Table 3 and Figures 6 and7.\nData and Model Scaling. We investigate the impact of training data by ablating (1) without or with only Objaverse shapes (Tab. 3) and ( 2) with different ratios of our ensembled dataset (Fig. 6). We observe that training with 1% of our ensembled dataset (about 8.8k shapes) achieves similar or better zero-shot performance than training without Objaverse shapes (about 77.1k shapes), indicating that the diversity of training data is sometimes more crucial than the scale. In addition, we compare the performances between scaled-up and non-scaled-up backbones. From Tab. 3, we demonstrate that model scaling plays an essential role when training on our large-scale ensembled dataset (also Fig. 4).\nText Filtering and Enrichment. As shown in Tab. 3, both text filtering and text enrichment are beneficial for performance. We also investigate the specific text enrichment strategies to use for the SparseConv and PointBERT backbones. In Fig. 7, we observe that both image captioning and text retrieval are helpful, and including both yield the best results. Notably, PointBERT improves more than 10 points from text enrichment, highlighting the significance of enhancing text quality.\nOther Aspects. We also conduct additional ablation studies on color information, contrastive loss components, and our hard-negative mining strategy in Tab. 3. We observe that OpenShape performs well with only xyz coordinates as input and no RGB color. While 3D-image contrastive loss is also helpful, we observe that 3D shape-text alignment plays a very essential role for model zero-shot generalization, which necessitates our text filtering and text enrichment strategies that significantly + \"in a large desert\"\n+ \"in the woods\"\n1. 2." }, { "figure_ref": [], "heading": "4.", "publication_ref": [], "table_ref": [], "text": "Cap�ons:\n1. The chair in the style of the 1920s. enhance text quality. Lastly, by employing our hard negative mining strategy, OpenShape effectively addresses the issue of unbalanced data distribution, leading to further improvements in performance." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Cross-Modal Applications", "publication_ref": [ "b43", "b53" ], "table_ref": [], "text": "Multi-modal 3D Shape Retrieval. Through OpenShape multi-modal representations, we can index and retrieve 3D shapes from images, texts, or point clouds. In this section, we retrieve 3D shapes from our ensembled dataset by calculating the cosine similarity between input embedding(s) and 3D shape embeddings and performing kNN. As shown in Figure 8, OpenShape is capable of retrieving visually or semantically similar shapes from a single image or point cloud input. OpenShape embeddings encode a wide range of visual and semantic concepts. In Figure 9, we show that OpenShape supports retrieving 3D shapes from detailed text descriptions, which include fine-grained subcategories, attributes, and their combinations. Note that these input texts are typically not present in the raw texts of the retrieved shapes, indicating that OpenShape effectively learns generalizable concepts across shapes. In Figure 1, we provide a demo which takes two 3D shapes as inputs and retrieves the shapes that are simultaneously closest to both inputs. This is achieved by finding arg max i min(h P i • h P a , h P i • h P b ), where h P a and h P b denote normalized shape embeddings of the two input shapes. We can see that the retrieved shapes integrate visual or semantic elements in an interesting manner, highlighting the rich concepts and priors encoded in OpenShape embeddings.\nShape-Conditioned Multimodal Generation. As OpenShape's 3D shape representations are aligned with CLIP's image and text embedding spaces, they can serve as inputs into other CLIP-based models to facilitate various multimodal generation applications. For example, we show that by feeding our 3D shape embeddings into ClipCap [44], an off-the-shelf image captioning model, along with Stable unCLIP [54], a text-to-image diffusion model, we can perform point cloud captioning and point cloud-conditioned image generation (optional text prompt supported) without extra training or finetuning. Qualitative results are shown in Figure 10. Please refer to the supplementary for more results and details." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "We introduce OpenShape, a novel approach for learning scalable and generalizable multi-modal joint representations for 3D shapes. OpenShape representations effectively capture a wide range of semantic and visual concepts, enabling superior capabilities for open-world 3D shape recognition. By aligning OpenShape with CLIP's embedding space, our shape embeddings can be integrated with off-the-shelf CLIP-based models for various cross-modality applications. Moving forward, there are several directions worth further exploration: (a) More 3D data. While we utilized 876k 3D shapes during training, this is still quite limited compared to the 2D counterparts. We hope that our work inspires future investments in more resources to build even more powerful 3D representations. (b) Part-level information. Our current shape representations mainly focus on global semantic and visual features, and it would be beneficial to add more part-level supervision during training. (c) Sim-to-real domain gap. Our model is mainly trained on synthetic data, and it's challenging but crucial to explore explicit designs for reducing the domain gap with real-world shapes.\n6 Appendix" }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "More Examples of Multi-Modal 3D Shape Retrieval", "publication_ref": [ "b11", "b11" ], "table_ref": [], "text": "In Figures 11 and12, we showcase more examples of multi-modal 3D shape retrieval.\nFigure 11: Image-input 3D shape retrieval. In each triplet, we present the input image and two 3D shapes retrieved using OpenShape embeddings from the Objaverse [12] dataset. Input images are from unsplash.com.\nFigure 12: Point cloud-input 3D shape retrieval. In each triplet, we present the input point cloud and two 3D shapes retrieved using OpenShape embeddings from the Objaverse [12] dataset." }, { "figure_ref": [ "fig_9", "fig_10" ], "heading": "More Examples of Shape-Conditioned Multimodal Generation", "publication_ref": [], "table_ref": [], "text": "In Figure 13 and Figure 14 " }, { "figure_ref": [], "heading": "Details on Raw Text Generation and Filtering", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Raw Text Generation", "publication_ref": [], "table_ref": [], "text": "We leverage the metadata from the four datasets to generate the raw texts. Although the original datasets may contain numerous attributes for each shape, we carefully choose the most informative ones to compose the text, ensuring its quality and relevance.\nObjaverse:We utilize the name associated with each shape to serve as the text.\nShapeNetCore: For each shape, we generate three types of texts: (a) the name, (b) the category name (with a total of 55 categories), and (c) the concatenation of the sub-category names (with a total of 336 sub-categories), separated by commas.\n3DFuture: For each shape, we generate two types of texts: (a) the category, and (b) the concatenation of category, style, theme, and material, separated by commas.\nABO: For each shape, we generate two types of texts: (a) the item_name, and (b) the product_type.\nIn this way, we generate one or more raw texts for each shape." }, { "figure_ref": [], "heading": "Raw Text Filtering", "publication_ref": [ "b44" ], "table_ref": [], "text": "We employ GPT-4 [45] to filter out uninformative raw texts. To accomplish this, we divide all the raw texts into batches, each containing 256 entries, and process each batch independently using GPT-4.\nHere is an example illustrating the prompt we used and the corresponding response generated by GPT-4.\nI am analyzing a 3D dataset with various text descriptions for the 3D models. However, many of these texts are inaccurate or uninformative, and therefore, not suitable as descriptions for 3D models. I need your help to identify such incorrect texts. Specifically, if a text primarily consists of irrelevant or uninformative content, such as timestamps, model numbers, incomprehensible descriptions, random filenames (e.g., \"my project\"), random characters, etc., please respond with \"N\". If a text contains a clear noun (or noun phrase) that could potentially describe a 3D object, please respond with \"Y\". You will find a list of texts below, and each line contains a three-digit ID and associated text. For each text, please respond with \"Y\" or \"N\", following the ID number (e.g., \"001 Y\" or \"002 N\"). Please evaluate all 256 texts. 000 New project ( Fine-tuning CLIP Text and Image Encoders? After training OpenShape-PointBERT, we conducted experiments to unfreeze and finetune the CLIP text encoder for a single epoch. However, the results obtained did not demonstrate any noticeable improvement on the benchmarks. Moreover, we observed that finetuning the CLIP text encoder could potentially undermine the generalization capabilities of CLIP and hinder the integration of OpenShape embeddings into existing CLIP-based models. As a result, we choose to freeze the CLIP encoders throughout the entire training process." }, { "figure_ref": [], "heading": "Evaluation Details", "publication_ref": [ "b74", "b71", "b67", "b74", "b11", "b71", "b67" ], "table_ref": [], "text": "We evaluated all baselines using their publicly released pretrained checkpoints. Additionally, we retrained ULIP [75] on our ensembled training shapes using their official code base and backbone networks. Note that the retrained ULIP model utilized the original raw texts from the four datasets during training (prompt engineering is also applied), rather than our filtered and enriched texts. For ModelNet40 [72], the evaluation is conducted on the test split with 2,468 shapes. Regarding ScanObjectNN [68], we follow ULIP [75] to evaluate on the OBJ_ONLY version, which contains 581 test shapes. For Objaverse-LVIS [12], the input is 10,000 sampled points with point colors. For ModelNet40 [72], the input is 10,000 sampled points without color. For ScanObjectNN [68], we utilize the official 2,048 points without color as input. All methods use the same input during evaluation. The forward inference time on an A100 GPU for a 10,000-point point cloud is approximately 0.9ms for OpenShape-SparseConv and 3.8ms for OpenShape-PointBERT." }, { "figure_ref": [], "heading": "Details on Shape-Conditioned Multimodal Generation", "publication_ref": [ "b43", "b53" ], "table_ref": [], "text": "Point Cloud Captioning CLIPCap [44] utilizes a 10-token prefix generated from CLIP image embeddings to enable GPT-2 for captioning. In order to align with the off-the-shelf CLIPCap model, we trained a variant of OpenShape-PointBERT that employs CLIP ViT-B/32 embeddings instead of OpenCLIP ViT-G/14 used in other experiments. Consequently, we directly input the point cloud encoding, without normalization, into CLIPCap for captioning.\nPoint Cloud Conditioned Image Generation We take the Stable Diffusion v2.1 unCLIP model [54] for image generation and replace the CLIP image condition encoder with our OpenShape encoder to perform image generation conditioned on point clouds (and optionally text prompts). The unCLIP model takes CLIP ViT-L/14 embeddings without normalization as input. To match the embedding space, we trained a variant of OpenShape-PointBERT with CLIP ViT-L/14 embeddings. Additionally, we noticed a significant mismatching of scales (L 2 -norm of embedding vectors) between ViT-L/14 image embeddings and OpenShape embeddings. To mitigate this issue, we perform a re-normalization on OpenShape embeddings to a L 2 -norm of 1 " }, { "figure_ref": [], "heading": "Details on the Backbone Scaling Experiment", "publication_ref": [ "b77", "b51", "b49", "b69", "b48" ], "table_ref": [], "text": "In Figure 4 of the main paper, we investigate the performance and scalability of various backbones when scaling up their model sizes. For this experiment, we employ a default resolution of 10,000 points for input point clouds, a batch size of 200, and conduct the experiment on a single A100 GPU. In general, if instructions are given in the original paper of a backbone, we scale up the model as instructed. Otherwise, we scale up the model by expanding width or depth (i.e., stacking blocks or layers). Specifically, we scale up each backbone as follow:\nPointBERT [78] The scaling parameters are shown in Table 4. We scaled PointBERT to 72.1M parameters beyond the 32.3M version reported in Figure 4 of the main paper. However, at this scale, the model dramatically overfits on the training data and performs worse on all benchmarks than the 32.3M version. PointNeXt [52] PointNeXt is proposed as a scalable version of PointNet++ [50], and includes S/B/L/XL variants in the original paper. We simply adopt these official configurations.\nDGCNN [70] and PointNet [49] For these two backbones without a hierarchical structure, we increase the width of each layer proportionally to scale up to 4xPointNet and 2xDGCNN before we hit the GPU memory limit. As the models operate completely on dense points, it is impractical to use the default 10k-point resolution. We thus reduce the input resolution for the two backbones, resulting in 1k points for DGCNN and 4k points for PointNet." }, { "figure_ref": [], "heading": "Details on Training and Evaluation", "publication_ref": [], "table_ref": [], "text": "Training Details We freeze the CLIP text and image encoders and train the 3D encoder and two projection heads on our ensembled dataset using the cross-modal contrastive loss. We train the model on a single A100 GPU with a batch size of 200. Since we precache the text and image CLIP embeddings of all shapes, the training is greatly accelerated and takes about 300 A100 hours for convergence. We utilize an exponential learning rate schedule, and employ an range test to find the initial learning rate. For 32.3M version of PointBERT, we utilize a learning rate of 5e -4; for 72.1M version of PointBERT, we utilize a learning rate of 4e -4; and for other models, we utilize a learning rate of 1e -3. For hard-negative mining, the number of seed shapes s is set to 40, and the number of neighbors m is set to 5 per shape, and the threshold δ is set to 0.1." } ]
We introduce OpenShape, a method for learning multi-modal joint representations of text, image, and point clouds. We adopt the commonly used multi-modal contrastive learning framework for representation alignment, but with a specific focus on scaling up 3D representations to enable open-world 3D shape understanding. To achieve this, we scale up training data by ensembling multiple 3D datasets and propose several strategies to automatically filter and enrich noisy text descriptions. We also explore and compare strategies for scaling 3D backbone networks and introduce a novel hard negative mining module for more efficient training. We evaluate OpenShape on zero-shot 3D classification benchmarks and demonstrate its superior capabilities for open-world recognition. Specifically, OpenShape achieves a zero-shot accuracy of 46.8% on the 1,156-category Objaverse-LVIS benchmark, compared to less than 10% for existing methods. OpenShape also achieves an accuracy of 85.3% on ModelNet40, outperforming previous zero-shot baseline methods by 20% and performing on par with some fully-supervised methods. Furthermore, we show that our learned embeddings encode a wide range of visual and semantic concepts (e.g., subcategories, color, shape, style) and facilitate fine-grained text-3D and image-3D interactions. Due to their alignment with CLIP embeddings, our learned shape representations can also be integrated with off-the-shelf CLIPbased models for various applications, such as point cloud captioning and point cloud-conditioned image generation.
OpenShape: Scaling Up 3D Shape Representation Towards Open-World Understanding
[ { "figure_caption": "Figure 2 :2Figure 2: (a) We ensemble four public 3D shape datasets, resulting in 876k shapes that encompass diverse categories and concepts. (b) We propose three strategies to automatically filter and enrich the noisy texts in the original datasets. (c) We train a 3D point cloud encoder to align the 3D shape embedding space with the CLIP's text and image embedding spaces. We perform cross-modal contrastive learning with scaled 3D backbones and hard negative mining. (d) OpenShape embeddings can be easily integrated with other CLIP-based models, enabling various cross-modality tasks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "name: \"homework xyz detailing\" GPT4: Remove \"Steampunk Goggles by MonoFlow on …\" \"Steampunk Goggles Made from hand ...\" azure: \"a pair of steampunk goggles\" blip: \"steampunk goggles 3d model\" name: \"Tue, 09 Oct 2018 17:12:39\" GPT4: Remove \"armchair\" \"some of the other props done Chair10\" azure: \"a blue plastic chair on a pink ...\" blip: \"a 3d model of a blue shaped object\" name: \"DOG A -1of6 -for free …\" GPT4: Keep \"Black Labrador in front of a white …\" \"Black Labrador puppy Vinyl Wall Mural\" azure: \"a black dog sitting on a blue...\" blip: \"a black dog sitting on a blue...\" name: \"untitled\" GPT4: Remove \"Nike AirMax 1 (Red/White)\" azure: \"a close up of a shoe\" blip: \"nike air max 1 -white / red\"\"nike air max red and white\"", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Text Filtering & Enrichment Examples In each example, the left section features the thumbnail, model name, and GPT-4 filtering results. The upper right section shows image captions from two captioning models, while the lower right section displays retrieved images and their corresponding texts.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "#of labeled training samples per class Top-1 Acc. (%)", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Ablation study on different text enrichment strategies.", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: 3D shape retrieval from image (left, mid) and point cloud (right).", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Text-input 3D shape retrieval. In each row, we show input texts on the left and two retrieved shapes for each text on the right. OpenShape embedding encodes a wide range of visual and semantic concepts and enables (a) retrieval of fine-grained subcategories (first two rows), and (b) control of attributes (e.g., color, shape, style) and their combinations (last two rows).", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "2 .Figure 10 :210Figure 10: (a) Point cloud captioning. (b) Point cloud-conditioned image generation. Our learned 3D shape embeddings can be integrated with off-the-shelf pretrained CLIP-based models (e.g., captioning and image generation models) to support various cross-modal applications.", "figure_data": "", "figure_id": "fig_7", "figure_label": "210", "figure_type": "figure" }, { "figure_caption": ", we showcase more examples of point cloud captioning and point cloud-conditioned image generation.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 13 :13Figure 13: Point cloud captioning. In each row, we show the input point clouds on the left and the generated captions on the right.", "figure_data": "", "figure_id": "fig_9", "figure_label": "13", "figure_type": "figure" }, { "figure_caption": "Figure 14 :14Figure 14: Point cloud-conditioned image generation. Each row shows three examples (input point clouds and generated images).", "figure_data": "", "figure_id": "fig_10", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Train on ShapeNet [8] Train on Ens-no-LVIS2/9,6$FF'*&11Model#Param.MNet40 O-LVIS MNet40 O-LVIS3RLQW1HW 3RLQW1H;W3RLQW%(57PointNet [49]1.3M67.09.374.924.46SDUVH&RQYDGCNN [70]2.3M67.89.074.224.80000PointMLP [40] 9.3M73.512.982.936.63DUDPHWHUVPointNeXt [52] 2.8M72.612.281.633.8PointBERT [78] 5.1M70.310.884.537.0SparseConv [10] 5.3M70.710.678.831.7std. dev.2.31.43.95.1", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation study. Top 1 zeroshot accuracies on ModelNet40[72] and Objaverse-LVIS[12] are shown.", "figure_data": "3RLQW%(57Variant No Objaverse shapes Only Objaverse shapes No backbone scale up No caption & retrievalO-LVIS MNet40 13.9 75.5 41.6 79.2 31.7 78.7 37.0 82.92/9,6$FF0RGHO1HW$FF2/9,6$FF6SDUVH&RQYNo text filtering41.482.9No point rgb, only xyz No text contras. learning39.6 23.383.6 67.4%DVH &DS 5HWU )XOONo image contras. learning41.081.0Figure 6: Ablation study onFull Full + hard mining42.0 43.483.1 83.4using different ratios of train-ing data.", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Afterwards, we combine all the responses to create the final filtering results, effectively removing approximately 30% of the raw texts.", "figure_data": "19 )000 N001 3December -Chemestry001 Y002 Fake Brand Soda Can002 Y003 Spartan Shild003 Y004 Apple3d004 Y005 Landmine005 Y006 FaunveinB-S006 N007 FIGURA 5007 N008 Sphero Blue008 Y009 Sofa009 Y010 Maddox010 N011 A3 Complete011 N012 Suspension Bridge012 Y013 Maung013 N014 Captain-americas-shield014 Y015 sphorb4015 N............", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" } ]
Minghua Liu; Ruoxi Shi; Kaiming Kuang; Yinhao Zhu; Xuanlin Li; Shizhong Han; Hong Cai; Fatih Porikli; Hao Su
[ { "authors": "Ahmed Abdelreheem; Ivan Skorokhodov; Maks Ovsjanikov; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Satr: Zero-shot semantic segmentation of 3d shapes", "year": "2023" }, { "authors": "Idan Achituve; Haggai Maron; Gal Chechik", "journal": "", "ref_id": "b1", "title": "Self-supervised learning for domain adaptation on point clouds", "year": "2021" }, { "authors": "Panos Achlioptas; Olga Diamanti; Ioannis Mitliagkas; Leonidas Guibas", "journal": "PMLR", "ref_id": "b2", "title": "Learning representations and generative models for 3d point clouds", "year": "2018" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katie Millican; Malcolm Reynolds", "journal": "", "ref_id": "b3", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Shivangi Aneja; Justus Thies; Angela Dai; Matthias Nießner", "journal": "", "ref_id": "b4", "title": "Clipface: Text-guided editing of textured 3d morphable models", "year": "2022" }, { "authors": "Romain Beaumont", "journal": "", "ref_id": "b5", "title": "Clip retrieval: Easily compute clip embeddings and build a clip retrieval system with them", "year": "2022" }, { "authors": "Zehranaz Canfes; M Furkan Atasoy; Alara Dirik; Pinar Yanardag", "journal": "", "ref_id": "b6", "title": "Text and image guided 3d avatar generation and manipulation", "year": "2023" }, { "authors": "Thomas Angel X Chang; Leonidas Funkhouser; Pat Guibas; Qixing Hanrahan; Zimo Huang; Silvio Li; Manolis Savarese; Shuran Savva; Hao Song; Su", "journal": "", "ref_id": "b7", "title": "Shapenet: An information-rich 3d model repository", "year": "2015" }, { "authors": "Runnan Chen; Youquan Liu; Lingdong Kong; Xinge Zhu; Yuexin Ma; Yikang Li; Yuenan Hou; Yu Qiao; Wenping Wang", "journal": "", "ref_id": "b8", "title": "Clip2scene: Towards label-efficient 3d scene understanding by clip", "year": "2023" }, { "authors": "Christopher Choy; Junyoung Gwak; Silvio Savarese", "journal": "", "ref_id": "b9", "title": "4d spatio-temporal convnets: Minkowski convolutional neural networks", "year": "2019" }, { "authors": "Jasmine Collins; Shubham Goel; Kenan Deng; Achleshwar Luthra; Leon Xu; Erhan Gundogdu; Xi Zhang; Tomas F Yago Vicente; Thomas Dideriksen; Himanshu Arora", "journal": "", "ref_id": "b10", "title": "Abo: Dataset and benchmarks for real-world 3d object understanding", "year": "2022" }, { "authors": "Matt Deitke; Dustin Schwenk; Jordi Salvador; Luca Weihs; Oscar Michel; Eli Vanderbilt; Ludwig Schmidt; Kiana Ehsani; Aniruddha Kembhavi; Ali Farhadi", "journal": "", "ref_id": "b11", "title": "Objaverse: A universe of annotated 3d objects", "year": "2022" }, { "authors": "Haowen Deng; Tolga Birdal; Slobodan Ilic", "journal": "", "ref_id": "b12", "title": "Ppf-foldnet: Unsupervised learning of rotation invariant 3d local descriptors", "year": "2018" }, { "authors": "Runyu Ding; Jihan Yang; Chuhui Xue; Wenqing Zhang; Song Bai; Xiaojuan Qi", "journal": "", "ref_id": "b13", "title": "Languagedriven open-vocabulary 3d scene understanding", "year": "2022" }, { "authors": "Benjamin Eckart; Wentao Yuan; Chao Liu; Jan Kautz", "journal": "", "ref_id": "b14", "title": "Self-supervised learning on 3d point clouds by learning discrete generative models", "year": "2021" }, { "authors": "Huan Fu; Rongfei Jia; Lin Gao; Mingming Gong; Binqiang Zhao; Steve Maybank; Dacheng Tao", "journal": "International Journal of Computer Vision", "ref_id": "b15", "title": "3d-future: 3d furniture shape with texture", "year": "2021" }, { "authors": "Agrim Gupta; Piotr Dollar; Ross Girshick", "journal": "", "ref_id": "b16", "title": "Lvis: A dataset for large vocabulary instance segmentation", "year": "2019" }, { "authors": "Huy Ha; Shuran Song", "journal": "", "ref_id": "b17", "title": "Semantic abstraction: Open-world 3d scene understanding from 2d vision-language models", "year": "2022" }, { "authors": "Deepti Hegde; Maria Jose Jeya; Valanarasu; M Vishal; Patel", "journal": "", "ref_id": "b18", "title": "Clip goes 3d: Leveraging prompt tuning for language grounded 3d recognition", "year": "2023" }, { "authors": "Georg Hess; Johan Jaxing; Elias Svensson; David Hagerman; Christoffer Petersson; Lennart Svensson", "journal": "", "ref_id": "b19", "title": "Masked autoencoder for self-supervised pre-training on lidar point clouds", "year": "2023" }, { "authors": "Georg Hess; Adam Tonderski; Christoffer Petersson; Lennart Svensson; Kalle Åström", "journal": "", "ref_id": "b20", "title": "Lidarclip or: How i learned to talk to point clouds", "year": "2022" }, { "authors": "Fangzhou Hong; Mingyuan Zhang; Liang Pan; Zhongang Cai; Lei Yang; Ziwei Liu", "journal": "", "ref_id": "b21", "title": "Avatarclip: Zero-shot text-driven generation and animation of 3d avatars", "year": "2022" }, { "authors": "Rui Huang; Xuran Pan; Henry Zheng; Haojun Jiang; Zhifeng Xie; Shiji Song; Gao Huang", "journal": "", "ref_id": "b22", "title": "Joint representation learning for text and 3d point cloud", "year": "2023" }, { "authors": "Tianyu Huang; Bowen Dong; Yunhan Yang; Xiaoshui Huang; W H Rynson; Wanli Lau; Wangmeng Ouyang; Zuo", "journal": "", "ref_id": "b23", "title": "Clip2point: Transfer clip to point cloud classification with image-depth pre-training", "year": "2022" }, { "authors": "Gabriel Ilharco; Mitchell Wortsman; Ross Wightman; Cade Gordon; Nicholas Carlini; Rohan Taori; Achal Dave; Vaishaal Shankar; Hongseok Namkoong; John Miller; Hannaneh Hajishirzi; Ali Farhadi; Ludwig Schmidt; Openclip", "journal": "", "ref_id": "b24", "title": "", "year": "2021-07" }, { "authors": "Ajay Jain; Ben Mildenhall; Jonathan T Barron; Pieter Abbeel; Ben Poole", "journal": "", "ref_id": "b25", "title": "Zero-shot text-guided object generation with dream fields", "year": "2022" }, { "authors": "Krishna Murthy; Jatavallabhula ; Alihusein Kuwajerwala; Qiao Gu; Mohd Omama; Tao Chen; Shuang Li; Ganesh Iyer; Soroush Saryazdi; Nikhil Keetha; Ayush Tewari", "journal": "", "ref_id": "b26", "title": "Conceptfusion: Open-set multimodal 3d mapping", "year": "2023" }, { "authors": "Nikolay Jetchev", "journal": "", "ref_id": "b27", "title": "Clipmatrix: Text-controlled creation of 3d textured meshes", "year": "2021" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b28", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Yannis Kalantidis; Bulent Mert; Noe Sariyildiz; Philippe Pion; Diane Weinzaepfel; Larlus", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b29", "title": "Hard negative mixing for contrastive learning", "year": "2020" }, { "authors": "Justin Kerr; Chung ; Min Kim; Ken Goldberg; Angjoo Kanazawa; Matthew Tancik", "journal": "", "ref_id": "b30", "title": "Lerf: Language embedded radiance fields", "year": "2023" }, { "authors": "Tianhao Nasir Khalid; Eugene Xie; Tiberiu Belilovsky; Popa", "journal": "", "ref_id": "b31", "title": "Text to mesh without 3d supervision using limit subdivision", "year": "2022" }, { "authors": "Han-Hung Lee; Angel X Chang", "journal": "", "ref_id": "b32", "title": "Understanding pure clip guidance for voxel grid nerf models", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b33", "title": "Blip: Bootstrapping languageimage pre-training for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Liunian Harold; Li ; Pengchuan Zhang; Haotian Zhang; Jianwei Yang; Chunyuan Li; Yiwu Zhong; Lijuan Wang; Lu Yuan; Lei Zhang; Jenq-Neng Hwang", "journal": "", "ref_id": "b34", "title": "Grounded language-image pre-training", "year": "2022" }, { "authors": "Yuhui Victor Weixin Liang; Yongchan Zhang; Serena Kwon; James Y Yeung; Zou", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b35", "title": "Mind the gap: Understanding the modality gap in multi-modal contrastive representation learning", "year": "2022" }, { "authors": "Minghua Liu; Yinhao Zhu; Hong Cai; Shizhong Han; Zhan Ling; Fatih Porikli; Hao Su", "journal": "", "ref_id": "b36", "title": "Partslip: Low-shot part segmentation for 3d point clouds via pretrained image-language models", "year": "2022" }, { "authors": "Zhengzhe Liu; Peng Dai; Ruihui Li; Xiaojuan Qi; Chi-Wing Fu", "journal": "", "ref_id": "b37", "title": "Iss: Image as stetting stone for text-guided 3d shape generation", "year": "2022" }, { "authors": "Yuheng Lu; Chenfeng Xu; Xiaobao Wei; Xiaodong Xie; Masayoshi Tomizuka; Kurt Keutzer; Shanghang Zhang", "journal": "", "ref_id": "b38", "title": "Open-vocabulary point-cloud object detection without 3d annotation", "year": "2023" }, { "authors": "Xu Ma; Can Qin; Haoxuan You; Yun Haoxi Ran; Fu", "journal": "", "ref_id": "b39", "title": "Rethinking network design and local geometry in point cloud: A simple residual mlp framework", "year": "2022" }, { "authors": "Daniel Maturana; Sebastian Scherer", "journal": "IEEE", "ref_id": "b40", "title": "Voxnet: A 3d convolutional neural network for realtime object recognition", "year": "2015" }, { "authors": "Benedikt Mersch; Xieyuanli Chen; Jens Behley; Cyrill Stachniss", "journal": "PMLR", "ref_id": "b41", "title": "Self-supervised point cloud prediction using 3d spatio-temporal convolutional networks", "year": "2022" }, { "authors": "Oscar Michel; Roi Bar-On; Richard Liu; Sagie Benaim; Rana Hanocka", "journal": "", "ref_id": "b42", "title": "Text2mesh: Textdriven neural stylization for meshes", "year": "2022" }, { "authors": "Ron Mokady; Amir Hertz; Amit H Bermano", "journal": "", "ref_id": "b43", "title": "Clipcap: Clip prefix for image captioning", "year": "2021" }, { "authors": " Openai", "journal": "", "ref_id": "b44", "title": "", "year": "2023" }, { "authors": "Yatian Pang; Wenxiao Wang; Francis Eh Tay; Wei Liu; Yonghong Tian; Li Yuan", "journal": "Springer", "ref_id": "b45", "title": "Masked autoencoders for point cloud self-supervised learning", "year": "2022" }, { "authors": "Songyou Peng; Kyle Genova; Chiyu Jiang; Andrea Tagliasacchi; Marc Pollefeys; Thomas Funkhouser", "journal": "", "ref_id": "b46", "title": "Openscene: 3d scene understanding with open vocabularies", "year": "2022" }, { "authors": "Omid Poursaeed; Tianxing Jiang; Han Qiao; Nayun Xu; Vladimir G Kim", "journal": "IEEE", "ref_id": "b47", "title": "Self-supervised learning of point clouds via orientation estimation", "year": "2020" }, { "authors": "Hao Charles R Qi; Kaichun Su; Leonidas J Mo; Guibas", "journal": "", "ref_id": "b48", "title": "Pointnet: Deep learning on point sets for 3d classification and segmentation", "year": "2017" }, { "authors": "Charles Ruizhongtai; Qi ; Li Yi; Hao Su; Leonidas J Guibas", "journal": "Advances in neural information processing systems", "ref_id": "b49", "title": "Pointnet++: Deep hierarchical feature learning on point sets in a metric space", "year": "2017" }, { "authors": "Zekun Qi; Runpei Dong; Guofan Fan; Zheng Ge; Xiangyu Zhang; Kaisheng Ma; Li Yi", "journal": "", "ref_id": "b50", "title": "Contrast with reconstruct: Contrastive 3d representation learning guided by generative pretraining", "year": "2023" }, { "authors": "Guocheng Qian; Yuchen Li; Houwen Peng; Jinjie Mai; Hasan Hammoud; Mohamed Elhoseiny; Bernard Ghanem", "journal": "", "ref_id": "b51", "title": "Pointnext: Revisiting pointnet++ with improved training and scaling strategies", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b52", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b53", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Yongming Rao; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b54", "title": "Global-local bidirectional reasoning for unsupervised representation learning of 3d point clouds", "year": "2020" }, { "authors": "Joshua Robinson; Ching-Yao Chuang; Suvrit Sra; Stefanie Jegelka", "journal": "", "ref_id": "b55", "title": "Contrastive learning with hard negative samples", "year": "2020" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b56", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "David Rozenberszki; Or Litany; Angela Dai", "journal": "", "ref_id": "b57", "title": "Language-grounded indoor 3d semantic segmentation in the wild", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Burcu Karagol Ayan; Sara Mahdavi; Rapha Gontijo Lopes", "journal": "", "ref_id": "b58", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Aditya Sanghi", "journal": "Springer", "ref_id": "b59", "title": "Info3d: Representation learning on 3d objects using mutual information maximization and contrastive learning", "year": "2020" }, { "authors": "Aditya Sanghi; Hang Chu; Ye Joseph G Lambourne; Chin-Yi Wang; Marco Cheng; Kamal Fumero; Rahimi Malekshan", "journal": "", "ref_id": "b60", "title": "Clip-forge: Towards zero-shot text-to-shape generation", "year": "2022" }, { "authors": "Jonathan Sauder; Bjarne Sievers", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b61", "title": "Self-supervised deep learning on point clouds by reconstructing space", "year": "2019" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b62", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Charu Sharma; Manohar Kaul", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b63", "title": "Self-supervised few-shot learning on point clouds", "year": "2020" }, { "authors": "Chao Sun; Zhedong Zheng; Xiaohan Wang; Mingliang Xu; Yi Yang", "journal": "", "ref_id": "b64", "title": "Point cloud pre-training by mixing and disentangling", "year": "2021" }, { "authors": "Ali Thabet; Humam Alwassel; Bernard Ghanem", "journal": "", "ref_id": "b65", "title": "Self-supervised learning of local features in 3d point clouds", "year": "2020" }, { "authors": "Vishaal Udandarao", "journal": "", "ref_id": "b66", "title": "Understanding and fixing the modality gap in vision-language models", "year": "2022" }, { "authors": "Angelina Mikaela; Quang-Hieu Uy; Binh-Son Pham; Thanh Hua; Sai-Kit Nguyen; Yeung", "journal": "", "ref_id": "b67", "title": "Revisiting point cloud classification: A new benchmark dataset and classification model on real-world data", "year": "2019" }, { "authors": "Hanchen Wang; Qi Liu; Xiangyu Yue; Joan Lasenby; Matt J Kusner", "journal": "", "ref_id": "b68", "title": "Unsupervised point cloud pre-training via occlusion completion", "year": "2021" }, { "authors": "Yue Wang; Yongbin Sun; Ziwei Liu; Sanjay E Sarma; Michael M Bronstein; Justin M Solomon", "journal": "Acm Transactions On Graphics (tog)", "ref_id": "b69", "title": "Dynamic graph cnn for learning on point clouds", "year": "2019" }, { "authors": "Jean-Baptiste Weibel; Timothy Patten; Markus Vincze", "journal": "", "ref_id": "b70", "title": "Sim2real 3d object classification using spherical kernel point convolution and a deep center voting scheme", "year": "2021" }, { "authors": "Zhirong Wu; Shuran Song; Aditya Khosla; Fisher Yu; Linguang Zhang; Xiaoou Tang; Jianxiong Xiao", "journal": "", "ref_id": "b71", "title": "3d shapenets: A deep representation for volumetric shapes", "year": "2015" }, { "authors": "Saining Xie; Jiatao Gu; Demi Guo; Leonidas Charles R Qi; Or Guibas; Litany", "journal": "Springer", "ref_id": "b72", "title": "Pointcontrast: Unsupervised pre-training for 3d point cloud understanding", "year": "2020" }, { "authors": "Jiale Xu; Xintao Wang; Weihao Cheng; Yan-Pei Cao; Ying Shan; Xiaohu Qie; Shenghua Gao", "journal": "", "ref_id": "b73", "title": "Dream3d: Zero-shot text-to-3d synthesis using 3d shape prior and text-to-image diffusion models", "year": "2022" }, { "authors": "Le Xue; Mingfei Gao; Chen Xing; Roberto Martín-Martín; Jiajun Wu; Caiming Xiong; Ran Xu; Juan Carlos Niebles; Silvio Savarese", "journal": "", "ref_id": "b74", "title": "Ulip: Learning unified representation of language, image and point cloud for 3d understanding", "year": "2022" }, { "authors": "Jihan Yang; Runyu Ding; Zhe Wang; Xiaojuan Qi", "journal": "", "ref_id": "b75", "title": "Regionplc: Regional point-language contrastive learning for open-world 3d scene understanding", "year": "2023" }, { "authors": "Xumin Yu; Lulu Tang; Yongming Rao; Tiejun Huang; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b76", "title": "Point-bert: Pre-training 3d point cloud transformers with masked point modeling", "year": "2022" }, { "authors": "Xumin Yu; Lulu Tang; Yongming Rao; Tiejun Huang; Jie Zhou; Jiwen Lu", "journal": "", "ref_id": "b77", "title": "Point-bert: Pre-training 3d point cloud transformers with masked point modeling", "year": "2022" }, { "authors": "Yihan Zeng; Chenhan Jiang; Jiageng Mao; Jianhua Han; Chaoqiang Ye; Qingqiu Huang; Dit-Yan Yeung; Zhen Yang; Xiaodan Liang; Hang Xu", "journal": "", "ref_id": "b78", "title": "Clipˆ2: Contrastive language-image-point pretraining from real-world point cloud data", "year": "2023" }, { "authors": "Haotian Zhang; Pengchuan Zhang; Xiaowei Hu; Yen-Chun Chen; Liunian Harold Li; Xiyang Dai; Lijuan Wang; Lu Yuan; Jenq-Neng Hwang; Jianfeng Gao", "journal": "", "ref_id": "b79", "title": "Glipv2: Unifying localization and vision-language understanding", "year": "2022" }, { "authors": "Junbo Zhang; Runpei Dong; Kaisheng Ma", "journal": "", "ref_id": "b80", "title": "Clip-fo3d: Learning free open-world 3d scene representations from 2d dense clip", "year": "2023" }, { "authors": "Renrui Zhang; Ziyu Guo; Wei Zhang; Kunchang Li; Xupeng Miao; Bin Cui; Yu Qiao; Peng Gao; Hongsheng Li", "journal": "", "ref_id": "b81", "title": "Pointclip: Point cloud understanding by clip", "year": "2022" }, { "authors": "Zaiwei Zhang; Rohit Girdhar; Armand Joulin; Ishan Misra", "journal": "", "ref_id": "b82", "title": "Self-supervised pretraining of 3d features on any point-cloud", "year": "2021" }, { "authors": "Xiangyang Zhu; Renrui Zhang; Bowei He; Ziyao Zeng; Shanghang Zhang; Peng Gao", "journal": "", "ref_id": "b83", "title": "Pointclip v2: Adapting clip for powerful 3d open-world learning", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 108.79, 280.67, 392.97, 28.06 ], "formula_id": "formula_0", "formula_text": "- 1 4n i log exp(h P i • h T i /τ ) j exp(h P i • h T j /τ ) + log exp(h T i • h P i /τ ) j exp(h T i • h P j /τ ) + log exp(h P i • h I i /τ ) j exp(h P i • h I j /τ ) + log exp(h I i • h P i /τ ) j exp(h I i • h P j /τ ) (1" }, { "formula_coordinates": [ 4, 501.76, 302.69, 2.71, 6.05 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 4, 108, 309.76, 397.25, 24.3 ], "formula_id": "formula_2", "formula_text": "h P i = f P (P i )/|f P (P i )|, h T i = g T (f T (T i ))/|g T (f T (T i ))|,and" }, { "formula_coordinates": [ 4, 263.68, 321.73, 118.13, 12.32 ], "formula_id": "formula_3", "formula_text": "h I i = g I (f I (I i ))/|g I (f I (I i ))" }, { "formula_coordinates": [ 6, 415.07, 513.42, 86.7, 12.32 ], "formula_id": "formula_4", "formula_text": "h T j • h I i + δ > h T i • h I i" }, { "formula_coordinates": [ 9, 108, 303.19, 53.93, 6.13 ], "formula_id": "formula_5", "formula_text": "1. 2." } ]
10.1145/3539618.3591765
2023-08-12
[ { "figure_ref": [ "fig_1" ], "heading": "INTRODUCTION", "publication_ref": [ "b1", "b5", "b27", "b0", "b13", "b16", "b2", "b4", "b9", "b14", "b30", "b9", "b14", "b30", "b0" ], "table_ref": [], "text": "Heterogeneous graph-structured data widely exists in the real world, such as social networks, academic networks, user interaction networks, etc. In order to take advantage of the rich structural and semantic information in heterogeneous graphs, heterogeneous graph neural networks (HGNNs) have been increasingly used in information retrieval (IR) applications, ranging from search engines [2,6,28] and recommendation systems [1,14,17] to question answering systems [3,5].\nHGNNs can integrate structural and semantic information in heterogeneous graphs into node representations to meet downstream tasks. Existing HGNNs [10,15,31] usually deploy multiple layers of graph convolution (i.e., message passing) to capture the neighborhood information of low-order and high-order neighbors in a graph. For a particular node, each layer of convolutions represents it as one single vector, which is the input of the next higher layer. Consequently, the single vector incorporates mixed neighbor information from different relationships and distinct orders. That is, higher-level convolutions are incapable of distinguishing messages from various sources by a single vector which leads to structural information loss and difficulty in refining message passing strategy. Here we take a classic graph learning for instance, as shown in Figure 1, a sampled sub-graph contains target node 𝑡, two source nodes (𝑠 1 and 𝑠 2 ) and two 2-hop source nodes (𝑠𝑠 1 and 𝑠𝑠 2 ). The 𝑠𝑠 1 and 𝑠𝑠 2 are the source node of (𝑠 1 and 𝑠 2 ), respectively. Existing methods [10,15,31] usually conduct graph convolution operations twice to learn the node representation. Through the first layer of graph convolution, the target node 𝑡 and its neighbors H [ ] t (1) H [ ] t\n(1) 1" }, { "figure_ref": [], "heading": "H [ ] s", "publication_ref": [], "table_ref": [], "text": "(1) 2" }, { "figure_ref": [], "heading": "H [ ] s", "publication_ref": [ "b0" ], "table_ref": [], "text": "(0)\nH [ ] t\n(0) 1 H [ ] s (0) 2 H [ ] s (0) 1 H [ ] s (0) 1 H [ ] ss (0) 2 H [ ] s (0) 2 H [ ] ss 1 s 2 s 1 ss 2 ss (0) H [ ] t (0) 1 H [ ] s (0) 2 H [ ] s (0) 1 H [ ] s (0) 1 H [ ] ss (0) 2 H [ ] s (0) 2\nH [ ] ss (1) H [ ] t\n(1) 1" }, { "figure_ref": [], "heading": "H [ ] s", "publication_ref": [], "table_ref": [], "text": "(1) 2" }, { "figure_ref": [], "heading": "H [ ] s", "publication_ref": [], "table_ref": [], "text": "(2)\nH [ ] t (source nodes) are represented as H (1) [𝑡], H (1) [𝑠 1 ], and H (1) [𝑠 2 ], respectively, which are used as the input of the next layer of graph convolution computation. The information in 𝑠 1 and 𝑠𝑠 1 is mixed in H (1) [𝑠 1 ] and information of 𝑠 2 and 𝑠𝑠 2 is mixed in H (1) [𝑠 2 ]. Based on the H (1) [𝑡], H (1) [𝑠 1 ], and H (1) [𝑠 2 ], the second layer of graph convolution cannot distinguish the information from 𝑠𝑠 1 and 𝑠 1 and the information from 𝑠𝑠 2 and 𝑠 2 ." }, { "figure_ref": [ "fig_1" ], "heading": "Previous Works Ours", "publication_ref": [ "b12", "b8" ], "table_ref": [], "text": "Intuitively, the semantics learned from each layer and each relation can reflect different-grained features, which strongly correlate to the different tasks, while the mixtures of all information may lead to sub-optimal results for the downstream tasks.\nAlong this line, we propose a novel heterogeneous graph neural network with sequential node representation (Seq-HGNN), which learns representations of meta-paths and fuses them into highquality node representations. Specifically, we first propose a sequential node representation learning mechanism that performs message passing over all meta-paths within fixed hops and represents each node as a sequence of meta-path representation. As Figure 1 illustrates, after the calculation of two Seq-HGNN layers, Seq-HGNN can automatically capture the information of all meta-paths and their combinations within 2 hops, which are respectively stored in multiple independent vectors. These vectors then form a sequence as the representation of target 𝑡 (i.e. H (2) [𝑡]). The sequential representation enables higher Seq-HGNN layers to naturally distinguish messages from different meta-paths. Secondly, we design a heterogeneous representation fusion module to transform the sequence-based node representations into a compact representation, which can be used in various downstream tasks. Also, Seq-HGNN can benefit the discovery of effective entities and relations by estimating the importance of different meta-paths. Finally, we conduct extensive experiments on real-world datasets. The experimental results show that Seq-HGNN achieves the best performance compared with several state-of-the-art baselines.\nOur contributions can be summarized as follows:\n• We propose a novel heterogeneous graph representation learning model with sequential node representation, namely Seq-HGNN. To the best of our knowledge, the Seq-HGNN is the first work to represent nodes as sequences, which can provide better representations by recording messages passing along multiple meta-paths intact.\n• We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) [13] and Open Graph Benchmark (OGB) [9] to demonstrate the advantage of our model over state-of-the-art baselines. • Our model performs good interpretability by analyzing the attention weight of meta-paths in heterogeneous graphs." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the related work on heterogeneous graph neural networks and the applications of heterogeneous graph neural networks in the field of information retrieval." }, { "figure_ref": [], "heading": "Heterogeneous graph neural networks", "publication_ref": [ "b14", "b30", "b21", "b20", "b3", "b22", "b30", "b6", "b32", "b26", "b9", "b19", "b23", "b25", "b29", "b34" ], "table_ref": [], "text": "Heterogeneous graph neural networks (HGNNs) are proposed to deal with heterogeneous graph data. Some HGNNs apply graph convolution directly on original heterogeneous graphs. RGCN [15] is a widely-used HGNN, which sets different transfer matrices for different relations in heterogeneous graphs. R-HGNN [31] learned different node representations under each relation and fuses representations from different relations into a comprehensive representation. Other HGNNs used meta-paths to adopt homogeneous-graphbased methods on the heterogeneous graph. For instance, HAN [22] utilized GAT [21] to calculate node-level and semantic-level attention on meta-path-based sub-graphs. MAGNN [4] introduced intra-meta-path aggregation and inter-meta-path aggregation to capture information on the heterogeneous graph. HeCo [23] selected positive sample nodes based on meta-path on heterogeneous graph comparative learning. The meta-path-based methods require manual-designed meaningful meta-paths and can not be applied in large-scale heterogeneous graphs limited by the computational complexity [31]. To overcome the disadvantages of meta-path, Het-SANN [7] aggregated multi-relational information of projected nodes by attention-based averaging. GTN [33] and ie-HGCN [27] were designed to discover effective meta-paths for the target nodes. HGT [10] introduced the dot product attention mechanism [20] into heterogeneous graph learning, which can learn the implicit meta-paths. These methods represented each node as one single vector, which means confounding messages from different relations and orders, resulting in the loss of structural information.\nIn more recent years, in light of learning comprehensive node representations, some researchers adopted Simplified Graph Convolutional Network (SGC) [24]-based methods for heterogeneous graph processing [26,30,35]. The core points of them focused on subgraph division and preprocessing. To be specific, these methods first divided a heterogeneous graph into several relation-driven subgraphs based and then conducted simple message passing and pre-computation in the preprocessing stage. However, there are two main drawbacks with this design making them unsuitable for application scenarios: Firstly, multiple downstream tasks are needed to meet the requirements of different messaging passing. For instance, in link prediction tasks, models need to mask some links in the graph, while using SGC-based methods means performing multiple separate preprocessing pipelines, resulting in high computational consumption for various downstream tasks. Secondly, SGC-based methods necessitate learning a distinct set of model parameters for each class of nodes in a heterogeneous graph, with no correlation between parameters of different node types. Such approaches lack the capacity for transfer learning across diverse node types. Specifically, the training and optimization of a particular node type in a heterogeneous graph using SGC-based methods do not contribute to performance enhancement in predicting other node types.\nUnlike previous works, our model implements sequential node representation, which records messages from all meta-paths within a fixed step and achieves better performance and interpretability. Moreover, our model possesses end-to-end learning capabilities, enabling it to handle various downstream tasks with a more general and simplified workflow." }, { "figure_ref": [], "heading": "HGNNs applications in IR", "publication_ref": [ "b1", "b5", "b31", "b28", "b0", "b13", "b16", "b2", "b4" ], "table_ref": [], "text": "In recent years, heterogeneous graph neural networks (HGNNs) have emerged as a powerful tool for extracting rich structural and semantic information from heterogeneous graphs, and have consequently found numerous applications in information retrieval (IR) domains.\nIn the realm of search engines and matching, Chen et al. [2] proposed a cross-modal retrieval method using heterogeneous graph embeddings to preserve abundant cross-modal information, addressing the limitations of conventional methods that often lose modality-specific information in the process. Guan et al. [6] tackled the problem of fashion compatibility modeling by incorporating user preferences and attribute entities in their meta-path-guided heterogeneous graph learning approach. Yuan et al. [32] introduced the Spatio-Temporal Dual Graph Attention Network (STDGAT) for intelligent query-Point of Interest (POI) matching in location-based services, leveraging semantic representation, dual graph attention, and spatiotemporal factors to improve matching accuracy even with partial query keywords. Yao et al. [29] proposed a knowledgeenhanced person-job fit approach based on heterogeneous graph neural networks, which can use structural information to improve the matching accuracy of resumes and positions.\nRecommendation systems have also benefited from HGNNs. Cai et al. [1] presented an inductive heterogeneous graph neural network (IHGNN) model to address the sparsity of user attributes in cold-start recommendation systems. Pang et al. [14] proposed a personalized session-based recommendation method using heterogeneous global graph neural networks (HG-GNN) to capture user preferences from current and historical sessions. Additionally, Song et al. [17] developed a self-supervised, calorie-aware heterogeneous graph network (SCHGN) for food recommendation, incorporating user preferences and ingredient relationships to enhance recommendations.\nHGNNs have also garnered attention from scholars in the field of question-answering systems. For example, Feng et al. [3] proposed a document-entity heterogeneous graph network (DEHG) to integrate structured and unstructured information sources, enabling multi-hop reasoning for open-domain question answering. Gao et al. [5] introduced HeteroQA, which uses a question-aware heterogeneous graph transformer to incorporate multiple information sources from user communities." }, { "figure_ref": [], "heading": "PRELIMINARIES", "publication_ref": [], "table_ref": [], "text": "Heterogeneous Graph: Heterogeneous graph is defined as a directed graph 𝐺 = (𝑉 , 𝐸), with node type mapping 𝜏 : 𝑉 → 𝐴 and edge type mapping 𝜙 : 𝐸 → 𝑅, where 𝑉 is the node set, 𝐸 is the edge set, 𝐴 and 𝑅 represent the set of node types and edge types respectively, and |𝐴| + |𝑅| > 2.\nRelation: For an edge 𝑒 = (𝑠, 𝑡) linked from source node 𝑠 to target node 𝑡, the corresponding relation is 𝑟 =< 𝜏 (𝑠), 𝜙 (𝑒), 𝜏 (𝑡) >. A heterogeneous graph can be considered a collection of triples consisting of source nodes 𝑠 linked to the target nodes 𝑡 through edges 𝑒.\nRelational Bipartite Graph: Given a heterogeneous graph 𝐺 and a relation 𝑟 , the bipartite graph 𝐺 𝑟 is defined as a graph composed of all the edges of the corresponding type of the relation 𝑟 . In other words, 𝐺 𝑟 contains all triples < 𝑠, 𝑒, 𝑡 >, where the relation 𝜙 (𝑒) = 𝑟 . Meta-path: Meta-path 𝑃 is defined as a path with the following form:\n𝐴 1 𝑟 1 --→ 𝐴 2 𝑟 2 --→ • • • 𝑟 𝑙 -1 ---→ 𝐴 𝑙 (abbreviated as 𝐴 1 𝐴 2 • • • 𝐴 𝑙 )\n, where 𝐴 𝑖 ∈ 𝐴, 𝑟 𝑖 ∈ 𝑅. The meta-path describes a composite relation between node types 𝐴 1 and 𝐴 𝑙 , which expresses specific semantics.\nGraph Representation Learning: Given a graph 𝐺 = (𝑉 , 𝐸), graph representation learning aims to learn a function 𝑉 → R 𝑑 , 𝑑 ≪ |𝑉 | to map the nodes in the graph to a low-dimensional vector space while preserving both the node features and the topological structure information of the graph. These node representation vectors can be used for a variety of downstream tasks, such as node classification and link prediction." }, { "figure_ref": [ "fig_3" ], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "The overview of the proposed Seq-HGNN is shown in Figure 2. The Seq-HGNN is composed of multiple Seq-HGNN Layers and a Heterogeneous Representation Fusion module. The Seq-HGNN Layers aggregate the information provided by the source node 𝑠, and update the representation of the target node 𝑡. We denote the output representation of the 𝑙-th layer as H (𝑙 ) , which is also the input of the (𝑙 + 1)-th layer (1 ≤ 𝑙 ≤ 𝐿). By stacking 𝐿 Seq-HGNN Layers, each target node 𝑡 can receive higher-order neighbor information. The Seq-HGNN Layer consists of three modules: Sequential Node Representation, Transformer-based Message Passing and Sequential Node Path Representation Update. Among them, the Sequential Node Representation transforms each node into a set of representation vectors. The Transformer-based Message Passing generates neighbor messages for the target node by aggregating the information of neighbors (source nodes). The Sequential Node Representation Update computes a new representation for 𝑡 based on the representation from the previous layer and the received neighbor messages. Finally, the Heterogeneous Representation Fusion module estimates the importance of meta-paths and fuses the representations of meta-paths to a single vector as node representation, which can be utilized in downstream tasks.\nL F Meta-Paths L  Sampled Sub-Graph 1 Enc r W 2 Enc r W 3 Enc r W F Q W F K W F V W 1 K 1 D 1 A M 2 A 3 A 2 K 1 K 2 K 1 D M 1 A 2 A 3 A 1 { D M } r = → 2 { K M } r = → 3 { A M } r = → 1 K 2 K M 1 D M 1 A 2 A 3 A M 1 ( ) [ M ] l r M s g 2 ( ) [ M ] l r M s g 3 ( ) [ M ] l r M s g ( 1) [M] l - H ( ) (M) l Adopt  W D M → K M → A M → ( ) [M] L H [M] H fus [M] A ( ) [M]" }, { "figure_ref": [], "heading": "Sequential Node Representation", "publication_ref": [ "b9", "b14", "b30" ], "table_ref": [], "text": "In heterogeneous graphs, the nodes often have multiple attributes and receive messages from multiple types of nodes. For example, in a heterogeneous graph from a movie review website, a Movie node usually contains multiple description attributes such as Storyline, Taglines, Release date, etc. Existing methods only support representing each node as a single vector, which implies that the multiple properties of each node are confused into one vector. This causes information loss of node representation. Different from the above-mentioned graph representation learning methods [10,15,31], we represent each node as one sequence of vectors, which can record multiple properties of node and messages from multiple meta-paths intact. Concretely, given a node 𝑖, we first design a type-specific transform matrix 𝑊 𝜏 (𝑖 ) to convert features 𝑥 𝑖 of node 𝑖 to the same space:\n𝐻 (0) 𝑓 [𝑖] = 𝑊 𝜏 (𝑖 ) 𝑓 • 𝑥 𝑖 𝑓 + 𝑏 𝜏 (𝑖 ) 𝑓 ,(1)\nwhere 𝜏 (𝑖) is the node type of node 𝑖; 1 ≤ 𝑓 ≤ 𝐹 \nH (0) [𝑖] = 𝐹 (0) 𝜏 (𝑖 ) 𝑓 𝐻 (0) 𝑓 [𝑖] ,(2)\nwhere is the concatenation operation and ×𝑑 is a sequence with the length of 𝐹\nH (0) [𝑖] ∈ R 𝐹 (0) 𝜏 (𝑖 )\n𝜏 (𝑖 ) . It is worth noting that our proposed sequential node representation is independent of time series. During the message passing, our model always represents each node as one sequence of vectors. Each vector in the sequence can represent either the meta-path information or a specific feature attribute of the node. For a detailed description, please refer to Section 4.2 and 4.3." }, { "figure_ref": [], "heading": "Transformer-based Message Passing", "publication_ref": [], "table_ref": [], "text": "The message-passing module aggregates the information of neighbors (source nodes) on each relational bipartite graph to generate neighbor messages for the target node." }, { "figure_ref": [], "heading": "Neighbor Importance Estimation.", "publication_ref": [ "b9", "b19" ], "table_ref": [], "text": "Before the neighbor message generation, we first estimate the importance of these neighbors. We utilize the mutual attention [10,20] to calculate the importance of source nodes to the target node. Specifically, we first project the representations of the target node 𝑡 and its neighbors (source nodes 𝑠) to multiple Query vectors Q and Key vectors K, respectively.\nQ (𝑙 ) [𝑡] = 𝐹 (𝑙 -1) 𝜏 (𝑡 ) 𝑓 W Query (𝑙 ) 𝜏 (𝑡 ) 𝐻 (𝑙 -1) 𝑓 [𝑡] + 𝑏 Query (𝑙 ) 𝜏 (𝑡 ) ,(3)\nK (𝑙 ) [𝑠] = 𝐹 (𝑙 -1) 𝜏 (𝑠 ) 𝑓 W Key (𝑙 )\n𝜏 (𝑠 ) 𝐻 (𝑙 -1) 𝑓\n[𝑠] + 𝑏\nKey (𝑙 ) 𝜏 (𝑠 ) ,(4)\nwhere W\nQuery (𝑙 ) 𝜏 (𝑡 ) ∈ R 𝑑 ×𝑑 and W Key (𝑙 )\n𝜏 (𝑠 ) ∈ R 𝑑 ×𝑑 are type-specific trainable transformation matrices for source node 𝑠 and target node 𝑡; 𝑏\nQuery (𝑙 ) 𝜏 (𝑡 )\nand 𝑏 Key (𝑙 ) 𝜏 (𝑠 ) are bias vectors. The shapes of Q (𝑙 ) [𝑡] and K (𝑙 ) [𝑠] are 𝐹 represent the length of sequence representations of 𝑡 and 𝑠 in the (𝑙 -1) layer, respectively.\nWe regard the attention weights of the source node 𝑠 to the target node 𝑡 as the importance of 𝑠 to 𝑡. Since the nodes would play different roles in different relations, we calculate the attention weights on each bipartite graph separately. More specifically, we denote the set of source nodes connected by the target node 𝑡 in the bipartite graph 𝐺 𝑟 as 𝑁 𝑟 (𝑡), where 𝑟 ∈ R. Then, the attention weights can be formulated as:\nAttn (𝑙 ) 𝑟 [𝑠, 𝑡] = Softmax ∀𝑠 ∈𝑁 𝑟 (𝑡 ) K (𝑙 ) [𝑠]𝑊 ATT (𝑙 ) 𝑟 Q (𝑙 ) [𝑡] ⊤ • 1 √ 𝑑 ,(5)\nwhere Attn " }, { "figure_ref": [], "heading": "Neighbor Message Generation.", "publication_ref": [], "table_ref": [], "text": "According to the importance of neighbors, the Seq-HGNN aggregates the neighbor information and treats it as the neighbor messages for 𝑡.\nFirst, Seq-HGNN extracts features of the source node 𝑠 in each bipartite graph 𝐺 𝑟 separately as follows:\nExt (𝑙 ) 𝑟 [𝑠] = 𝐹 (𝑙 -1) 𝜏 (𝑠 ) 𝑓 𝑊 EXT (𝑙 ) 𝑟 W Value (𝑙 ) 𝜏 (𝑠 ) 𝐻 (𝑙 -1) 𝑓 [𝑠] + 𝑏 Value (𝑙 ) 𝜏 (𝑠 ) ,(6)\nwhere Ext is the bias;\n𝑊 EXT (𝑙 )" }, { "figure_ref": [], "heading": "𝑟", "publication_ref": [], "table_ref": [], "text": "is the transform matrix for the relation 𝑟 . Then, we can obtain the neighbor messages for 𝑡 under relation 𝑟 as follows:\nMsg (𝑙 ) 𝑟 [𝑡] = ∑︁ ∀𝑠 ∈𝑁 𝑟 (𝑡 ) Attn (𝑙 ) 𝑟 [𝑠, 𝑡] ⊤ Ext (𝑙 ) 𝑟 [𝑠] ,(7)\nwhere Msg\n(𝑙 ) 𝑟 [𝑡] ∈ R 𝐹 (𝑙 -1)\n𝜏 (𝑡 ) ×𝑑 is a sequence with the same shape as the node representation H (𝑙 -1) [𝑡], and 𝑁 𝑟 (𝑡) is the set of neighbors (source nodes) of the target node 𝑡 in the bipartite graph 𝐺 𝑟 ." }, { "figure_ref": [], "heading": "Sequential Node Representation Update", "publication_ref": [], "table_ref": [], "text": "After the message passing process, the target node 𝑡 receives messages Msg (𝑙 ) 𝑟 [𝑡] from multiple relations 𝑟 ∈ 𝑅. Based on the received messages and the representations from the previous layer H ( 𝑙 -1) [𝑡], we get the updated node representation of 𝑡.\nFirst, we concatenate the message sequences from different relation types with relation-aware encoding as follows:\nH (𝑙 ) [𝑡] = ∥ ∀𝑟 ∈𝑅 (𝑡 ) Msg (𝑙 ) 𝑟 [𝑡],(8)" }, { "figure_ref": [], "heading": "Msg", "publication_ref": [], "table_ref": [], "text": "(𝑙 )\n𝑟 [𝑡] = Msg (𝑙 ) 𝑟 [𝑡] ⊕ 𝑊 Enc 𝑟 ,(9)\nwhere 𝑅(𝑡) is the set of relation types whose target node type is 𝜏 (𝑡); 𝑊 Enc 𝑟 ∈ R 𝑑 is the relation encoding for relation 𝑟 , which is a learnable vector to distinguish messages from different relation types; ⊕ represents that the relation encoding is added to each vector in the sequence.\nThen, we concatenate the representations of the target node from the last layer and encoded messages to obtain a new representation of the target node 𝑡:\nH (𝑙 ) [𝑡] = H (𝑙 -1) [𝑡] ∥ W Adopt (𝑙 ) 𝜏 (𝑡 ) H (𝑙 ) [𝑡] ,(10)\nwhere ×𝑑 is the updated representations of target node 𝑡; W Adopt (𝑙 ) 𝜏 (𝑡 ) ∈ R 𝑑 ×𝑑 is a transformation matrix corresponding to the 𝜏 (𝑡).\nH (𝑙 ) [𝑡] ∈ R 𝐹 (𝑙 ) 𝜏 (𝑡 )\nWe denote that the number of relation types connected to the target node 𝑡 is len(𝑅(𝑡)), then the length of the sequential representations for target node 𝑡 grows according to the following:\n𝐹 (𝑙 ) 𝜏 (𝑡 ) = 𝐹 (𝑙 -1) 𝜏 (𝑡 ) × (len(𝑅(𝑡)) + 1) ,(11)\nwhere 𝐹 (𝑙 -1) 𝜏 (𝑡 ) and 𝐹 (𝑙 ) 𝜏 (𝑡 ) represent the length of the sequential representation for node 𝑡 in the (𝑙 -1)-th and 𝑙-th layers, respectively. Referring to Equation 10 and Equation 11, we can summarize that in sequential node representation, information from a node itself and low-order neighbors is located at the beginning of the sequence, followed by high-order information. As deeper Seq-HGNN Layers are performed, information from higher-order neighbors is appended to the sequence." }, { "figure_ref": [], "heading": "Heterogeneous Representation Fusion", "publication_ref": [ "b19", "b9", "b20", "b30" ], "table_ref": [], "text": "After the 𝐿-layer Seq-HGNN computation, each target node 𝑡 is represented by a sequence with length 𝐹 (𝐿) 𝜏 (𝑡 ) , which are the representations of the 𝑡 from multiple meta-paths. We utilize the self attention [20] mechanism to fuse the sequential representations of the target node 𝑡 into a single vector. During the representation fusion, Seq-HGNN can identify the effective meta-paths for downstream tasks.\n𝑄 fus [𝑡] = mean H (0) [𝑡] 𝑊 FQ , 𝐾 fus [𝑡] = H (𝐿) [𝑡] 𝑊 FK , 𝑉 fus [𝑡] = H (𝐿) [𝑡] 𝑊 FV , 𝐴 fus [𝑡] = Softmax 𝑄 fus [𝑡]𝐾 fus [𝑡] ⊤ √ 𝑑 , H [𝑡] = 𝐴 fus [𝑡]𝑉 fus [𝑡],(12)\nwhere 𝜏 (𝑖 ) stands for the importance of each representation for node 𝑡, which is also the importance of meta-paths.\nH [𝑡] ∈ R 𝑑 is\nReferring to [10,21,31], we adopt the multi-head attention mechanism during the message passing and representation fusion. The output of the multi-head attention is concatenated into a 𝑑dimensional representation to enhance the stability of the model. In addition, we randomly drop out some fragments of the sequential representation of each node in training loops, which can help the Seq-HGNN model learn more meaningful node representations." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we evaluate the performance of Seq-HGNN by conducting experiments on multiple datasets." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b12", "b8" ], "table_ref": [ "tab_1" ], "text": "We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) [13] 1 and Open Graph Benchmark (OGB) [9] 2 . Specifically, three medium-scale datasets, DBLP, IMDB and ACM, are from HGB. A large-scale dataset MAG comes from OGB. Their statistics are shown in Table 1.\n• DBLP is a bibliography website of computer science 3 . This dataset contains four types of nodes: Author, Paper, Term and Venue. In this data set, models need to predict the research fields of authors. • IMDB is extracted from the Internet Movie Database (IMDb) 4 .\nIt contains four types of nodes: Movie, Director, Keyword and Actor. Models need to divide the movie into 5 categories: \"Romance\", \"Thriller\", \"Comedy\", \"Action, Drama\". The model needs to predict the venues in which the papers are published." }, { "figure_ref": [], "heading": "Results Analysis", "publication_ref": [ "b14", "b33", "b21", "b3", "b35", "b7", "b9", "b12", "b25" ], "table_ref": [ "tab_3", "tab_3" ], "text": "5.2.1 Results on HGB Benchmark. Table 3 shows the results of Seq-HGNN on the three datasets compared to the baselines in the HGB benchmark. Baselines are divided into two categories: meta-path-based methods and meta-path-free methods. Meta-path based methods include RGCN [15], HetGNN [34], HAN [22] and MAGNN [4]. The meta-path-free methods are RSHN [36], Het-SANN [8], HGT [10], HGB [13] and SeHGNN [26]. The results of the baselines are from HGB and their original papers. As shown in Table 3, our proposed method achieves the best performance on ACM and DBLP datasets. In detail, Seq-HGNN gains improvement beyond the best baseline on macro-f1 by (1.2%, 0.4%) and on mirco-f1 by (0.5%, 0.4%), respectively. On the IMDB dataset, our method achieves the best micro f1 scores and the second-best macro f1 scores. The performance difference between IMDB and the other two datasets may be due to the following two reasons: (1) Domain difference: DBLP and ACM are datasets in the academic domain while IMDB comes from the film domain. (2) Task difference: IMDB is a multiple-label classification task, but ACM and DBLP are not." }, { "figure_ref": [], "heading": "Results on OGB-MAG.", "publication_ref": [ "b18", "b10", "b17", "b24" ], "table_ref": [ "tab_3" ], "text": "Since some types of nodes in the MAG dataset have no initial features, existing methods usually utilize unsupervised representation methods to generate node embeddings (abbreviated as emb) as initial features. For a fair comparison, we also use the unsupervised representation learning method (Com-plEx [19]) to generate node embeddings. In addition, some baseline methods on the list also adopt multi-stage learning [11,18,25] (abbreviated as ms) tricks to improve the generalization ability of the model. Therefore, we also explored the performance of Seq-HGNN under the multi-stage training.\nAs shown in Table 3, Seq-HGNN achieves the best performance compared to the baseline on the ogb leaderboard 6 . It shows that our method can not only mine information in heterogeneous graphs more effectively, but also reflect good scalability to be applied to large-scale graphs." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "One of the core contributing components in Seq-HGNN is to explore how to effectively exploit the structural information in heterogeneous graphs. So we design three variants of our model to verify their effects, namely Seq-HGNN w/o seq, Seq-HGNN w/o fus, and Seq-HGNN w/o rel. The performance of these variants on the " }, { "figure_ref": [], "heading": "Experiment Setup Detail", "publication_ref": [ "b11", "b15", "b9", "b12", "b25" ], "table_ref": [ "tab_2", "tab_3" ], "text": "We use the PyTorch Geometric framework 2.07 to implement the Seq-HGNN. The source code is available at https://github.com/ nobrowning/SEQ_HGNN. We set the node embedding dimension 𝑑 = 512, and the number of attention heads to 8. The number of layers 𝐿 is set to 2 on the DBLP, IMDB and MAG datasets and to 3 on the ACM dataset. During the training process, we set the dropout rate to 0.5, and the maximum epoch to 150. We use the AdamW optimizer [12] with a maximum learning rate of 0.0005 and tune the learning rate using the OneCycleLR strategy [16]. For DBLP, ACM, and IMDB datasets, we use full batch training. For the large-scale dataset MAG, we use the HGTLoader8 subgraph sampling strategy [10], setting the batch size to 256, sampling depth to 3, sample number to 1800. We iterate 250 batches in each epoch.\nThe results of the baselines in Table 2 and Table 3 mainly come from previous works [13,26]. All experiments can be conducted on a Linux machine with Intel(R) Core(R) i7 8700 CPU, 32G RAM, and a single NVIDIA GeForce RTX 3090 GPU. " }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Training Efficiency", "publication_ref": [], "table_ref": [], "text": "In Seq-HGNN, sequential node representations are computed in parallel. Therefore, Seq-HGNN achieves decent computational efficiency. To further investigate the computational efficiency of Seq-HGNN, we conduct experiments to compare the training time of Seq-HGNN with a state-of-the-art baseline, i.e., SeHGNN.\nTo achieve a fair comparison, we subject all models to the same accuracy performance validation -making a test on the test set every one train epoch. The variation of test accuracy of the models with training time is shown in Figure 3.\nAs shown in Figure 3, Seq-HGNN performs the highest accuracy within the least training time. It verifies that Seq-HGNN has good computational efficiency when dealing with heterogeneous graphs. As a comparison, the baseline (SeHGNN) outputs nothing within 42 seconds of starting training. The reason is that SeHGNN cannot directly learn node representations on heterogeneous graphs. It requires a message-passing step before node representation generation. In the message passing step, SeHGNN collects the features of neighbor nodes of the target on all meta-paths. Therefore, the messaging step shows a high time-consuming." }, { "figure_ref": [ "fig_9", "fig_9" ], "heading": "Parameter Sensitivity Analysis", "publication_ref": [], "table_ref": [], "text": "We study the sensitivity analysis of parameters in Seq-HGNN. Specifically, we conduct experiments on the large-scale dataset OGB-MAG to explore the influence of the number of layers, the dropout rate, and the dimension of node representation. Since the model needs to conduct a sub-graph sampling on the large-scale dataset, we also explore the influence of sampling node numbers. To simplify the evaluation process, we opted not to employ a multistage training strategy in the parameter sensitivity experiment. The results are shown in Figure 4, where each subfigure shows the accuracy of classification on the y-axis and hyperparameters on the x-axis. It can be seen that as the dimension increases, the performance of Seq-HGNN gradually increases. After the dimension is higher than 256, the performance improvement slows down. 5.6.3 Dropout rate. We adjust the dropout rate during the model training and report the results in Figure 4 (c). We can observe that Seq-HGNN performs best when the dropout rate is 0.5. A high dropout rate would lead to underfitting and poor performance, while a low dropout rate may lead to overfitting." }, { "figure_ref": [ "fig_9" ], "heading": "Number of layers.", "publication_ref": [], "table_ref": [], "text": "We explore the performance of our model while stacking from 1 to 3 Seq-HGNN Layers. The experimental results are shown in Figure 4 (d). It can be seen that Seq-HGNN achieves the best performance when it is stacked with 2 layers. On this basis, the performance of Seq-HGNN becomes worse when more layers are stacked. This may be caused by over-smoothing issues." }, { "figure_ref": [ "fig_12", "fig_12", "fig_12" ], "heading": "Visualization of Effective Meta-Paths", "publication_ref": [], "table_ref": [], "text": "As mentioned in Section 4.4, 𝐴 fus in the Heterogeneous Representation Fusion module indicates the importance of different representations of a node, i.e., the importance of a node on different meta-paths. To visualize how the heterogeneous fusion module of Seq-HGNN identifies the most contributing meta-paths, we present the effective meta-paths in node representation learning on DBLP, IMDB, ACM and MAG datasets, respectively. The most important meta-paths for these target node representations are shown in Figure 5. It is noteworthy that our model can individually identify the significant metapaths characterizing each node. In order to simplify the visualization, we aggregate the propagation path weights of nodes by node type in Figure 5. Due to the large number of metapaths, here, we only show the top five important paths in each sub-figure . \nComparing the four sub-figure in Figure 5, we can find that the important paths for distinct nodes are obviously different. It verifies that the Seq-HGNN can estimate the path importance separately for different nodes, rather than treat them equally. " }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we proposed a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN. To avoid the information loss caused by the single vector node representation, we first design a sequential node representation learning mechanism to represent each node as a sequence of meta-path representations during the node message passing. Then we propose a heterogeneous representation fusion module, empowering Seq-HGNN to identify important meta-paths and aggregate their representations into a compact one. Third, we conducted extensive experiments on four widely-used datasets from open benchmarks and clearly validated the effectiveness of our model. Finally, we visualized and analyzed effective meta-path paths in different datasets, and verified that Seq-HGNN can provide deep insights into the heterogeneous graphs." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This research work is supported by the National Key Research and Development Program of China under Grant No. 2019YFA0707204, the National Natural Science Foundation of China under Grant Nos. 62176014, 62276015, the Fundamental Research Funds for the Central Universities." } ]
Recent years have witnessed the rapid development of heterogeneous graph neural networks (HGNNs) in information retrieval (IR) applications. Many existing HGNNs design a variety of tailor-made graph convolutions to capture structural and semantic information in heterogeneous graphs. However, existing HGNNs usually represent each node as a single vector in the multi-layer graph convolution calculation, which makes the high-level graph convolution layer fail to distinguish information from different relations and different orders, resulting in the information loss in the message passing. To this end, we propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN. To avoid the information loss caused by the single vector node representation, we first design a sequential node representation learning mechanism to represent each node as a sequence of meta-path representations during the node message passing. Then we propose a heterogeneous representation fusion module, empowering Seq-HGNN to identify important meta-paths and aggregate their representations into a compact one. We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB). Experimental results show that our proposed method outperforms state-ofthe-art baselines in both accuracy and efficiency. The source code is available at https://github.com/nobrowning/SEQ_HGNN.
Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph
[ { "figure_caption": "Figure 1 :1Figure 1: The comparison of node representation updates. The shapes of the nodes represent different node types.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "( 3 )3Sequential Node Representation Update (2) Transformer-based Message Passing", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "lHFigure 2 :2Figure 2: The overview of our proposed Seq-HGNN. Given a heterogeneous sub-graph containing a target node M and six source nodes, Seq-HGNN first learns a sequential node representation of M (i.e. H (𝐿) [M]), and then fuses the representation H (𝐿) [M] for multiple downstream tasks. In the sub-graph, M, K, A, and D represent node types Movie, Keyword, Actor, Director, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "( 0 )0𝜏 (𝑖 ) ; 𝐹(0) 𝜏 (𝑖 ) is the number of 𝑖's features; 𝑥 𝑖 𝑓 is the 𝑓 -th initialized feature in the feature sequence of 𝑖; 𝐻 (0) 𝑓 [𝑖] ∈ R 𝑑 is the node features after the transform; 𝑏 𝜏 (𝑖 ) 𝑓 is the bias; 𝑑 is the dimension of features. Next, we concatenate the 𝐹 (0) 𝜏 (𝑖 ) transformed representations of node 𝑖 to get an input sequence H (0) [𝑖] for the Seq-HGNN model:", "figure_data": "", "figure_id": "fig_4", "figure_label": "0", "figure_type": "figure" }, { "figure_caption": "(𝑙 - 1 )1𝜏 (𝑡 ) ×𝑑 and 𝐹 (𝑙 -1) 𝜏 (𝑠 ) ×𝑑, respectively. 𝐹 (𝑙 -1) 𝜏 (𝑡 ) and 𝐹 (𝑙 -1) 𝜏 (𝑠 )", "figure_data": "", "figure_id": "fig_5", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "𝑟 [𝑠, 𝑡] is the importance estimation of the source node 𝑠 to the target node 𝑡 on relation 𝑟 , and 𝑊 ATT (𝑙 ) 𝑟 ∈ R 𝑑 ×𝑑 is the transform matrix for relation 𝑟 .Unlike the existing attention-based approaches[10,21,31], the attention weight Attn (𝑙 ) 𝑟 [𝑠, 𝑡] is a matrix with the shape 𝐹 (𝑙 -1) 𝜏 (𝑠 ) × 𝐹 (𝑙 -1) 𝜏 (𝑡 ) rather than a scalar. Each element in Attn (𝑙 ) 𝑟 [𝑠, 𝑡] represents the attention weight of an item in the representation sequence of 𝑠 to an item in the representation sequence of 𝑡.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "𝜏 (𝑠 )×𝑑 is the extracted message from the source node 𝑠 under the relation 𝑟 ; W Value (𝑙 ) 𝜏 (𝑠 ) ∈ R 𝑑 ×𝑑 is the transformation matrix for for the node type 𝜏 (𝑠); 𝑏 Value (𝑙 )", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The comparison of training efficiency.", "figure_data": "", "figure_id": "fig_8", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Parameter Sensitivity of Seq-HGNN.", "figure_data": "", "figure_id": "fig_9", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "5. 6 . 161Number of node samples. Since Seq-HGNN uses HGT-Loader for sampling sub-graphs in the node classification task, we explore the effect of node sampling number on the performance of Seq-HGNN. As shown in Figure4(a), Seq-HGNN achieves the best performance when the number of samples is set as 1800. 5.6.2 Dimension of node representation. We report the experimental result varied with the dimension of node representation in Figure 4 (b).", "figure_data": "", "figure_id": "fig_10", "figure_label": "61", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of the significant meta-paths for representing the target nodes (Author, Movie, Paper, Paper) in respective datasets (DBLP, IMDB, ACM, MAG). In the figures, the nodes with superscripts 1 and 2 represent the direct neighbors and the second-order neighbors of the target node, respectively.", "figure_data": "", "figure_id": "fig_12", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "the final representation of the target node 𝑡; 𝑊 FQ , 𝑊 FK and 𝑊 FV are all learnable matrices of dimension 𝑑 × 𝑑; 𝑄 fus [𝑡] is generated by original features of target node 𝑡; 𝐴 fus [𝑡] ∈", "figure_data": "𝐹(𝑙 )R", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "• ACM is also a citation network. It contains four types of nodes: Paper, Author, Subject (Conference) and Term. The Paper nodes are divided into 3 categories: \"database\", \"wireless communication\" and \"data mining\". The model needs to predict the category the paper belongs to. • MAG is a heterogeneous academic network extracted from the Microsoft Academic Graph 5 , consisting of Paper, Author, Field and Institution. Papers are published in 349 different venues. Each paper is associated with a Word2Vec feature. The model needs to predict the category the paper belongs to. Statistics of datasets.", "figure_data": "name #Nodes#Node Types#Edges#Edge TypesTarget #ClassesDBLP26,1284239,5666author4IMDB21,420486,6426movie5ACM10,9424547,8728paper3MAG 1,939,743421,111,0074paper349", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experiment results on the three datasets from the HGB benchmark. The best results are in bold, and the second-best results are underlined.", "figure_data": "DBLPIMDBACMmacro-f1micro-f1macro-f1micro-f1macro-f1micro-f1RGCN91.52±0.50 92.07±0.50 58.85±0.26 62.05±0.15 91.55±0.74 91.41±0.75Metapath-basedHetGNN91.76±0.43 92.33±0.41 48.25±0.67 51.16±0.65 85.91±0.25 86.05±0.25methodsHAN91.67±0.49 92.05±0.62 57.74±0.96 64.63±0.58 90.89±0.43 90.79±0.43MAGNN93.28±0.51 93.76±0.45 56.49±3.20 64.67±1.67 90.88±0.64 90.77±0.65RSHN93.34±0.58 93.81±0.55 59.85±3.21 64.22±1.03 90.50±1.51 90.32±1.54Metapath-free methodsHetSANN HGT HGB78.55±2.42 80.56±1.50 49.47±1.21 57.68±0.44 90.02±0.35 89.91±0.37 93.01±0.23 93.49±0.25 63.00±1.19 67.20±0.57 91.12±0.76 91.00±0.76 94.01±0.24 94.46±0.22 63.53±1.36 67.36±0.57 93.42±0.44 93.35±0.45SeHGNN95.06±0.17 95.42±0.17 67.11±0.25 69.17±0.43 94.05±0.35 93.98±0.36Seq-HGNN 96.27±0.24 95.96±0.31 66.77±0.24 69.31±0.27 94.41±0.26 94.33±0.31Ours-w/o seq -w/o fus93.79±0.34 93.51±0.38 64.32±0.56 67.04±0.62 92.44±0.67 92.17±0.72 95.59±0.14 95.92±0.13 65.01±0.37 67.43±0.32 93.21±0.48 93.20±0.50-w/o rel95.49±0.23 95.64±0.18 64.78±0.41 69.09±0.39 93.76 ±0.43 93.67±0.46MethodsValidation accuracy Test accuracyRGCN48.35±0.3647.37±0.48HGT49.89±0.4749.27±0.61NARS51.85±0.0850.88±0.12SAGN52.25±0.3051.17±0.32GAMLP53.23±0.2351.63±0.22HGT+emb51.24±0.4649.82±0.13NARS+emb53.72±0.0952.40±0.16GAMLP+emb55.48±0.0853.96±0.18SAGN+emb+ms55.91±0.1754.40±0.15GAMLP+emb+ms57.02±0.4155.90±0.27SeHGNN+emb56.56±0.0754.78±0.17SeHGNN+emb+ms59.17±0.0957.19±0.12Seq-HGNN+emb56.93±0.1155.27±0.34Seq-HGNN+emb+ms59.21±0.0857.76±0.26", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Experiment results on the large-scale dataset MAG, where \"emb\" means using extra embeddings and \"ms\" means using multi-stage training. The best results are in bold, and the second-best results are underlined.HGB dataset is shown in Table2. The details of these variants are as follows: It works on the final representation of the node, in which it drops the heterogeneous representation fusion module, instead using the average representation sequence output sent by the last layer of Seq-HGNN.Comparing Seq-HGNN w/o fus and Seq-HGNN, it can be found that the performance decreases after removing the heterogeneous fusion module. It illustrates the importance of recognizing the most contributing meta-path.", "figure_data": "• Seq-HGNN w/o seq. It does not use the sequential noderepresentation. After each layer of graph convolution, mul-tiple node representations from different relationships areaggregated into a vector representation by the mean oper-ation. Finally, the Seq-HGNN w/o seq concatenates theoutput of each graph convolutional layer as the final outputfor the downstream tasks. Comparing Seq-HGNN w/o seqand Seq-HGNN, it can be found that after introducing se-quential node representation, the performance of the modelcan be significantly improved. It proves that sequential noderepresentations indeed retain richer and more effective nodeinformation.", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Chenguang Du; Kaichun Yao; Hengshu Zhu; Deqing Wang; Fuzhen Zhuang; Hui Xiong
[ { "authors": "Desheng Cai; Shengsheng Qian; Quan Fang; Jun Hu; Changsheng Xu", "journal": "ACM Trans. Inf. Syst", "ref_id": "b0", "title": "User Cold-Start Recommendation via Inductive Heterogeneous Graph Neural Network", "year": "2022" }, { "authors": "Dapeng Chen; Min Wang; Haobin Chen; Lin Wu; Jing Qin; Wei Peng", "journal": "ACM", "ref_id": "b1", "title": "Cross-Modal Retrieval with Heterogeneous Graph Embedding", "year": "2022-10-10" }, { "authors": "Yue Feng; Zhen Han; Mingming Sun; Ping Li", "journal": "", "ref_id": "b2", "title": "Multi-Hop Open-Domain Question Answering over Structured and Unstructured Knowledge", "year": "2022-07-10" }, { "authors": "Xinyu Fu; Jiani Zhang; Ziqiao Meng; Irwin King", "journal": "", "ref_id": "b3", "title": "MAGNN: Metapath Aggregated Graph Neural Network for Heterogeneous Graph Embedding", "year": "2020-04-20" }, { "authors": "Shen Gao; Yuchi Zhang; Yongliang Wang; Yang Dong; Xiuying Chen; Dongyan Zhao; Rui Yan", "journal": "ACM", "ref_id": "b4", "title": "HeteroQA: Learning towards Question-and-Answering through Multiple Information Sources via Heterogeneous Graph Modeling", "year": "2022-02-21" }, { "authors": "Fangkai Weili Guan; Xuemeng Jiao; Haokun Song; Chung-Hsing Wen; Xiaojun Yeh; Chang", "journal": "ACM", "ref_id": "b5", "title": "Personalized Fashion Compatibility Modeling via Metapathguided Heterogeneous Graph Learning", "year": "2022-07-11" }, { "authors": "Huiting Hong; Hantao Guo; Yucheng Lin; Xiaoqing Yang; Zang Li; Jieping Ye", "journal": "AAAI Press", "ref_id": "b6", "title": "An Attention-Based Graph Neural Network for Heterogeneous Structural Learning", "year": "2020-02-07" }, { "authors": "Huiting Hong; Hantao Guo; Yucheng Lin; Xiaoqing Yang; Zang Li; Jieping Ye", "journal": "AAAI Press", "ref_id": "b7", "title": "An Attention-Based Graph Neural Network for Heterogeneous Structural Learning", "year": "2020-02-07" }, { "authors": "Weihua Hu; Matthias Fey; Marinka Zitnik; Yuxiao Dong; Hongyu Ren; Bowen Liu; Michele Catasta; Jure Leskovec", "journal": "", "ref_id": "b8", "title": "Open Graph Benchmark: Datasets for Machine Learning on Graphs", "year": "2020-12-06" }, { "authors": "Ziniu Hu; Yuxiao Dong; Kuansan Wang; Yizhou Sun", "journal": "", "ref_id": "b9", "title": "Heterogeneous Graph Transformer", "year": "2020-04-20" }, { "authors": "Qimai Li; Zhichao Han; Xiao-Ming Wu", "journal": "AAAI Press", "ref_id": "b10", "title": "Deeper Insights Into Graph Convolutional Networks for Semi-Supervised Learning", "year": "2018-02-02" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b11", "title": "Decoupled Weight Decay Regularization", "year": "2019-05-06" }, { "authors": "Qingsong Lv; Ming Ding; Qiang Liu; Yuxiang Chen; Wenzheng Feng; Siming He; Chang Zhou; Jianguo Jiang; Yuxiao Dong; Jie Tang", "journal": "ACM", "ref_id": "b12", "title": "Are we really making much progress?: Revisiting, benchmarking and refining heterogeneous graph neural networks", "year": "2021-08-14" }, { "authors": "Yitong Pang; Lingfei Wu; Qi Shen; Yiming Zhang; Zhihua Wei; Fangli Xu; Ethan Chang; Bo Long; Jian Pei", "journal": "ACM", "ref_id": "b13", "title": "Heterogeneous Global Graph Neural Networks for Personalized Session-based Recommendation", "year": "2022-02-21" }, { "authors": "Sejr Michael; Thomas N Schlichtkrull; Peter Kipf; Rianne Bloem; Van Den; Ivan Berg; Max Titov; Welling", "journal": "Springer", "ref_id": "b14", "title": "Modeling Relational Data with Graph Convolutional Networks", "year": "2018-06-03" }, { "authors": "N Leslie; Nicholay Smith; Topin", "journal": "SPIE", "ref_id": "b15", "title": "Super-convergence: Very fast training of neural networks using large learning rates", "year": "2019" }, { "authors": "Yaguang Song; Xiaoshan Yang; Changsheng Xu", "journal": "ACM Trans. Multimedia Comput. Commun. Appl", "ref_id": "b16", "title": "Self-Supervised Calorie-Aware Heterogeneous Graph Networks for Food Recommendation", "year": "2022" }, { "authors": "Ke Sun; Zhouchen Lin; Zhanxing Zhu", "journal": "AAAI Press", "ref_id": "b17", "title": "Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labeled Nodes", "year": "2020-02-07" }, { "authors": "Théo Trouillon; Johannes Welbl; Sebastian Riedel; Éric Gaussier; Guillaume Bouchard", "journal": "", "ref_id": "b18", "title": "Complex Embeddings for Simple Link Prediction", "year": "2016-06-19" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Lukasz Kaiser; Illia Polosukhin", "journal": "", "ref_id": "b19", "title": "Attention is All you Need", "year": "2017-09" }, { "authors": "Petar Velickovic; Guillem Cucurull; Arantxa Casanova; Adriana Romero; Pietro Liò; Yoshua Bengio", "journal": "", "ref_id": "b20", "title": "Graph Attention Networks", "year": "2018-04-30" }, { "authors": "Xiao Wang; Houye Ji; Chuan Shi; Bai Wang; Yanfang Ye; Peng Cui; Philip S Yu", "journal": "ACM", "ref_id": "b21", "title": "Heterogeneous Graph Attention Network", "year": "2019-05-13" }, { "authors": "Xiao Wang; Nian Liu; Hui Han; Chuan Shi", "journal": "ACM", "ref_id": "b22", "title": "Self-supervised Heterogeneous Graph Neural Network with Co-contrastive Learning", "year": "2021-08-14" }, { "authors": "Felix Wu; Amauri H Souza; Tianyi Zhang; Christopher Fifty; Tao Yu; Kilian Q Weinberger", "journal": "PMLR", "ref_id": "b23", "title": "Simplifying Graph Convolutional Networks", "year": "2019-06" }, { "authors": "Han Yang; Xiao Yan; Xinyan Dai; Yongqiang Chen; James Cheng", "journal": "IEEE", "ref_id": "b24", "title": "Self-Enhanced GNN: Improving Graph Neural Networks Using Model Outputs", "year": "2021-07-18" }, { "authors": "Xiaocheng Yang; Mingyu Yan; Shirui Pan; Xiaochun Ye; Dongrui Fan", "journal": "", "ref_id": "b25", "title": "Simple and Efficient Heterogeneous Graph Neural Network", "year": "2022" }, { "authors": "Yaming Yang; Ziyu Guan; Jianxin Li; Wei Zhao; Jiangtao Cui; Quan Wang", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b26", "title": "Interpretable and efficient heterogeneous graph convolutional network", "year": "2021" }, { "authors": "Zuoxi Yang", "journal": "ACM", "ref_id": "b27", "title": "Biomedical Information Retrieval incorporating Knowledge Graph for Explainable Precision Medicine", "year": "2020-07-25" }, { "authors": "Kaichun Yao; Jingshuai Zhang; Chuan Qin; Peng Wang; Hengshu Zhu; Hui Xiong", "journal": "IEEE", "ref_id": "b28", "title": "Knowledge Enhanced Person-Job Fit for Talent Recruitment", "year": "2022-05-09" }, { "authors": "Lingfan Yu; Jiajun Shen; Jinyang Li; Adam Lerer", "journal": "", "ref_id": "b29", "title": "Scalable Graph Neural Networks for Heterogeneous Graphs", "year": "2020" }, { "authors": "Le Yu; Leilei Sun; Bowen Du; Chuanren Liu; Weifeng Lv; Hui Xiong", "journal": "IEEE Transactions on Knowledge and Data Engineering", "ref_id": "b30", "title": "Heterogeneous Graph Representation Learning with Relation Awareness", "year": "2022" }, { "authors": "Zixuan Yuan; Hao Liu; Yanchi Liu; Denghui Zhang; Fei Yi; Nengjun Zhu; Hui Xiong", "journal": "ACM", "ref_id": "b31", "title": "Spatio-Temporal Dual Graph Attention Network for Query-POI Matching", "year": "2020-07-25" }, { "authors": "Seongjun Yun; Minbyul Jeong; Raehyun Kim; Jaewoo Kang; Hyunwoo J Kim", "journal": "", "ref_id": "b32", "title": "Graph Transformer Networks", "year": "2019-12-08" }, { "authors": "Chuxu Zhang; Dongjin Song; Chao Huang; Ananthram Swami; Nitesh V Chawla", "journal": "ACM", "ref_id": "b33", "title": "Heterogeneous Graph Neural Network", "year": "2019-08-04" }, { "authors": "Wentao Zhang; Ziqi Yin; Zeang Sheng; Yang Li; Wen Ouyang; Xiaosen Li; Yangyu Tao; Zhi Yang; Bin Cui", "journal": "ACM", "ref_id": "b34", "title": "Graph Attention Multi-Layer Perceptron", "year": "2022-08-14" }, { "authors": "Shichao Zhu; Chuan Zhou; Shirui Pan; Xingquan Zhu; Bin Wang", "journal": "IEEE", "ref_id": "b35", "title": "Relation Structure-Aware Heterogeneous Graph Neural Network", "year": "2019-11-08" } ]
[ { "formula_coordinates": [ 2, 93.9, 174.74, 418.95, 49.77 ], "formula_id": "formula_0", "formula_text": "(0) 1 H [ ] s (0) 2 H [ ] s (0) 1 H [ ] s (0) 1 H [ ] ss (0) 2 H [ ] s (0) 2 H [ ] ss 1 s 2 s 1 ss 2 ss (0) H [ ] t (0) 1 H [ ] s (0) 2 H [ ] s (0) 1 H [ ] s (0) 1 H [ ] ss (0) 2 H [ ] s (0) 2" }, { "formula_coordinates": [ 3, 341.17, 457.49, 215.65, 12.5 ], "formula_id": "formula_1", "formula_text": "𝐴 1 𝑟 1 --→ 𝐴 2 𝑟 2 --→ • • • 𝑟 𝑙 -1 ---→ 𝐴 𝑙 (abbreviated as 𝐴 1 𝐴 2 • • • 𝐴 𝑙 )" }, { "formula_coordinates": [ 4, 71.9, 89.3, 434.9, 157.22 ], "formula_id": "formula_2", "formula_text": "L F Meta-Paths L  Sampled Sub-Graph 1 Enc r W 2 Enc r W 3 Enc r W F Q W F K W F V W 1 K 1 D 1 A M 2 A 3 A 2 K 1 K 2 K 1 D M 1 A 2 A 3 A 1 { D M } r = → 2 { K M } r = → 3 { A M } r = → 1 K 2 K M 1 D M 1 A 2 A 3 A M 1 ( ) [ M ] l r M s g 2 ( ) [ M ] l r M s g 3 ( ) [ M ] l r M s g ( 1) [M] l - H ( ) (M) l Adopt  W D M → K M → A M → ( ) [M] L H [M] H fus [M] A ( ) [M]" }, { "formula_coordinates": [ 4, 121.11, 656.37, 173.48, 13.79 ], "formula_id": "formula_3", "formula_text": "𝐻 (0) 𝑓 [𝑖] = 𝑊 𝜏 (𝑖 ) 𝑓 • 𝑥 𝑖 𝑓 + 𝑏 𝜏 (𝑖 ) 𝑓 ,(1)" }, { "formula_coordinates": [ 4, 392.06, 410.23, 166.68, 26.67 ], "formula_id": "formula_4", "formula_text": "H (0) [𝑖] = 𝐹 (0) 𝜏 (𝑖 ) 𝑓 𝐻 (0) 𝑓 [𝑖] ,(2)" }, { "formula_coordinates": [ 4, 480.64, 445.57, 58.47, 14.76 ], "formula_id": "formula_5", "formula_text": "H (0) [𝑖] ∈ R 𝐹 (0) 𝜏 (𝑖 )" }, { "formula_coordinates": [ 4, 349.39, 683.04, 209.35, 26.67 ], "formula_id": "formula_7", "formula_text": "Q (𝑙 ) [𝑡] = 𝐹 (𝑙 -1) 𝜏 (𝑡 ) 𝑓 W Query (𝑙 ) 𝜏 (𝑡 ) 𝐻 (𝑙 -1) 𝑓 [𝑡] + 𝑏 Query (𝑙 ) 𝜏 (𝑡 ) ,(3)" }, { "formula_coordinates": [ 5, 92.74, 83.69, 88.58, 26.67 ], "formula_id": "formula_8", "formula_text": "K (𝑙 ) [𝑠] = 𝐹 (𝑙 -1) 𝜏 (𝑠 ) 𝑓 W Key (𝑙 )" }, { "formula_coordinates": [ 5, 233.19, 90.09, 61.39, 16.47 ], "formula_id": "formula_9", "formula_text": "Key (𝑙 ) 𝜏 (𝑠 ) ,(4)" }, { "formula_coordinates": [ 5, 88.35, 123.1, 106.66, 16.47 ], "formula_id": "formula_10", "formula_text": "Query (𝑙 ) 𝜏 (𝑡 ) ∈ R 𝑑 ×𝑑 and W Key (𝑙 )" }, { "formula_coordinates": [ 5, 65.35, 150.16, 25.92, 16.47 ], "formula_id": "formula_11", "formula_text": "Query (𝑙 ) 𝜏 (𝑡 )" }, { "formula_coordinates": [ 5, 65.62, 292.75, 228.97, 20.15 ], "formula_id": "formula_12", "formula_text": "Attn (𝑙 ) 𝑟 [𝑠, 𝑡] = Softmax ∀𝑠 ∈𝑁 𝑟 (𝑡 ) K (𝑙 ) [𝑠]𝑊 ATT (𝑙 ) 𝑟 Q (𝑙 ) [𝑡] ⊤ • 1 √ 𝑑 ,(5)" }, { "formula_coordinates": [ 5, 58.73, 503.57, 235.86, 26.67 ], "formula_id": "formula_13", "formula_text": "Ext (𝑙 ) 𝑟 [𝑠] = 𝐹 (𝑙 -1) 𝜏 (𝑠 ) 𝑓 𝑊 EXT (𝑙 ) 𝑟 W Value (𝑙 ) 𝜏 (𝑠 ) 𝐻 (𝑙 -1) 𝑓 [𝑠] + 𝑏 Value (𝑙 ) 𝜏 (𝑠 ) ,(6)" }, { "formula_coordinates": [ 5, 52.99, 590.08, 29.17, 9.88 ], "formula_id": "formula_14", "formula_text": "𝑊 EXT (𝑙 )" }, { "formula_coordinates": [ 5, 85.38, 633.69, 209.21, 24.43 ], "formula_id": "formula_15", "formula_text": "Msg (𝑙 ) 𝑟 [𝑡] = ∑︁ ∀𝑠 ∈𝑁 𝑟 (𝑡 ) Attn (𝑙 ) 𝑟 [𝑠, 𝑡] ⊤ Ext (𝑙 ) 𝑟 [𝑠] ,(7)" }, { "formula_coordinates": [ 5, 94.26, 672.34, 54.4, 14.76 ], "formula_id": "formula_16", "formula_text": "(𝑙 ) 𝑟 [𝑡] ∈ R 𝐹 (𝑙 -1)" }, { "formula_coordinates": [ 5, 388.57, 194.06, 170.17, 20.02 ], "formula_id": "formula_17", "formula_text": "H (𝑙 ) [𝑡] = ∥ ∀𝑟 ∈𝑅 (𝑡 ) Msg (𝑙 ) 𝑟 [𝑡],(8)" }, { "formula_coordinates": [ 5, 396.26, 219.81, 162.48, 12.31 ], "formula_id": "formula_18", "formula_text": "𝑟 [𝑡] = Msg (𝑙 ) 𝑟 [𝑡] ⊕ 𝑊 Enc 𝑟 ,(9)" }, { "formula_coordinates": [ 5, 354.81, 356.06, 203.93, 16.73 ], "formula_id": "formula_19", "formula_text": "H (𝑙 ) [𝑡] = H (𝑙 -1) [𝑡] ∥ W Adopt (𝑙 ) 𝜏 (𝑡 ) H (𝑙 ) [𝑡] ,(10)" }, { "formula_coordinates": [ 5, 342.67, 391.62, 58.54, 14.76 ], "formula_id": "formula_20", "formula_text": "H (𝑙 ) [𝑡] ∈ R 𝐹 (𝑙 ) 𝜏 (𝑡 )" }, { "formula_coordinates": [ 5, 378.26, 485.16, 180.48, 14.83 ], "formula_id": "formula_21", "formula_text": "𝐹 (𝑙 ) 𝜏 (𝑡 ) = 𝐹 (𝑙 -1) 𝜏 (𝑡 ) × (len(𝑅(𝑡)) + 1) ,(11)" }, { "formula_coordinates": [ 6, 103, 100.43, 191.58, 92.62 ], "formula_id": "formula_22", "formula_text": "𝑄 fus [𝑡] = mean H (0) [𝑡] 𝑊 FQ , 𝐾 fus [𝑡] = H (𝐿) [𝑡] 𝑊 FK , 𝑉 fus [𝑡] = H (𝐿) [𝑡] 𝑊 FV , 𝐴 fus [𝑡] = Softmax 𝑄 fus [𝑡]𝐾 fus [𝑡] ⊤ √ 𝑑 , H [𝑡] = 𝐴 fus [𝑡]𝑉 fus [𝑡],(12)" }, { "formula_coordinates": [ 6, 79.11, 201.82, 48.55, 8.66 ], "formula_id": "formula_23", "formula_text": "H [𝑡] ∈ R 𝑑 is" } ]
2023-05-18
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6" ], "table_ref": [], "text": "In classification problems, real-world data often exhibits a long-tailed distribution: a few majority classes have large amounts of samples, while numerous minority classes are with only a few samples. This extreme imbalance class distribution leads to the model training dominated by head classes. As a result, the model performance for tail classes is severely degraded. Nowadays, it is still challenging to effectively train a model on long-tailed data in visual recognition tasks.\nTo address the issue of extreme data imbalance caused by long-tailed distribution, an intuitive way is to re-balance the model via class-balanced sampling [1,2] or loss function reweighting [3,4]. However, these methods result in overfitting to the tail classes, which invariably inhibits the performance Fig. 1. A schematic diagram of the influence of feature norm on decision margin in embedding space. With the increase of the feature norm of the tail class samples, the margin becomes clear and the separability of the samples can be enhanced, which in turn improves the model generalization towards the samples. of the model. Most recently, Cui et al. [5] have proposed to re-weight the loss function or re-sample the data based on the \"effective number\" of each class, which has been shown empirically effective. This \"effective number\" strategy, on the other hand, does not truly address the issue of uneven feature distribution for long-tailed data. Subsequently, Cao et al. [6] utilized the label-distribution-aware margin (LDAM) to re-weight the loss, which can improve the generalization performance of tail classes. Nevertheless, it calculates the predicted logit through the cosine distance, which neglects the significant influence of the feature norm.\nIn this paper, we address the long-tailed problem from a feature norm perspective and thereby proposing the featurebalanced loss (FBL). As shown in Fig. 1, it can be seen that training samples with less feature norm are difficult to classify because of the unclear margins between each class. The increase of feature norm can enlarge the margins between classes and enhance the separability of the samples. Based on this observation, we add a class-based stimulus to the predicted logit to encourage larger tail class feature norm to improve its generalization. Different from LDAM that utilizes hard margins to increase intra-class compactness, our FBL enlarges decision margin without compressing the embedding space distribution of each class. Furthermore, we adopt curriculum learning [7] strategy to gradually increase the classbased stimulus so that the network initially concentrates on the head classes, and then gradually shifts its attention to the tail classes as the training progresses. In this way, the classification accuracy of the tail classes can be improved while maintaining the performance of the head classes. We validate the proposed FBL on five popular benchmark datasets, i.e., CIFAT-10-LT, CIFAT-100-LT, ImageNet-LT, iNaturalist 2018 and Places-LT. We also conduct an additional experiment on feature norm visualization, which demonstrates that feature norm is one of the key factors for improving the accuracy of long-tailed data classification.\nOur main contributions are summarized as follows:\n• We propose the novel FBL for long-tailed visual recognition by adding an extra classes-based stimulus to the logit. The proposed FBL encourages larger feature norms for tail classes, thereby improving the generalization performance of these classes.\n• We propose to gradually increase the intensity of stimulus in the way of curriculum learning. This robust training strategy not only enhances the classification accuracy of tail classes to a large extent, but also maintains the performance of head classes.\n• We conduct extensive experiments on commonly used long-tailed datasets, which demonstrates the superiority of the proposed method in comparison with the state-of-theart methods." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "Long-tailed visual recognition has received increasing attention in computer vision because of the prevalence of data imbalance in the real world. This section will make an overview of the most related works." }, { "figure_ref": [], "heading": "Loss Modification", "publication_ref": [ "b7", "b8", "b8", "b3", "b4", "b9", "b4" ], "table_ref": [], "text": "Loss modification aims to re-balance the importance of different classes by tuning the loss values. It addresses the class imbalance problem from two perspectives: sample-wise and class-wise. Sample-wise methods [8,9] assign large relative weights to the difficult samples through the fine-grained parameters in the loss. For example, focal loss [9] utilizes the sample prediction hardness as the re-weighting coefficient of the loss function. However, the classification difficulty of a sample may not be directly related to its corresponding class size. Hence, the sample-wise method is incapable of handling the large-scale and severe imbalance data. Class-wise methods [4,5,10] assign the loss function with class-specific parameters that are negatively correlated to the label frequencies. For example, Cui et al. [5] proposed to re-weight the loss function by the \"effective number\" of each class instead of label frequency. Nevertheless, it does not completely alleviate the problem of biased feature distribution." }, { "figure_ref": [], "heading": "Logit Adjustment", "publication_ref": [ "b5", "b10", "b11", "b12" ], "table_ref": [], "text": "Logit adjustment addresses the class imbalance problem by calibrating the logit to the prior during inference or training. Typically, a number of approaches adjust the loss during training. Most recently, Cao et al. [6] have proposed label-distribution-aware margin accompanied with the deferred scheme (LDAM-DRW), which enforces tail classes to have large relative margins to increase their classification accuracy. Furthermore, DisAlign [11] adaptively aligns the logit to a balanced class distribution to adjust the biased decision boundary, which can re-balance the classifier well. Besides, another kind of method post-hoc shifts the predicted logits. For example, Menon et al. [12] proposed logit adjustment (LA) to post-process the logit based on the label frequencies of training data. In contrast, Hong et al. [13] proposed LADE, which post-adjusts logits with the label frequencies of testing data, allowing the distribution of the test set to be arbitrary." }, { "figure_ref": [], "heading": "PROPOSED METHOD", "publication_ref": [], "table_ref": [], "text": "To mitigate the training bias towards the head classes caused by long-tailed data, we propose the FBL as a more powerful supervised signal for optimizing deep neural networks (DNNs)." }, { "figure_ref": [], "heading": "Analysis of Softmax Loss Function in Classification", "publication_ref": [ "b13" ], "table_ref": [], "text": "Given a training sample x with the label y from the training set T with total C classes and N training samples. We use f ∈ R D to represent the feature of x obtained from the embedding layer with dimension D.\nW = {w 1 , w 2 , • • • , w C } ∈ R D×C\nrepresents the weight matrix of the classifier, where w i represents the weight vector of class i in the classifier. The predicted logit of class i is represented by z i , thus, z i = w T i f . We use the subscript y to represent the target class. That is, z y indicates the target logit and z i (i = y) is the non-target logit. The original softmax loss function for the given sample x is:\nL softmax (x) = -log e zy j e zj .(1)\nThe gradient of L softmax w.r.t. z i is:\n∂L softmax ∂z i = p i -1, i = y p i , i = y ,(2)\nwhere p i = e z i j e z j . In backward propagation, the gradients of the target class are negative, and those of the non-target classes are positive. Thus, the training samples punish the non-target class weights w i (i = y) by p i . The weights of tail classes which have fewer training instances always receive punishment signals. As a result, the weight norm of the classifier for tail classes is always reduced. Therefore, we obtain the following properties: Property 1. The weight norm w i of the classifier for class i is correlated with the class size n i .\nIn addition, we introduce additional property of softmax loss that was found by Yuan et al. [14]: Property 2. By fixing the weight vector and direction of feature vectors, softmax loss is a function that monotonically decreases with the increasing of feature L 2 -norm when features are correctly classified.\nProperty 1 indicates that the target logit z y = w T y f of tail class is usually suppressed because of the relatively small w T y . Meanwhile, Property 2 shows that feature norm is an important factor to achieve a lower loss, so that the features can be more separable. To improve the performance on tail classes, we can encourage larger feature norm for tail classes to diminish the bias towards the head classes." }, { "figure_ref": [], "heading": "FBL with Curriculum Learning", "publication_ref": [ "b6" ], "table_ref": [], "text": "To stimulate the large feature norm, we can add an additional constraint item to the original cross-entropy loss:\nL = -log e zy i e zi + α λ y f , (3\n)\nwhere α is the parameter used to adjust the strength of the constraint, and λ y controls the stimulus intensity towards different classes. Since Property 1 in Sec. 3.1 states that the weight norm of classifier for tail classes is usually suppressed, the logits of tail classes will be unfairly reduced. To diminish this bias, we can encourage large feature norms for tail classes and thus assign them stronger stimulation. Therefore, λ y is negatively correlated with the number of samples in class y.\nFor the sake of analysis of the loss function, we rewrite Eq. (3) as: L = -log e zy j e z j + log e λy f\n= -log e zy -λy\nf j e z j = -log p y ,(4)\nwhere p y = e zy -λy f j e z j . As the sum of the probabilities of all classes obtained by Eq. ( 4) is not equal to 1, i.e., C y=1 p y = 1, we further modify the logit to ensure that the total predicted probabilities of all classes are equal to 1. The featurebalanced logit z b j of class j is introduced and is expressed as:\nz b j = z j -α λ j f .(5)\nIn addition, λ j controls the intensity of the stimulus, which should be weak for head classes and strong for tail classes. Subsequently, we set λ j at: 8 end so that it is zero for the most frequent class and is much stronger for tail classes. Furthermore, the stronger the constraint on feature (i.e., λj f ) is, the more the model focuses on the tail classes. We can adopt the idea of curriculum learning [7], which makes the model initially focus on easy samples (i.e., head classes), and then gradually shift to learning difficult samples (i.e., tail classes). To achieve this, we can choose the learning strategy that gradually increases α as the training progresses. Therefore, we replace α by α(t) which is related to the training epoch t. We empirically select the parabolic increase learning strategy, which is expressed as:\nλ j = log n max -log n j ,(6)\nα(t) ∝ ( t T ) 2 , (7\n)\nwhere t is the training epoch and T is the total number of epochs. Sec. 4.5 also provides experimental results for different learning strategies. The final loss function L FBL is expressed as:\nL FBL = - 1 N i log e z b y i j e z b j .(8)\nThis loss function is named as FBL-feature-balanced loss, because it balances the logit of different classes based on feature norm. The algorithm of our proposed method is summarized in Algorithm 1." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b15", "b16", "b17", "b18" ], "table_ref": [ "tab_0" ], "text": "To demonstrate the effectiveness of our proposed FBL, we conduct the experiments on five benchmark datasets with the various scales. ImageNet-LT [16] is a large-scale long-tailed dataset for object classification through sampling a subset following the Pareto distribution with the power value α = 6 from ImageNet-2012 [17]. It includes 115.8K images with the class size ranging from 5 to 1,280, imitating the long-tailed distribution that regularly existed in the real world. Places-LT is a long-tailed version of the large-scale scene classification dataset Places-365 [18]. There are 184.5K images with class sizes ranging from 5 to 4,980. Moreover, the gap between the sizes of tail and head classes of this dataset is larger than that of ImageNet-LT. iNaturalist 2018 (iNat 2018) is the iNaturalist species classification and detection dataset [19], which is a massive realworld long-tailed dataset. In its 2018 version, iNaturalist comprises 437,513 training images from 8,142 classes. In the light of different classes, the numbers of the training samples follow an exponential decay.\nTable 1 summarizes the details of the above datasets." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b4", "b15" ], "table_ref": [], "text": "We use Pytorch to implement and train all the backbones with stochastic gradient descent with momentum. Backbone. Following the protocol of Cui et al. [5], ResNet-32 is adopted as the backbone for all CIFAR-10/100-LT datasets. For ImageNet-LT and iNat 2018, ResNet-50 is applied. For Places-LT, we follow Liu et al. [16] and start from a ResNet-152 pre-trained on the original balanced version of ImageNet. Except for ResNet-152, all the backbones are trained from scratch.\nTraining details. For CIFAR-10/100-LT, we train the backbone with 200 epochs and batch size of 64. The initial learning rate (lr) is set at 0.1, and we anneal lr by 100 at the 160th and 180-th epoch, respectively. For the three large-scale datasets, backbone is trained with 180 epochs, batch size of 512, and initial lr = 0.2. We divide lr by 10 at 120-th and 160-th epochs. " }, { "figure_ref": [], "heading": "Comparison Methods", "publication_ref": [ "b5", "b11", "b19" ], "table_ref": [], "text": "The vanilla training with cross-entropy (CE) loss is chosen as the baseline method. We compare the proposed method with the state-of-the-art ones, i.e., the logit modification methods including: LDAM-DRW [6] and LA [12], the most recently proposed two-stage method-BBN [20] on the small-scale datasets (CIFAR-10/100-LT) and decoupling on the largescale datasets (imageNet-LT, iNat 2018 and Places-LT)." }, { "figure_ref": [], "heading": "Long-Tailed Recognition Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Results on CIFAR-10/100-LT. We conduct the comparison experiments on CIFAR-10/100-LT with IF = {100, 50}. Table 2 summarizes the top-1 accuracy. Our FBL outperforms the other competing methods by noticeable margins across all the datasets. For example, FBL outperforms the state-of-theart method -LA by 1.54% and 1.33% with IF = 100 on CIFAR-10-LT and CIFAR-100-LT, respectively.\nResults on large-scale datasets. FBL yields good performance on all large-scale datasets, which is consistent with that CIFAR-10/100-LT. Table 3 shows the comparison results. The proposed FBL that can be trained end-to-end not only achieves better results than LA, but also is superior to the two-stage method, i.e., LDAM-DRW and decoupling. For example, on ImageNet-LT, FBL outperforms LDAM-DRW and decoupling by 1.90% and 3.00%, respectively." }, { "figure_ref": [ "fig_1", "fig_1", "fig_1" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "We conduct an ablation study to investigate the effectiveness of different learning strategies adopted by α(t). The results are shown in Fig. 2. The corresponding per-class accuracy is presented in Table 5 and 6, respectively. The following phenomena can be seen:\n• The capability of the model to learn from different classes of samples is diverse. Specifically, in Fig. 2 • Compared with CE loss, our FBL encourages larger feature norms of tail class samples to eliminate representation bias towards head classes. The area (S * area (class index)) enclosed by the curve of CE loss and FBL becomes larger as the number of class samples decreases, e.g. S tail area (9) > S tail area (8) > S head area (1) > S head area (0) in Fig. 2 (a), which is in line with our motivation.\nThese observations not only justify our intuition about the influence of feature norm on decision margin, but also offer a new promising way to investigate long-tailed visual recognition." }, { "figure_ref": [], "heading": "CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "In this work, we have proposed a novel FBL to address the long-tailed classification from feature space. FBL encourages larger feature norms of tail classes by adding relatively stronger stimuli to the logits of tail classes, which can mitigate the representation bias towards head classes in the feature space. In addition, a curriculum learning strategy has been adopted to gradually increase the stimuli in training, which can keep the good accuracy of the model for the head classes and improve the performance of the tail classes. FBL allows DNNs to be trained end-to-end without the risk of a performance drop from head classes. Extensive experiments have demonstrated the superiority of the proposed FBL." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "part of the experiments and literature review. This work was supported in part by NSFC/RGC JRS Grant: N HKBU214/21, GRF Grant: 12201321, NSFC Grant: 61672444, and HKBU Grant: RC-FNRA-IG/18-19/SCI/03. Source code is available at https://github.com/juyongjiang/FBL." } ]
Deep neural networks frequently suffer from performance degradation when the training data is long-tailed because several majority classes dominate the training, resulting in a biased model. Recent studies have made a great effort in solving this issue by obtaining good representations from data space, but few of them pay attention to the influence of feature norm on the predicted results. In this paper, we therefore address the long-tailed problem from feature space and thereby propose the feature-balanced loss. Specifically, we encourage larger feature norms of tail classes by giving them relatively stronger stimuli. Moreover, the stimuli intensity is gradually increased in the way of curriculum learning, which improves the generalization of the tail classes, meanwhile maintaining the performance of the head classes. Extensive experiments on multiple popular long-tailed recognition benchmarks demonstrate that the feature-balanced loss achieves superior performance gains compared with the state-of-the-art methods.
FEATURE-BALANCED LOSS FOR LONG-TAILED VISUAL RECOGNITION
[ { "figure_caption": "Algorithm 1 : 4 5 6 714567FBL with curriculum learning Input: Training dataset S Output: Predicted labels 1 Initialize the DNN model φ((x, y); θ) randomly, where θ is the parameter of the model; 2 for t = 1 to T do 3 Sample mini-batch training samples B from the long-tailed data S with batch size of b;Obtain the constraint strength parameter α: α ← α(t);Obtain the stimulus intensity parameter λ j :λ j ← log n max -log n j ;Calculate the loss by Eq. (8):L((x, y); θ) = 1 b (x,y)∈B L FBL (x, y);Update model parameters: θ ← θ -α ∇ θ L((x, y); θ);", "figure_data": "", "figure_id": "fig_0", "figure_label": "14567", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Top: The changes of feature norm on head classes (class index-{0, 1}) and tail classes (class index-{8, 9}) with respect to training epochs (left) and the feature norm distribution of classes over test dataset (right) on CIFAR-10 with IF = 100 (a) and 50 (b). Bottom: The changes of feature norm on head classes (class index-{9, 19}) and tail classes (class index-{79, 89}) with respect to training epochs (left) and the feature norm distribution of classes over test dataset (right) on CIFAR-100 with IF = 100 (c) and 50 (d).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "(a) and (b), the feature norms of head classes samples (class index-{0, 1}) reach stable in very early training epochs due to enough training samples. Differently, on CIFAR-100-LT (as shown in Fig. 2 (c) and (d)), the feature norms of the samples from all classes including head (class index-{9, 19}) and tail classes (class index-{79, 89}) are constantly changing as the epoch increases, which have a similar phenomenon to the tail classes in CIFAR-10-LT (class index-{8, 9} in Fig. 2 (a) and (b)) because they all suffer from insufficient training samples.", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "An Overview of Long-Tailed Datasets Nmax Nmin (where N max and N min are the numbers of training samples in the most and the least frequent classes, respectively). CIFAR-10-LT and CIFAR-100-LT have two typical variants, namely, with IF = {100, 50}.", "figure_data": "DatasetCIFAR-10-LTCIFAR-100-LTImageNet-LTPlaces-LTiNat 2018# Classes101001,0003658,142IF1005010050256996500# Train. img.12,40613,99610,84712,608115,84662,500437,513Tail class size50100510552Head class size5,0005,0005005001,2804,9801,000# Val. img.----20,0007,30024,426# Test img.10,00010,00010,00010,00050,00036,500-CIFAR-10/100-LT [6] down-samples the original balancedversion of CIFAR-10/100 [15] per class by an imbalanced fac-tor IF =", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison results on CIFAR-10/100-LT. Top-1 accuracy (%) are reported. The best results are shown in underline bold.", "figure_data": "DatasetCIFAR-10-LT CIFAR-100-LTBackbone NetResNet-32IF1005010050CE loss (baseline)71.07 75.31 39.43 44.20LDAM-DRW [6] (NeurIPS 2019) 77.03 81.03 42.04 47.62BBN [20] (CVPR 2020)79.82 81.18 42.56 47.02LA [12](ICLR 2021)80.92-43.89-FBL (ours)82.46 84.30 45.22 50.65", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison results on ImageNet-LT, iNaturalist 2018 and Places-LT. Top-1 accuracy (%) are reported. The best results are shown in underline bold.", "figure_data": "DatasetImageNet-LT iNat 2018Places-LTBackbone NetResNet-50ResNet-50 ResNet-152CE loss (baseline)44.5163.8027.13LDAM-DRW [6] (NeurIPS 2019)48.8068.00-Decoupling [2] (ICLR 2020)47.7069.4937.62LA [12] (ICLR 2021)50.4466.36-FBL (ours)50.7069.9038.66", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Ablation experiment of different learning strategy on CIFAR-10-LT with IF = 100.", "figure_data": "α(t)RepresentationAcc.(%)Linear decrease1 -t/T75.97Linear increaset/T81.67Sine increasesin(t/T • π/2)81.22Cosine increase1 -cos(t/T • π/2)80.79Parabolic increase(t/T ) 282.46", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "sum-", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Per-class accuracy (%) of test set on CIFAR-10-LT.", "figure_data": "Class index0123456789IF100CE loss91.098.283.272.578.865.168.859.549.044.6FBL88.194.781.973.083.675.186.377.382.781.9IF50CE loss84.595.868.574.681.172.782.967,559.166.4FBL83.792.181.773.985.076.187.785.088.589.3", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Per-class accuracy (%) of test set on CIFAR-100-LT.", "figure_data": "Class index091929• • •69798999IF100CE loss89.072.059.048.0• • •45.012.03.02.0FBL86.077.054.045.0• • •60.027.022.08.0IF50CE loss88.079.053.049.0• • •53.010.019.013.0FBL87.077.056.057.0• • •62.048.038.017.04.6. Feature-balanced ResultsTo further validate the effects of the proposed FBL, especiallythe tail classes, we visualize the changes of the feature norm(i.e., f ) with respect to training epochs and feature normdistribution of classes over the test set on CIFAR-10/100-LT.", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Mengke Li; Yiu-Ming Cheung; Juyong Jiang
[ { "authors": "Haibo He; Yang Bai; Edwardo A Garcia; Shutao Li", "journal": "", "ref_id": "b0", "title": "Adasyn: Adaptive synthetic sampling approach for imbalanced learning", "year": "2008" }, { "authors": "Bingyi Kang; Saining Xie; Marcus Rohrbach; Zhicheng Yan; Albert Gordo; Jiashi Feng; Yannis Kalantidis", "journal": "", "ref_id": "b1", "title": "Decoupling representation and classifier for long-tailed recognition", "year": "2020" }, { "authors": "Chen Huang; Yining Li; Chen Change Loy; Xiaoou Tang", "journal": "", "ref_id": "b2", "title": "Learning deep representation for imbalanced classification", "year": "2016" }, { "authors": "Salman H Khan; Munawar Hayat; Mohammed Bennamoun; Ferdous ; Ahmed Sohel; Roberto Togneri", "journal": "TNNLS", "ref_id": "b3", "title": "Cost-sensitive learning of deep feature representations from imbalanced data", "year": "2018" }, { "authors": "Yin Cui; Menglin Jia; Tsung-Yi Lin; Yang Song; Serge Belongie", "journal": "", "ref_id": "b4", "title": "Class-balanced loss based on effective number of samples", "year": "2019" }, { "authors": "Kaidi Cao; Colin Wei; Adrien Gaidon; Nikos Arechiga; Tengyu Ma", "journal": "NeurIPS", "ref_id": "b5", "title": "Learning imbalanced datasets with label-distribution-aware margin loss", "year": "2019" }, { "authors": "Yoshua Bengio; Jérôme Louradour; Ronan Collobert; Jason Weston", "journal": "", "ref_id": "b6", "title": "Curriculum learning", "year": "2009" }, { "authors": "Mengye Ren; Wenyuan Zeng; Bin Yang; Raquel Urtasun", "journal": "", "ref_id": "b7", "title": "Learning to reweight examples for robust deep learning", "year": "2018" }, { "authors": "Tsung-Yi Lin; Priya Goyal; Ross B Girshick; Kaiming He; Piotr Dollár", "journal": "TPAMI", "ref_id": "b8", "title": "Focal loss for dense object detection", "year": "2020" }, { "authors": "Jingru Tan; Changbao Wang; Buyu Li; Quanquan Li; Wanli Ouyang; Changqing Yin; Junjie Yan", "journal": "", "ref_id": "b9", "title": "Equalization loss for long-tailed object recognition", "year": "2020" }, { "authors": "Songyang Zhang; Zeming Li; Shipeng Yan; Xuming He; Jian Sun", "journal": "", "ref_id": "b10", "title": "Distribution alignment: A unified framework for long-tail visual recognition", "year": "2021" }, { "authors": "Aditya Krishna Menon; Sadeep Jayasumana; Ankit Singh Rawat; Himanshu Jain; Andreas Veit; Sanjiv Kumar", "journal": "", "ref_id": "b11", "title": "Long-tail learning via logit adjustment", "year": "2021" }, { "authors": "Youngkyu Hong; Seungju Han; Kwanghee Choi; Seokjun Seo; Beomsu Kim; Buru Chang", "journal": "", "ref_id": "b12", "title": "Disentangling label distribution for long-tailed visual recognition", "year": "2021" }, { "authors": "Yuhui Yuan; Kuiyuan Yang; Jianyuan Guo; Chao Zhang; Jingdong Wang", "journal": "", "ref_id": "b13", "title": "Feature incay for representation regularization", "year": "2018" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b14", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Ziwei Liu; Zhongqi Miao; Xiaohang Zhan; Jiayun Wang; Boqing Gong; Stella X Yu", "journal": "", "ref_id": "b15", "title": "Large-scale long-tailed recognition in an open world", "year": "2019" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Fei-Fei Li", "journal": "", "ref_id": "b16", "title": "Imagenet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Bolei Zhou; Àgata Lapedriza; Aditya Khosla; Aude Oliva; Antonio Torralba", "journal": "TPAMI", "ref_id": "b17", "title": "Places: A 10 million image database for scene recognition", "year": "2018" }, { "authors": "Grant Van Horn; Oisin Mac Aodha; Yang Song; Yin Cui; Chen Sun; Alex Shepard; Hartwig Adam; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b18", "title": "The inaturalist species classification and detection dataset", "year": "2018" }, { "authors": "Boyan Zhou; Quan Cui; Xiu-Shen Wei; Zhao-Min Chen", "journal": "", "ref_id": "b19", "title": "BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 406.9, 441.01, 142.03, 11.23 ], "formula_id": "formula_0", "formula_text": "W = {w 1 , w 2 , • • • , w C } ∈ R D×C" }, { "formula_coordinates": [ 2, 370.73, 532.78, 179.19, 26.29 ], "formula_id": "formula_1", "formula_text": "L softmax (x) = -log e zy j e zj .(1)" }, { "formula_coordinates": [ 2, 361.98, 591.48, 187.94, 23.23 ], "formula_id": "formula_2", "formula_text": "∂L softmax ∂z i = p i -1, i = y p i , i = y ,(2)" }, { "formula_coordinates": [ 3, 109.71, 320.09, 175.56, 26.29 ], "formula_id": "formula_3", "formula_text": "L = -log e zy i e zi + α λ y f , (3" }, { "formula_coordinates": [ 3, 285.26, 328.72, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 122.47, 506.88, 166.66, 30.49 ], "formula_id": "formula_5", "formula_text": "f j e z j = -log p y ,(4)" }, { "formula_coordinates": [ 3, 133.14, 633.93, 156, 22.46 ], "formula_id": "formula_6", "formula_text": "z b j = z j -α λ j f .(5)" }, { "formula_coordinates": [ 3, 117.32, 714.33, 171.82, 9.65 ], "formula_id": "formula_7", "formula_text": "λ j = log n max -log n j ,(6)" }, { "formula_coordinates": [ 3, 400.22, 463.37, 145.83, 22.31 ], "formula_id": "formula_8", "formula_text": "α(t) ∝ ( t T ) 2 , (7" }, { "formula_coordinates": [ 3, 546.05, 470.43, 3.87, 8.64 ], "formula_id": "formula_9", "formula_text": ")" }, { "formula_coordinates": [ 3, 368.41, 546.29, 181.51, 31.24 ], "formula_id": "formula_10", "formula_text": "L FBL = - 1 N i log e z b y i j e z b j .(8)" } ]
10.18653/v1/N19-1245
2023-11-08
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b23", "b1", "b26", "b8", "b27", "b23", "b21" ], "table_ref": [], "text": "Humans use symbols -number words such as \"three\" and digits such as \"3\" -to quantify the world. How humans understand these symbols has been the subject of cognitive science research for half a century. The dominant theory is that people understand number symbols by mapping them to mental representations, specifically magnitude representations (Moyer and Landauer, 1967). This is true for both number words (e.g., \"three\") and digits (e.g., \"3\"). These magnitude representations are organized as a \"mental number line\" (MNL), with numbers mapped to points on the line as shown in Figure 1d. Cognitive science research has revealed that this representation is present in the minds of young children (Ansari et al., 2005) and even nonhuman primates (Nieder and Miller, 2003). Most of this research has been conducted with numbers in the range 1-9, in part, because corpus studies have shown that 0 belongs to a different distribution (Dehaene and Mehler, 1992) and, in part, because larger numbers require parsing place-value notation (Nuerk et al., 2001), a cognitive process beyond the scope of the current study.\nEvidence for this proposal comes from magnitude comparison tasks in which people are asked to compare two numbers (e.g., 3 vs. 7) and judge which one is greater (or lesser). Humans have consistently exhibited three effects that suggest recruitment of magnitude representations to understand numbers: the distance effect, the size effect, and the ratio effect (Moyer and Landauer, 1967;Merkley and Ansari, 2010). We review the experimental evidence for these effects, shown in Figure 1, in LLMs. Our behavioral benchmarking approach shifts the focus from what abilities LLMs have in an absolute sense to whether they successfully mimic human performance characteristics. This approach can help differentiate between human tendencies captured by models and the model behaviors due to training strategies. Thus, the current study bridges between Natural Language Processing (NLP), computational linguistics, and cognitive science." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0", "fig_0", "fig_0", "fig_0" ], "heading": "Effects of Magnitude Representations", "publication_ref": [ "b12", "b3", "b25", "b23", "b28", "b10", "b15", "b6", "b7" ], "table_ref": [], "text": "Physical quantities in the world, such as the brightness of a light or the loudness of a sound, are encoded as logarithmically scaled magnitude representations (Fechner, 1860). Research conducted with human participants and non-human species has revealed that they recruit many of the same brain regions, such as the intra-parietal sulcus, to determine the magnitude of symbolic numbers (Billock and Tsou, 2011;Nieder and Dehaene, 2009). Three primary magnitude representation effects have been found using the numerical comparison task in studies of humans. First, comparisons show a distance effect: The greater the distance |x -y| between the numbers x vs. y, the faster the comparison (Moyer and Landauer, 1967). Thus, people compare 1 vs. 9 faster than 1 vs. 2. This is shown in abstract form in Figure 1a. This effect can be explained by positing that people possess an MNL. When comparing two numbers, they first locate each number on this representation, determine which one is \"to the right\", and choose that number as the greater one. Thus, the farther the distance between the two points, the easier (and thus faster) the judgment.\nSecond, comparisons show a size effect: Given two comparisons of the same distance (i.e., of the same value for |x -y|), the smaller the numbers, the faster the comparison (Parkman, 1971). For example, 1 vs. 2 and 8 vs. 9 both have the same distance (i.e., |x -y| = 1), but the former involves smaller numbers and is therefore the easier (i.e., faster) judgment. The size effect is depicted in abstract form in Figure 1b. This effect also references the MNL, but a modified version where the points are logarithmically compressed, i.e., the distance from 1 to x is proportional to log(x); see Figure 1d. To investigate if a logarithmically compressed number line is also present in LLMs, we use multidimensional scaling (Ding, 2018) on the cosine distances between number embeddings.\nThird, comparisons show a ratio effect: The time to compare two numbers x vs. y is a decreasing function of the ratio of the larger number over the smaller number, i.e., max(x,y) min(x,y) (Halberda et al., 2008). This function is nonlinear, as depicted in abstract form in Figure 1c. Here, we assume that this function is a negative exponential, though other functional forms have been proposed in the cognitive science literature. The ratio effect can also be explained by the logarithmically compressed MNL depicted in Figure 1d.\nThese three effects -distance, size, and ratiohave been replicated numerous times in studies of human adults and children, non-human primates, and many other species (Cantlon, 2012;Cohen Kadosh et al., 2008). The MNL model in Figure 1d accounts for these effects (and many others in the mathematical cognition literature). Here, we use LLMs to evaluate a novel scientific hypothesis: that the MNL representation of the human mind is latent in the statistical structure of the linguistic environment, and thus learnable. Therefore, there is less need to posit pre-programmed neural circuitry to explain magnitude effects." }, { "figure_ref": [], "heading": "LLMs and Behavioral Benchmarks", "publication_ref": [ "b44", "b9", "b20", "b31", "b37" ], "table_ref": [], "text": "Modern NLP models are pre-trained on large corpora of texts from diverse sources such as Wikipedia (Wikipedia contributors, 2004) and the open book corpus (Zhu et al., 2015). LLMs like BERT (Devlin et al., 2018), ROBERTA (Liu et al., 2019) and GPT-2 (Radford et al., 2019) learn contextual semantic vector representations of words.\nThese models have achieved remarkable success on NLP benchmarks (Wang et al., 2018). They can perform as well as humans on a number of language tests such as semantic verification (Bhatia and Richie, 2022) and semantic disambiguation (Lake and Murphy, 2021).\nMost benchmarks are designed to measure the absolute performance of LLMs, with higher accuracy signaling \"better\" models. Human or superhuman performance is marked by exceeding certain thresholds. Here, we ask not whether LLMs can perform well or even exceed human performance at tasks, but whether they show the same performance characteristics as humans while accomplishing the same tasks. We call these behavioral benchmarks. The notion of behavioral benchmarks requires moving beyond accuracy (e.g., scores) as the dominant measure of LLM performance.\nAs a test case, we look at the distance, size, and ratio effects as behavioral benchmarks to determine whether LLMs understand numbers as humans do, using magnitude representations. This requires a linking hypothesis to map measures of human performance to indices of model performance. Here, we map human response times on numerical comparison tasks to similarity computations on number word embeddings." }, { "figure_ref": [ "fig_0" ], "heading": "Research Questions", "publication_ref": [], "table_ref": [], "text": "The current study investigates the number representations of LLMs and their alignment with the human MNL. It addresses five research questions:\n1. Which LLMs, if any, capture the distance, size, and ratio effects exhibited by humans? 2. How do different layers of LLMs vary in exhibiting these effects? 3. How do model behaviors change when using larger variants (more parameters) of the same architecture? 4. Do the models show implicit numeration (\"four\" = \"4\"), i.e., do they exhibit these effects equally for all number symbol types or more for some types (e.g., digits) than others (e.g., number words)? 5. Is the MNL representation depicted in Figure 1d latent in the representations of the models?" }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b35", "b24", "b36", "b5", "b0", "b19", "b43", "b13", "b11", "b36", "b34", "b11", "b39", "b11", "b34", "b36", "b5", "b0", "b16" ], "table_ref": [], "text": "Research on the numerical abilities of LLMs focuses on several aspects of mathematical reasoning (Thawani et al., 2021), such as magnitude com-parison, numeration (Naik et al., 2019;Wallace et al., 2019), arithmetic word problems (Burns et al., 2021;Amini et al., 2019), exact facts (Lin et al., 2020), and measurement estimation (Zhang et al., 2020). The goal is to improve performance on application-driven tasks that require numerical skills. Research in this area typically attempts to (1) understand the numerical capabilities of pretrained models and (2) propose new architectures that improve numerical cognition abilities (Geva et al., 2020;Dua et al., 2019).\nOur work also focuses on the first research direction: probing the numerical capabilities of pretrained models. Prior research by Wallace et al. (2019) judges the numerical reasoning of various contextual and non-contextual models using different tests (e.g., finding the maximum number in a list, finding the sum of two numbers from their word embeddings, decoding the original number from its embedding). These tasks have been presented as evaluation criteria for understanding the numerical capabilities of models. Spithourakis and Riedel (2018) change model architectures to treat numbers as distinct from words. Using perplexity score as a proxy for numerical abilities, they argue that this ability reduces model perplexity in neural machine translation tasks. Other work focuses on finding numerical capabilities through building QA benchmarks for performing discrete reasoning (Dua et al., 2019). Most research in this direction casts different tasks as proxies of numerical abilities of NLP systems (Weiss et al., 2018;Dua et al., 2019;Spithourakis and Riedel, 2018;Wallace et al., 2019;Burns et al., 2021;Amini et al., 2019).\nAn alternative approach by Naik et al. ( 2019) tests multiple non-contextual task-agnostic embedding generation techniques to identify the failures in models' abilities to capture the magnitude and numeration effects of numbers. Using a systematic foundation in cognitive science research, we build upon their work in two ways: we (1) use contextual embeddings spanning a wide variety of pre-training strategies, and (2) evaluate models by comparing their behavior to humans. Our work looks at numbers in an abstract sense, and is relevant for the grounding problem studied in artificial intelligence and cognitive science (Harnad, 2023)." }, { "figure_ref": [], "heading": "Experimental Design", "publication_ref": [ "b9", "b20", "b42", "b31", "b32", "b18" ], "table_ref": [], "text": "The literature lacks adequate experimental studies demonstrating magnitude representations of num-Model Category Size Base Large BERT (Devlin et al., 2018) Encoder 110M 340M RoBERTA (Liu et al., 2019) Encoder 125M 355M XLNET (Yang et al., 2019) Auto-regressive Encoder 110M 340M GPT-2 (Radford et al., 2019) Auto-regressive Decoder 117M 345M T5 (Raffel et al., 2019) Encoder 110M 335M BART (Lewis et al., 2020) Encoder-Decoder 140M 406M\nTable 1: Popular Language Models bers in LLMs from a cognitive science perspective.\nThe current study addresses this gap. We propose a general methodology for mapping human response times to similarities computed over LLM embeddings. We test for the three primary magnitude representation effects described in section 1.1." }, { "figure_ref": [], "heading": "Linking Hypothesis", "publication_ref": [ "b38", "b33" ], "table_ref": [], "text": "In studies with human participants, the distance, size, and ratio effects are measured using reaction time. Each effect depends on the assumption that when comparing which of two numbers x and y is relatively easy, humans are relatively fast, and when it is relatively difficult, they are relatively slow. The ease or difficulty of the comparison is a function of x and y: |x -y| for the distance effect, min(x, y) for the size effect, and max(x,y) min(x,y) for the ratio effect. LLMs do not naturally make reaction time predictions. Thus, we require a linking hypothesis to estimate the relative ease or difficulty of comparisons for LLMs. Here we adopt the simple assumption that the greater the similarity of two number representations in an LLM, the longer it takes to discriminate them, i.e., to judge which one is greater (or lesser).\nWe calculate the similarity of two numbers based on the similarity of their vector representations. Specifically, the representation of a number for a given layer of a given model is the vector of activation across its units. There are many similarity metrics for vector representations (Wang and Dong, 2020): Manhattan, Euclidean, cosine, dot product, etc. Here, we choose a standard metric in distributional semantics: the cosine of the angle between the vectors (Richie and Bhatia, 2021). This reasoning connects an index of model function (i.e., the similarity of the vector representations of two numbers) to a human behavioral measure (i.e., reaction time). Thus, the more similar the two representations are, the less discriminable they are from each other, and thus the longer the reaction time to select one over the other." }, { "figure_ref": [], "heading": "Materials", "publication_ref": [], "table_ref": [], "text": "For these experiments, we utilized three formats for number representations in LLMs: lowercase number words, mixed-cased number words (i.e., the first letter is capitalized), and digits. These formats enable us to explore variations in input tokens and understand numeration in models. Below are examples of the three input types:\n• \"one\", \"two\", \"three\", \"four\" ... \"nine\" • \"One\", \"Two\", \"Three\", \"Four\" ... \"Nine\"\n• \"1\", \"2\", \"3\", \"4\" ... \"9\" As noted in the Introduction, prior studies of the distance, size and ratio effects in humans have largely focused on numbers ranging from 1 to 9. Our input types are not-affected by tokenization methods as the models under consideration have each input as a separate token." }, { "figure_ref": [ "fig_0", "fig_1" ], "heading": "Large Language Models -Design Choices", "publication_ref": [ "b41", "b22", "b9", "b32", "b13", "b13" ], "table_ref": [ "tab_0", "tab_1", "tab_0", "tab_1" ], "text": "Modern NLP models are pre-trained on a large amount of unlabeled textual data from a diverse set of sources. This enables LLMs to learn contextually semantic vector representations of words. We experiment on these vectors to evaluate how one specific dimension of human knowledge -number sense -is captured in different model architectures.\nWe use popular large language models from Huggingface's Transformers library (Wolf et al., 2020) to obtain vector representations of numbers in different formats. Following the work by Min et al. (2021) to determine popular model architectures, we select models from three classes of architectural design: encoder models (e.g., BERT (Devlin et al., 2018)), auto-regressive models (e.g., GPT-2 (Radford et al., 2019)), and encoder-decoder models (e.g., T5 (Raffel et al., 2019)). The final list of models is provided in Table 1.\nOperationalization: We investigate the three number magnitude effects as captured in the representations of each layer of the six models for the three number formats. For these experiments, we consider only the obtained hidden layer outputs for the tokens corresponding to the input number word tokens. We ignore the special prefix and suffix tokens of models (e.g., the [cls] token in BERT) for uniformity among different architectures. For the T5-base model, we use only the encoder to obtain model embedding. All models tested use a similar number of model parameters (around 110-140 million parameters). For our studies, we arbitrarily choose the more popular BERT uncased variant as opposed to the cased version. We compare the two models in Appendix section A.2 for a complete analysis, showing similar behaviors in the variants. Model size variations for the same architecture are considered in the Appendix section A.1 to show the impact of model size on the three effects. Recall that the distance effect is that people are slower (i.e., find it more difficult) to compare numbers the closer they are to each other on the MNL. We use the pipeline depicted in Figure 1 to investigate if LLM representations are more similar to each other if the numbers are closer on the MNL.\nEvaluation of the distance effect in LLMs is done by fitting a straight line (a + bx) on the cosine similarity vs. distance plot. We first perform two operations on these cosine similarities: (1) We average the similarities across each distance (e.g., the point at distance 1 on the x-axis represents the average similarity of 1 vs. 2, 2 vs. 3, ..., 8 vs. 9). ( 2) We normalize the similarities to be in the range [0,1]. These decisions allow relative output comparisons across different model architectures, which is not possible using the raw cosine similarities of each LLM. To illustrate model performance, the distance effects for the best-performing layer in terms of R 2 values for BART are shown in Figure 2 for the three number formats. All of the models show strong distance effects for all layers, as shown in Table 2, and for all number formats, as shown in Table 3. Interestingly, LLMs are less likely to reveal the distance effect as layer count increases (Table 2). For example, layer one results in the strongest distance effect while layer twelve is the least representative of the distance effect. With respect to number format, passing digits as inputs tended to produce stronger distance effects than passing number words (Table 3); this pattern was present for four of the six LLMs (i.e., all but T5 and BERT)." }, { "figure_ref": [ "fig_0", "fig_3", "fig_3", "fig_0", "fig_5" ], "heading": "The Size Effect", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "The size effect holds for comparisons of the same distance (e.g., for a distance of 1, these include 1 vs. 2, 2 vs. 3, ..., 8 vs. 9). Among these comparisons, those involving larger numbers (e.g., 8 vs. 9) are made more slowly (i.e., people find them more difficult) than those involving smaller numbers (e.g., 1 vs. 2). That larger numbers are harder to differentiate than smaller numbers aligns with the logarithmically compressed MNL depicted in Figure 1d. This study evaluates whether a given LLM shows a size effect on a given layer for numbers of a given format by plotting the normalized cosine similarities against the size of the comparison, defined as the minimum of the two numbers being compared. For each minimum value (points on the x-axis), we average the similarities for all comparisons to form a single point (vertical compression). We then fit a straight line (ax + b) over the vertically compressed averages (blue line in Figure 3) to obtain the R 2 values (scores). To illustrate model performance, the size effects for the best-performing layer of the BERT-uncased model (in terms of R 2 values) are shown in Figure 3. Similar to the results for the distance effect, the high R 2 values indicate a human-like size effect.\nInterestingly, Table 4 generally shows an increasing trend in the layer-wise capability of capturing the size effect across the six LLMs. This is opposite to the trend observed across layers for the distance effect. Table 5 shows that using digits as the input values yields significantly better R 2 values than the other number formats. In fact, this is the only number format for which the models produce strong size effects. However, the vertical compression of points fails to capture the spread of points across the y-axis for each point on the x-axis. This spread, a limitation of the size effect analysis, is captured in the ratio effect (section 4.3). The ratio effect in humans can be thought of as simultaneously capturing both the distance and size effects. Behaviorally, the time to compare x vs. y is a decreasing function of the ratio of the larger number over the smaller number, i.e., of max(x,y) min(x,y) . In fact, the function is nonlinear as depicted in Figure 1c. For the LLMs, we plot the normalized cosine similarity vs. max(x,y) min(x,y) . To each plot, we fit the negative exponential function a * e -bx + c and evaluate the resulting R 2 . To illustrate model performance, Figure 4 shows the ratio effects for the best-fitting layer of the BART model for the three number formats. As observed with the distance and size effect, the high R 2 values of the LLMs indicate a human-like ratio effect in the models. " }, { "figure_ref": [ "fig_0", "fig_6" ], "heading": "Multidimensional Scaling", "publication_ref": [ "b4", "b10", "b14", "b29", "b30" ], "table_ref": [ "tab_6", "tab_7", "tab_8" ], "text": "Along with the three magnitude effects, we also investigate whether the number representations of LLMs are consistent with the human MNL. To do so, we utilize multidimensional scaling (Borg and Groenen, 2005;Ding, 2018). MDS offers a method for recovering the latent structure in the matrix of cosine (dis)similarities between the vector representations of all pairs of numbers (for a given LLM, layer, and number format distance between each pair of points is consistent with the cosine dissimilarity between their vector representations.\nWe fix N = 1 to recover the latent MNL representation for each LLM, layer, and number format. For each solution, we anchor the point for \"1\" to the left side and evaluate whether the resulting visualization approximates the log compressed MNL as shown in Figure 1d. To quantify this approximation, we calculate the correlation between the positions of the numbers 1 to 9 in the MDS solution and the expected values (log(1) to log ( 9)) of the human MNL; see Table 8. All inputs have similar correlation values. Surprisingly, GPT-2 with digits as the number format (and averaged across all layers) shows a considerably higher correlation with the log-compressed MNL than all other models and number formats. The average correlation between latent model number lines and the log compressed MNL decreases over the 12 layers; see Table 9.\nWe visualize the latent number line of GPT-2 by averaging the cosine dissimilarity matrix across layers and number formats, submitting this to MDS, and requesting a one-dimensional solution; see Figure 5. This representation shows some evidence of log compression, though with a few exceptions. One obvious exception is the right displacement of 2 away from 1. Another is the right displacement of 9 very far from 8.\nTo better understand if this is a statistical artifact of GPT-2 or a more general difference between number understanding in humans versus LLMs, we perform a residual analysis comparing positions on the model's number line to those on the human MNL. We choose the digits number format, estimate the latent number line representation averaged across the layers of each model, and compute the residual between the position of each number in this representation compared to the human MNL. This analysis is presented in Table 10. For 1, all models show a residual value of less than 0.03. This makes sense given our decision to anchor the latent number lines to 1 on the left side. The largest residuals are for 2 and 9, consistent with the anomalies noticed for the GPT-2 solution in Figure 5. These anomalies are a target for future research. We note here that 2 is often privileged even in languages such as Piraha and Mundurucu that have very limited number of word inventories (Gordon, 2004;Pica et al., 2004). Further note that 9 has special significance as a \"bargain price numeral\" in many cultures, a fact that is often linguistically marked (Pollmann and Jansen, 1996). " }, { "figure_ref": [], "heading": "Ablation studies: Base vs Large Model Variants", "publication_ref": [], "table_ref": [ "tab_9", "tab_0" ], "text": "We investigate changes in model behaviors when increasing the number of parameters for the same architectures. We use the larger variants of each of the LLMs listed in Table 1. The detailed tabular results of the behaviors are presented in Appendix section A.1; see Tables 11,12, and 13. Here, we summarize key takeaways from the ablation studies:\n• The distance and ratio effects of the large variants of models align with human performance characteristics. Similar to the results for the base variants, the size effect is only observed when the input type is digits. • We observe the same decreasing trend in the layer-wise capability of capturing the distance effect, ratio effect, and the MDS correlation values in the Large variants of LLMs as observed in the base variants. The increasing trend in the layer-wise capability of the size effect is not observed in the Larger LLMs. • Residual analysis shows high deviation for the numbers \"2\", \"5\", and \"9\"; which is in line with our observations for the base variations." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper investigates the performance characteristics in various LLMs across numerous configurations, looking for three number-magnitude comparison effects: distance, size, and ratio. Our results show that LLMs show human-like distance and ratio effects across number formats. The size effect is also observed among models for the digit number format, but not for the other number formats, showing that LLMs do not completely capture numeration. Using MDS to scale down the pairwise (dis)similarities between number representations produces varying correspondences between LLMs and the logarithmically compressed MNL of humans, with GPT-2 showing the highest correlation (using digits as inputs). Our residual analysis exhibits high deviation from expected outputs for the numbers 2, 5, 9 which we explain through patterns observed in previous linguistics studies. The behavioral benchmarking of the numeric magnitude representations of LLMs presented here helps us understand the cognitive plausibility of the representations the models learn. Our results show that LLM pre-training allows models to approximately learn human-like behaviors for two out of the three magnitude effects without the need to posit explicit neural circuitry. Future work on building pre-trained architectures to improve numerical cognition abilities should also be evaluated using these three effects." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b13" ], "table_ref": [ "tab_16" ], "text": "Limitations to our work are as follows: (1) We only study the three magnitude effects for the number word and digit denotations of the numbers 1 to 9. The effects for the number 0, numbers greater than 10, decimal numbers, negative numbers, etc. are beyond the scope of this study. Future work can design behavioral benchmark for evaluating whether LLMs shows these effects for these other number classes.\n(2) The mapping of LLM behaviors to human behaviors and effects might vary for each effect. Thus, we might require a different linking hypothesis for each such effect. (3) We only use the models built for English tasks and do not evaluate multi-lingual models. (4) We report and analyze aggregated scores across different dimensions. There can be some information loss in this aggregation.\n(5) Our choice of models is limited by certain resource constraints. Future works can explore the use of other foundation / super-large models (1B parameters +) and API-based models like GPT3 and OPT3. ( 6) The behavioral analysis of this study is one-way: we look for human performance characteristics and behaviors in LLMs.\nFuture research can utilize LLMs to discover new numerical effects and look for the corresponding performance characteristics in humans. This could spur new research in cognitive science. ( 7) The results show similar outputs to low dimensional human output and show that we do not need explicit neural circuitry for number understanding. We do not suggest models actually are humanlike in how they process numbers. For the models in Table1, we show the three effects for the larger variants. The variants have the same architectures and training methodologies as their base variants but more parameters ( thrice the number of parameters). The in-depth results for the and size effect as inputs (column: Total Averages; tables 2, 4) and the ratio effect (column: Total Averages; Table4) as output. Importantly, the distance effect averages are statistically significant predictors of ratio effect averages; see Table 23). These results provide a superficial view of the impact of distance and size effect in the ratio effect scores because of the aggregation performed at different levels of the study. " }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [ "b13" ], "table_ref": [], "text": "A. 1 " } ]
Large Language Models (LLMs) do not differentially represent numbers, which are pervasive in text. In contrast, neuroscience research has identified distinct neural representations for numbers and words. In this work, we investigate how well popular LLMs capture the magnitudes of numbers (e.g., that 4 < 5) from a behavioral lens. Prior research on the representational capabilities of LLMs evaluates whether they show human-level performance, for instance, high overall accuracy on standard benchmarks. Here, we ask a different question, one inspired by cognitive science: How closely do the number representations of LLMs correspond to those of human language users, who typically demonstrate the distance, size, and ratio effects? We depend on a linking hypothesis to map the similarities among the model embeddings of number words and digits to human response times. The results reveal surprisingly human-like representations across language models of different architectures, despite the absence of the neural circuitry that directly supports these representations in the human brain. This research shows the utility of understanding LLMs using behavioral benchmarks and points the way to future work on the number representations of LLMs and their cognitive plausibility.
Numeric Magnitude Comparison Effects in Large Language Models
[ { "figure_caption": "Figure 1 :1Figure 1: The input types, LLMs, and effects in this study. The three effects are depicted in an abstract manner in sub-figures (a), (b), (c).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Distance effect for the best-performing layer (9th layer) for the BART model", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Size effect for the best-performing layer for the BERT model (layer 11).", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "tw numbers c mpared/min f the tw numbers c mpared N rmalized C sine Similarity", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Ratio effect for the best-performing layer for the BART model (layer 3).", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: MDS visualization on averaged distances of the GPT-2 model for all number formats and layers.", "figure_data": "", "figure_id": "fig_6", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "4 Magnitude Representation Effects inLLMs4.1 The Distance EffectLayerT5BART RoB XLNET BERT GPT-2Avg.10.974 0.965 0.9540.9670.979 0.937 0.96320.984 0.959 0.9590.9510.983 0.940 0.96330.973 0.957 0.9610.9600.955 0.937 0.95740.956 0.964 0.9770.9620.956 0.923 0.95750.941 0.951 0.9760.9480.982 0.931 0.95560.972 0.916 0.9660.9420.991 0.932 0.95370.967 0.960 0.9670.9430.990 0.930 0.95980.945 0.969 0.9540.9230.977 0.931 0.95090.950 0.978 0.9450.9200.967 0.929 0.948100.933 0.958 0.9280.9260.923 0.931 0.933110.924 0.975 0.9680.9510.926 0.930 0.946120.920 0.956 0.8540.9340.890 0.931 0.914LLMs\\InputLCMC Digits Avg.T50.986 0.937 0.936 0.953BART0.942 0.951 0.983 0.959RoBERTa0.945 0.943 0.964 0.951XLNET0.888 0.965 0.979 0.944BERT (uncased)0.9760.944 0.960GPT-20.906 0.904 0.986 0.932Total Averages across models0.941 0.946 0.965 0.950", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Distance Effect: Averaged (across layers) R 2 values of different LLMs on the three numbers when fitting a linear function. LC: Lowercase number words, MC: Mixed-case number words.", "figure_data": "", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The high R 2 values indicate a human-like distance effect.", "figure_data": "LayerT5BART RoB XLNET BERT GPT-2Avg.10.756 0.651 0.4940.6020.617 0.466 0.59720.685 0.637 0.5070.5510.783 0.653 0.63630.744 0.697 0.5030.4920.834 0.574 0.64140.726 0.677 0.5190.4930.871 0.478 0.62750.665 0.685 0.6100.540.783 0.528 0.63560.670 0.692 0.5860.5630.757 0.539 0.63570.701 0.634 0.6130.5850.823 0.539 0.64980.705 0.687 0.5670.5910.870 0.532 0.65990.697 0.757 0.5810.5660.877 0.541 0.670100.727 0.694 0.6220.5550.905 0.533 0.672110.729 0.756 0.7340.6020.911 0.547 0.713120.703 0.702 0.7440.6620.889 0.550 0.708", "figure_id": "tab_2", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": "LLMs\\InputLCMC Digits Avg.T50.702 0.539 0.886 0.709BART0.614 0.568 0.885 0.689RoBERTa0.520 0.466 0.783 0.59XLNET0.500 0.408 0.793 0.567BERT (uncased)0.8030.851 0.827GPT-20.434 0.332 0.853 0.54Total Averages across models0.596 0.519 0.842 0.654", "figure_id": "tab_3", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Ratio Effect: Averaged (across layers) R 2 values of different LLMs on different number formats when fitting a negative exponential function. LC: Lowercase number words, MC: Mixed-case number words.", "figure_data": "LayerT5BART RoB XLNET BERT GPT-2Avg.10.850 0.820 0.7560.8680.837 0.735 0.81120.865 0.837 0.7450.8280.878 0.755 0.81930.846 0.861 0.7250.8200.853 0.738 0.80740.847 0.859 0.7390.8220.820 0.659 0.79150.851 0.847 0.8050.8250.847 0.695 0.81260.880 0.821 0.8000.8160.883 0.703 0.81770.867 0.811 0.7950.8100.883 0.698 0.81180.824 0.849 0.7800.7800.880 0.702 0.80390.806 0.852 0.7800.7460.861 0.705 0.791100.785 0.821 0.7200.7540.779 0.704 0.760110.755 0.849 0.6660.7810.769 0.702 0.754120.731 0.834 0.5160.7170.687 0.708 0.699", "figure_id": "tab_4", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Ratio Effect: Averaged (across number formats) R 2 values of different LLMs on different input layers when fitting a negative exponential function. RoB: Roberta-base model, BERT: uncased variant.", "figure_data": "", "figure_id": "tab_5", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Averaged (across layers) correlations when comparing MDS values with Log 10 1 to Log 10 9 for different LLMs. LC: Lowercase number words, MC: Mixed-case number words.", "figure_data": "LLMs\\InputLCMC Digits Avg.T50.489 0.526 0.410 0.475BART0.676 0.714 0.678 0.690RoBERTa0.520 0.597 0.587 0.568XLNET0.622 0.620 0.622 0.621BERT (uncased)0.3120.423 0.368GPT-20.566 0.513 0.828 0.636Total Averages across models0.531 0.547 0.591 0.560", "figure_id": "tab_6", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "", "figure_data": "LayerT5BART RoB XLNET BERT GPT-2Avg.10.686 0.679 0.6020.5950.739 0.526 0.63820.271 0.693 0.7630.7340.704 0.669 0.63930.374 0.657 0.7720.7040.456 0.685 0.60840.385 0.728 0.4890.6210.425 0.663 0.55250.476 0.733 0.5970.7070.448 0.615 0.59660.540 0.739 0.5710.5980.465 0.608 0.58770.687 0.696 0.2500.6770.445 0.665 0.57080.529 0.624 0.5940.5910.189 0.624 0.52590.544 0.718 0.6910.5660.400 0.671 0.598100.502 0.624 0.6970.5630.394 0.613 0.566110.195 0.708 0.6020.543-0.013 0.675 0.451120.509 0.677 0.1860.557-0.239 0.615 0.384Number T5 BART RoB XLNET BERT GPT-2 Avg.10.010.000.020.000.020.000.0120.100.170.150.170.090.120.1330.070.050.070.100.060.100.0740.050.040.050.050.030.050.0450.170.090.070.050.200.050.1160.020.040.080.020.060.040.0470.090.080.110.040.200.060.1080.040.010.080.010.090.050.0590.400.080.170.180.440.170.24", "figure_id": "tab_7", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Residual analysis on MDS outputs in 1 dimension on the base variants of the model. RoB: Robertabase model, BERT: uncased variant.", "figure_data": "", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Averaged distance effect, size effect, ratio effect, and the MDS correlation values for the different input types of the models.", "figure_data": "", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Size Effect: Averaged (across inputs) R 2 values of different Larger variants of LLMs for different layers when fitting a linear function. RoB: Roberta-base model, BERT: uncased variant.", "figure_data": "Number T5 BART RoB XLNET BERT GPT-2 Avg.10.040.010.010.010.010.000.0120.090.170.090.160.070.120.1230.020.090.040.070.030.100.0640.020.070.030.040.030.070.0450.120.070.130.170.160.020.1160.200.060.060.050.100.020.0870.170.090.090.070.120.020.0980.220.090.050.060.090.030.0990.150.190.250.360.250.140.22Table 12: Residual analysis on MDS outputs in 1 dimen-sion on the large variants of the models. RoB: Roberta-base model, BERT: uncased variant.AveragedAveragedAveragedAveraged MDSLayer\\EffectsDistanceSizeRatioCorrelationEffectEffectEffectvalues10.9670.6470.8250.64320.9630.5490.7180.55730.9640.5870.7360.58440.9680.6220.7650.54450.9620.6320.7630.42360.9580.6410.7740.48370.9570.5910.7520.52680.9560.6080.7530.55090.9560.5990.7730.625100.9440.6120.7660.610110.9380.6080.7420.526120.9230.6040.7260.557130.9390.6590.7390.538140.9440.6560.7550.562150.9400.6450.7510.500160.9330.6110.7410.509170.9340.5670.7300.550180.9330.5800.7230.505190.9190.5590.6900.527200.9000.5570.6710.535210.8670.5580.6440.553220.8540.5710.6640.524230.8290.5090.6330.484240.8050.5080.6220.414Table 13: Averaged distance effect, size effect, ratioeffect, and MDS correlation values for the 24 layers ofthe models.LLMs\\InputLCMC Digits Avg.T50.961 0.957 0.974 0.964BART0.892 0.957 0.845 0.898RoBERTa0.893 0.959 0.946 0.933XLNET0.924 0.952 0.855 0.910BERT (uncased)0.8370.969 0.903GPT-20.946 0.934 0.987 0.956Total Averages across models0.909 0.933 0.930 0.927Table 14: Distance Effect: Averaged (across layers) R 2values of different Larger variants of LLMs on differentinput types when fitting a linear function. LC: Lower-case number words, MC: Mixedcase number words.three effects are presented in tables 14, 16, 15, 17,18, and 19. We also present the MDS correlationvalues in the same manner as done for base variants;see tables 20 and 21.Given the large layer count for these model vari-", "figure_id": "tab_10", "figure_label": "17", "figure_type": "table" }, { "figure_caption": "Ratio Effect: Averaged (across layers) R 2 values of different Larger variants of LLMs on different input types when fitting a negative exponential function.", "figure_data": "LC: Lowercase number words, MC: Mixedcase numberwords.model.A.3 Impact of Distance effect and Size effectin Ratio effect scoresWhen interpreting LLM findings on the ratio effect,we observe that they are dominated by the distanceeffect as compared to the size effect. We observethe same decreasing trend in averaged results overinput types in layers; see Table 7 (column: TotalAverages). The impact of layer-wise trends can bequantified using regression with the distance effect", "figure_id": "tab_11", "figure_label": "18", "figure_type": "table" }, { "figure_caption": "Ratio Effect: Averaged (across inputs) R 2 values of different Larger variants of LLMs for different layers when fitting a negative exponential function. RoB: Roberta-base model, BERT: uncased variant.", "figure_data": "LLMs\\InputLCMC Digits Avg.T50.572 0.127 0.408 0.369BART0.677 0.546 0.515 0.580RoBERTa0.669 0.573 0.473 0.572XLNET0.498 0.373 0.465 0.445BERT (uncased)0.5190.541 0.530GPT20.623 0.624 0.888 0.711Total Averages across models0.593 0.460 0.548 0.534", "figure_id": "tab_12", "figure_label": "19", "figure_type": "table" }, { "figure_caption": "Averaged (across layers) correlation values when comparing MDS values with Log 10 1 to Log 10 9 for Large variants of different LLMs. LC: Lowercase number words, MC: Mixedcase number words.", "figure_data": "", "figure_id": "tab_13", "figure_label": "20", "figure_type": "table" }, { "figure_caption": "Averaged (across inputs) correlation values of the Large variants of different LLMs on different model layers when comparing MDS values with Log 10 1 to Log 10 9. RoB: Roberta-base model, BERT: uncased variant.", "figure_data": "LayerT5BART RoB XLNET BERT GPT-2 Avg.10.675 0.633 0.7310.5900.542 0.689 0.64320.249 0.662 0.4610.6490.555 0.767 0.55730.251 0.673 0.5220.6890.662 0.707 0.58440.156 0.682 0.6980.6740.353 0.703 0.54450.059 0.518 0.4930.6860.065 0.719 0.42360.219 0.471 0.4110.5330.535 0.729 0.48370.569 0.421 0.5580.5490.367 0.688 0.52680.578 0.413 0.5400.6900.385 0.695 0.55090.581 0.710 0.5940.5460.598 0.720 0.625100.495 0.716 0.5310.4870.710 0.718 0.610110.286 0.691 0.4040.4950.576 0.702 0.526120.481 0.682 0.3040.4660.708 0.700 0.557130.387 0.605 0.5330.3940.588 0.721 0.538140.483 0.672 0.5380.3830.574 0.718 0.562150.486 0.386 0.5960.2410.586 0.705 0.500160.485 0.454 0.6890.1400.591 0.692 0.509170.536 0.677 0.6170.1630.588 0.719 0.550180.259 0.562 0.6510.2510.602 0.704 0.505190.458 0.750 0.5830.0770.599 0.694 0.527200.463 0.545 0.6520.2460.585 0.718 0.535210.362 0.526 0.6530.5240.554 0.700 0.553220.402 0.522 0.6560.2470.596 0.719 0.52423-0.019 0.466 0.6490.4900.600 0.720 0.48424-0.051 0.473 0.6520.4760.205 0.726 0.414VariantEffectLCMC Digits Avg.Distance0.9760.944 0.960UncasedSize Ratio0.803 0.9060.851 0.827 0.757 0.831MDS (Corr.)0.3120.423 0.386Distance0.958 0.980 0.890 0.943CasedSize Ratio0.664 0.691 0.918 0.758 0.854 0.880 0.866 0.867MDS (Corr.) 0.621 0.553 0.487 0.554", "figure_id": "tab_14", "figure_label": "21", "figure_type": "table" }, { "figure_caption": "Behavioral differences between the cased and uncased variants of the BERT architecture. LC: Lowercase number words, MC: Mixed-case number words.", "figure_data": "VariantCoef. Std. Error t Stat P-valueIntercept-0.9160.531-1.7220.119BaseDistance Effect 1.9530.4524.314 0.001 ⊙Size Effect-0.2280.188-1.2120.256Intercept-0.1880.075-2.491 0.0.021LargeDistance Effect 0.7000.1175.997 0.000 ⊕Size Effect0.4470.1243.612 0.001 ⊙", "figure_id": "tab_15", "figure_label": "22", "figure_type": "table" }, { "figure_caption": "Impact of layer-wise trends of distance and size effect on the ratio effect; ⊙ indicates statistical significance with p-value less that 0.01, ⊕ indicates statistical significance with p-value less that 0.00001", "figure_data": "", "figure_id": "tab_16", "figure_label": "23", "figure_type": "table" } ]
Raj Sanjay; Vijay Marupudi; Reba Koenen; Khushi Bhardwaj; Sashank Varma
[ { "authors": "Aida Amini; Saadia Gabriel; Shanchuan Lin; Rik Koncel-Kedziorski; Yejin Choi; Hannaneh Hajishirzi", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "MathQA: Towards interpretable math word problem solving with operation-based formalisms", "year": "2019" }, { "authors": "Daniel Ansari; Nicolas Garcia; Elizabeth Lucas; Kathleen Hamon; Bibek Dhital", "journal": "Neuroreport", "ref_id": "b1", "title": "Neural correlates of symbolic number processing in children and adults", "year": "2005" }, { "authors": "Sudeep Bhatia; Russell Richie", "journal": "Psychological Review", "ref_id": "b2", "title": "Transformer networks of human conceptual knowledge", "year": "2022" }, { "authors": "A Vincent; Brian H Billock; Tsou", "journal": "Psychological Bulletin", "ref_id": "b3", "title": "To honor Fechner and obey Stevens: Relationships between psychophysical and neural nonlinearities", "year": "2011" }, { "authors": "I Borg; P J F Groenen", "journal": "Springer", "ref_id": "b4", "title": "Modern Multidimensional Scaling: Theory and Applications", "year": "2005" }, { "authors": "Collin Burns; Saurav Kadavath; Akul Arora; Steven Basart; Eric Tang; Dawn Song; Jacob Steinhardt", "journal": "", "ref_id": "b5", "title": "Measuring mathematical problem solving with the MATH dataset", "year": "2021" }, { "authors": "Jessica F Cantlon", "journal": "Proceedings of the National Academy of Sciences", "ref_id": "b6", "title": "Math, monkeys, and the developing brain", "year": "2012" }, { "authors": "Roi Cohen Kadosh; Jan Lammertyn; Veronique Izard", "journal": "Progress in Neurobiology", "ref_id": "b7", "title": "Are numbers special? An overview of chronometric, neuroimaging, developmental and comparative studies of magnitude representation", "year": "2008" }, { "authors": "Stanislas Dehaene; Jacques Mehler", "journal": "Cognition", "ref_id": "b8", "title": "Crosslinguistic regularities in the frequency of number words", "year": "1992" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b9", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Cody S Ding", "journal": "Springer International Publishing", "ref_id": "b10", "title": "Fundamentals of Applied Multidimensional Scaling for Educational and Psychological Research", "year": "2018" }, { "authors": "Dheeru Dua; Yizhong Wang; Pradeep Dasigi; Gabriel Stanovsky; Sameer Singh; Matt Gardner", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs", "year": "2019" }, { "authors": "Gustav Theodor; Fechner ", "journal": "", "ref_id": "b12", "title": "Elements of psychophysics", "year": "1860" }, { "authors": "Mor Geva; Ankit Gupta; Jonathan Berant", "journal": "", "ref_id": "b13", "title": "Injecting numerical reasoning skills into language models", "year": "2020" }, { "authors": "Peter Gordon", "journal": "Science", "ref_id": "b14", "title": "Numerical cognition without words: Evidence from amazonia", "year": "2004" }, { "authors": "Justin Halberda; M M Michèle; Lisa Mazzocco; Feigenson", "journal": "Nature", "ref_id": "b15", "title": "Individual differences in nonverbal number acuity correlate with maths achievement", "year": "2008" }, { "authors": "Stevan Harnad", "journal": "", "ref_id": "b16", "title": "Symbol grounding problem", "year": "2023" }, { "authors": "M Brenden; Gregory L Lake; Murphy", "journal": "", "ref_id": "b17", "title": "Word meaning in minds and machines", "year": "2021" }, { "authors": "M Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Veselin Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b18", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2020" }, { "authors": "Seyeon Bill Yuchen Lin; Rahul Lee; Xiang Khanna; Ren", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Birds have four legs?! NumerSense: Probing Numerical Commonsense Knowledge of Pre-Trained Language Models", "year": "2020" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; M Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b20", "title": "Roberta: A robustly optimized bert pretraining approach", "year": "2019" }, { "authors": "Rebecca Merkley; Daniel Ansari", "journal": "Experimental Brain Research", "ref_id": "b21", "title": "Using eye tracking to study numerical cognition: The case of the ratio effect", "year": "2010" }, { "authors": "Bonan Min; Hayley Ross; Elior Sulem; Amir Pouran; Ben Veyseh; Thien Huu Nguyen; Oscar Sainz; Eneko Agirre; Ilana Heintz; Dan Roth", "journal": "", "ref_id": "b22", "title": "Recent advances in natural language processing via large pre-trained language models: A survey", "year": "2021" }, { "authors": "Robert S Moyer; Thomas K Landauer", "journal": "Nature", "ref_id": "b23", "title": "Time required for judgements of numerical inequality", "year": "1967" }, { "authors": "Aakanksha Naik; Abhilasha Ravichander; Carolyn Rose; Eduard Hovy", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Exploring numeracy in word embeddings", "year": "2019" }, { "authors": "Andreas Nieder; Stanislas Dehaene", "journal": "Annual Review of Neuroscience", "ref_id": "b25", "title": "Representation of Number in the Brain", "year": "2009" }, { "authors": "Andreas Nieder; Earl K Miller", "journal": "Neuron", "ref_id": "b26", "title": "Coding of Cognitive Magnitude: Compressed Scaling of Numerical Information in the Primate Prefrontal Cortex", "year": "2003" }, { "authors": "Hans-Christoph Nuerk; Ulrich Weger; Klaus Willmes", "journal": "Cognition", "ref_id": "b27", "title": "Decade breaks in the mental number line? Putting the tens and units back in different bins", "year": "2001" }, { "authors": "John M Parkman", "journal": "Journal of Experimental Psychology", "ref_id": "b28", "title": "Temporal aspects of digit and letter inequality judgments", "year": "1971" }, { "authors": "Pierre Pica; Cathy Lemer; Stanislas Ve'ronique Izard; Dehaene", "journal": "Science", "ref_id": "b29", "title": "Exact and approximate arithmetic in an amazonian indigene group", "year": "2004" }, { "authors": "Thijs Pollmann; Carel Jansen", "journal": "Cognition", "ref_id": "b30", "title": "The language user as an arithmetician", "year": "1996" }, { "authors": "Alec Radford; Jeff Wu; R Child; David Luan; Dario Amodei; Ilya Sutskever", "journal": "", "ref_id": "b31", "title": "Language models are unsupervised multitask learners", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "", "ref_id": "b32", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2019" }, { "authors": "Russell Richie; Sudeep Bhatia", "journal": "Cognitive Science", "ref_id": "b33", "title": "Similarity Judgment Within and Across Categories: A Comprehensive Model Comparison", "year": "2021" }, { "authors": "Georgios P Spithourakis; Sebastian Riedel", "journal": "", "ref_id": "b34", "title": "Numeracy for language models: Evaluating and improving their ability to predict numbers", "year": "2018" }, { "authors": "Avijit Thawani; Jay Pujara; Filip Ilievski; Pedro Szekely", "journal": "", "ref_id": "b35", "title": "Representing numbers in NLP: a survey and a vision", "year": "2021" }, { "authors": "Eric Wallace; Yizhong Wang; Sujian Li; Sameer Singh; Matt Gardner", "journal": "", "ref_id": "b36", "title": "Do NLP models know numbers? probing numeracy in embeddings", "year": "2019" }, { "authors": "Alex Wang; Amanpreet Singh; Julian Michael; Felix Hill; Omer Levy; Samuel R Bowman", "journal": "", "ref_id": "b37", "title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", "year": "2018" }, { "authors": "Jiapeng Wang; Yihong Dong", "journal": "Information", "ref_id": "b38", "title": "Measurement of text similarity: A survey", "year": "2020" }, { "authors": "Gail Weiss; Yoav Goldberg; Eran Yahav", "journal": "", "ref_id": "b39", "title": "On the practical computational power of finite precision rnns for language recognition", "year": "2018" }, { "authors": "", "journal": "", "ref_id": "b40", "title": "Wikipedia, the free encyclopedia", "year": "2004" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Zhilin Yang; Zihang Dai; Yiming Yang; Jaime G Carbonell; Ruslan Salakhutdinov; Quoc V Le", "journal": "", "ref_id": "b42", "title": "Xlnet: Generalized autoregressive pretraining for language understanding", "year": "2019" }, { "authors": "Xikun Zhang; Deepak Ramachandran; Ian Tenney; Yanai Elazar; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Do language embeddings capture scales?", "year": "2020" }, { "authors": "Yukun Zhu; Ryan Kiros; Rich Zemel; Ruslan Salakhutdinov; Raquel Urtasun; Antonio Torralba; Sanja Fidler", "journal": "", "ref_id": "b44", "title": "Aligning books and movies: Towards story-like visual explanations by watching movies and reading books", "year": "2015" } ]
[]
10.18653/v1/2021.emnlp-main.367
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b67", "b61", "b58", "b24", "b60", "b67", "b10", "b0", "b54" ], "table_ref": [], "text": "Human intelligence possesses the extraordinary ability to adapt rapidly to new tasks and multimodal environments. This capacity emerges at an early age, as humans acquire new skills and learn to solve problems by imitating others or following natural language instructions. Studies in developmental psychology have proven that natural language communication is a highly effective method of transmitting generic knowledge between individuals, even among infants. This learning approach accelerates the acquisition of new skills by eliminating the need for trial-and-error learning from observations. One of the long-lasting goals of AI agents (Winograd, 1972) is the ability to seamlessly interact * with humans to assist in solving tasks. To achieve this, the agent must understand and respond to human language to execute instructions in a given environment (Skrynnik et al., 2022;Kiseleva et al., 2022a,b) or ask clarifying questions (Aliannejadi et al., 2021a;Shi et al., 2022), where the end goal is to build an embodied agent. Over the years, researchers have proposed many tasks to tackle this human-AI collaboration challenge, many centered around humans providing instructions to the agent to solve a goal (Gluck and Laird, 2018;Shridhar et al., 2020). An early example is the blocks world task, where the agent must understand human instructions to move blocks on a grid (Winograd, 1972;Bisk et al., 2016). Other setups use Minecraft (Gray et al., 2019a), such as to move objects around (Abramson et al., 2020), or to simulate human behavior (Park et al., 2023)." }, { "figure_ref": [ "fig_0" ], "heading": "Equal contribution", "publication_ref": [ "b51", "b8" ], "table_ref": [], "text": "Our paper aims to provide an in-depth investigation into the production of clarifying questions in the context of human-centered AI instructionbased interaction using a Minecraft environment. This scenario presents a unique challenge, as the agent must navigate and complete tasks in a complex, virtual environment, relying solely on natural language instructions. To ensure successful task completion, the agent must accurately identify gaps in the instructions and pose relevant clarifying questions, as demonstrated in Figure 1. By tackling this problem head-on, we intend to pave the way for more effective and user-friendly human-AI interactions.\nA significant challenge hindering the exploration of clarifying question generation to enhance the user experience during interactions with embodied agents (Narayan-Chen et al., 2019;Bara et al., 2021) is the scarcity of appropriate datasets and scalable data collection tools. These deficiencies have impeded progress in the field and pose a considerable obstacle to developing effective solutions. Our work addresses this challenge by proposing a novel dataset and scalable data collection methodology, providing a crucial contribution to the field's progress. By addressing this important obstacle, we believe our work will enable researchers to explore new avenues in the field and ultimately enhance user experience in human-AI interactions.\nIn summary, our main contributions are: C1 Crowdsourcing Tool for Collecting Interactive Grounded Language Instructions: The development of a crowdsourcing tool specifically designed to efficiently gather interactive grounded language instructions within a Minecraft-like environment at a large scale (Sec. 3). This tool facilitates the collection of high-quality data for further research and experimentation. C2 Largest Available Dataset of Human-to-Human Grounded Language Instructions:\nThe creation of an extensive and comprehensive dataset comprising human-to-human grounded language instructions, accompanied by clarifying questions (Sec. 4). This dataset represents a valuable resource for various research directions, including but not limited to building structures based on given instructions or predicting clarifying questions. C3 Baselines for Predicting Clarifying Questions: The establishment of a set of baselines for the task of predicting clarifying questions that serve as a benchmark for evaluating the performance of future models (Sec. 5)." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b68", "b16", "b29", "b19", "b48", "b15", "b1", "b57", "b11", "b14", "b42", "b12", "b44", "b20", "b74", "b17", "b9", "b65", "b63", "b22", "b18", "b71", "b69", "b7", "b21", "b55", "b13", "b36", "b37", "b49", "b50", "b61", "b64", "b70", "b62", "b51", "b32", "b53", "b40", "b75" ], "table_ref": [], "text": "Natural Language Interfaces (NLIs) have been a subject of study in various disciplines, including human-computer interaction and information search, for several decades. Early works such as (Woods et al., 1972;Codd, 1974;Hendrix et al., 1978) laid the foundation for understanding and designing effective interfaces for human language communication with computers. In recent years, there has been a resurgence of interest in NLIs due to advances in language understanding capabilities driven by large-scale deep learning models (Devlin et al., 2018;Liu et al., 2019;Clark et al., 2020;Adiwardana et al., 2020;Roller et al., 2020;Brown et al., 2020;OpenAI, 2023;Chowdhery et al., 2022) and the increasing demand for various applications such as virtual assistants, dialog systems (Li et al., 2019(Li et al., , 2020c;;Burtsev et al., 2017;Li et al., 2020bLi et al., , 2021)), semantic parsing, and question answering systems (Liu andLane, 2017, 2018;Dinan et al., 2020;Zhang et al., 2019). The scope of NLIs have expanded from traditional databases to knowledge bases (Copestake and Jones, 1990;Berant et al., 2013) to robots (Tellex et al., 2011), personal assistants (Kiseleva et al., 2016b,a), Web service APIs (Su et al., 2017), and other forms of interaction (Fast et al., 2018;Desai et al., 2016;Young et al., 2013). The focus has shifted towards interactivity and continuous learning, enabling agents to interact with users, learn new tasks from instructions, assess their uncertainty, ask clarifying questions, seek and leverage human feedback to correct mistakes, and even assess their own mistakes. This includes systems that can learn new tasks from instructions (Li et al., 2020a), assess their uncertainty (Yao et al., 2019), ask clarifying questions (Aliannejadi et al., 2020a(Aliannejadi et al., , 2021b;;Arabzadeh et al., 2022), seek and leverage feedback from humans to correct mistakes (Elgohary et al., 2020), currently LLM can asses their own mistakes (Press et al., 2022).\nThis paper addresses the important aspect of grounded language understanding, which involves connecting natural language instructions with the real-world context and taking action accordingly. This is crucial to enabling more effective and accurate communication between humans and intelligent agents. Our work focuses specifically on tackling grounded language understanding in the context of collaborative building tasks performed by embodied agents, as highlighted in (Carta et al., 2023;Kiseleva et al., 2021Kiseleva et al., , 2022b;;Mehta et al., 2023;Mohanty et al., 2022;Skrynnik et al., 2022).\nThe selection of Minecraft as an environment for grounded language understanding in this work is rooted in a multitude of compelling reasons. Szlam et al. (2019) substantiated the advantages of constructing an open interactive assistant within the sandbox construction game of Minecraft, as opposed to a complex and costly real-world assistant. The Minecraft world's constraints (e.g., coarse 3-d voxel grid and simple physics) and the regularities in the head of the distribution of ingame tasks allow numerous scenarios for grounded NLU research (Yao et al., 2020;Srinet et al., 2020;Narayan-Chen et al., 2019). Furthermore, the immense popularity of Minecraft as a video game makes it an enticing competition domain, with the second-highest number of total copies sold among all games ever released. This popularity ensures that players will be enthusiastic about interacting with the developed assistants, thus providing a rich resource for human-in-the-loop studies. Another important advantage of using Minecraft is the availability of the highly developed set of tools for logging agents interactions and deploying agents for evaluation with human-in-the-loop, including: • Malmo (Johnson et al., 2016): a powerful platform for AI experimentation; • Craftassist (Gray et al., 2019b): a framework for dialog-enabled interactive agents; • TaskWorldMod (Ogawa et al., 2020): a platform for situated task-oriented dialog data collection using gamification; and • MC-Saar-Instruct (Köhn et al., 2020): a platform for Minecraft Instruction Giving Agents; • IGLU GridWorld (Zholus et al., 2022): fast environment for training embodied agents." }, { "figure_ref": [ "fig_1" ], "heading": "Data Collection Tool", "publication_ref": [ "b51", "b31", "b64", "b73", "b32" ], "table_ref": [], "text": "We developed a tool to enable the collection of multi-modal data such as text, images, and keyvalue pairs for the collaborative building task (Kiseleva et al., 2022a;Narayan-Chen et al., 2019;Jayannavar et al., 2020). This task involves training interactive embodied agents to solve complex tasks while receiving natural language instructions within a collaborative environment. The interactive agent is defined as follows: (i) Accurately following instructions in natural language, with a grounding in the current world; (ii) Seeking clarification when faced with uncertainties; (iii) Swiftly adapting to newly acquired skills.\nFor our data collection tool, we strategically harnessed a Minecraft-like game environment, which has gained significant popularity and adoption in the NLP and RL communities. Utilizing this environment can overcome the limitations and costs associated with developing and maintaining a realworld assistant (Szlam et al., 2019). The Minecraft world's unique characteristics, such as its 3D voxel gridworld and adherence to simple rules of physics, provide an abundance of research scenarios and opportunities for experimentation with agents trained by demonstration. The game's interactive nature, including player interaction and dialog exchange, enables grounded language understanding and exploration. Another important factor for consideration is the availability of tools for logging agents' interaction and deploying agents for evaluation with human-in-the-loop within Minecraft.\nNarayan-Chen et al., 2019 proposed a setup for a collaborative building task with the Minecraft environment where an Architect is provided with a target structure that needs to be built by the Builder. The Architect and Builder communicate with each other through a chat interface. The Architect provides instructions to the Builder on how to create the target structure, and the Builder can ask clarifying questions if an instruction is unclear (Zhang et al., 2021). The Architect is able to view the actions of the Builder. This approach required installing Microsoft's Project Malmo (Johnson et al., 2016) client, which provides an API for Minecraft agents to chat, build, and the ability to save and load game states. This setup is used to collect multi-turn interactions between the Architect and the Builder collaboratively working towards the common goal of building a given target structure. However, the data collection setup is limited to lab-based studies.\nIn our work, we have developed and released an open-source data collection tool1 . This tool is specifically designed to facilitate the collection of multi-modal collaborative building tasks, seamlessly integrating with crowd-sourcing platforms for efficient participant scaling. Notably, the tool eliminates the need for participants to install a local client, streamlining the data collection process. Figure 2 illustrates the overall design of the tool.\nIn our study, we have used Amazon Mechanical Turk (MTurk) as the crowd-sourcing platform. Each annotator submits a task referred to as a HIT (Human Intelligence Task). A HIT consists of the CraftAssist (Gray et al., 2019b) voxelworld along with a HIT survey. The HIT survey is customizable for different tasks and includes rules for a given task, a form where instructions can be submitted, or clarifying questions asked for the building task. CraftAssist is a framework that provides tools and a platform for dialog-enabled interactive agents that learn from natural language interactions. The library provides a 3-d voxelworld grid where agents perform building actions that can be recorded as action states and retrieved for following sessions. Current actions supported by the integrated CraftAssist library include picking, placing, and removing blocks of different colors within the voxelworld. Agents can also jump to place the blocks. These actions enable agents to create structures of varying complexity. Examples of the task or HITs in MTurk along with the embedded voxelworld are provided in the appendix. Finally, the data is stored in two kinds of data stores for ease of access: Tables are used to save game ids, instructions, and clarifying questions while the Object Store is used for storing files with game world states and collected actions. This data collection tool has been used to collect multi-turn interactions between Architect and Builder similar to the datasets collected by Narayan-Chen et al., 2019 described next." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b61" ], "table_ref": [], "text": "We used the previously described data collection tool to build corpora of multi-modal data which could be used towards solving wide-ranging NLP and RL tasks including training interactive agents by demonstrations given natural language instructions (Skrynnik et al., 2022). Our research initially concentrates on multi-turn interactions, following a similar approach as presented by (Narayan-Chen et al., 2019) (Sec. 4.1). To enhance the size of our dataset, we subsequently expanded our data collection efforts to a Single-Turn dataset (Sec. 4.2). This approach allowed us to gather a larger corpus of data more efficiently. The datasets and accompanying code for analysis and visualization have been made openly available2 ." }, { "figure_ref": [ "fig_2" ], "heading": "Multi-Turn Dataset", "publication_ref": [], "table_ref": [], "text": "The Multi-Turn dataset comprises dialog-behavior sequences, which we called game, as depicted in Figure 3. In each turn, an annotator takes on the role of either the Architect or the Builder. The Architect provides the next step instruction, while the Builder executes the instruction or poses a clarifying question. The sequences follow a linear progression without branching out, starting from scratch for a given goal structure or building on top of intermediate results. The goal structures used in the dataset are sourced from (Narayan-Chen et al., 2019).\nTab. 1 shows the summary of the Multi-Turn dataset. There are 31 goal structures presented to annotators to build. We process and clean the data by filtering out missing and low-quality submissions such as very short instructions. Finally, we have 47 completed game sessions with the median duration of a game being around 1 hour. A game session is considered complete when the Builder is able to create a given goal structure after interacting with and following instructions provided by the Architect. This is denoted by the Architect marking the structure as \"complete\". Across all the games, there were 871 number of utterances or dialog interactions between the Architect and Builder annotators. The average length of instructions provided by the Architects was around 19 words, and the number of clarifying questions asked by the Builders -126.\nTo provide a deeper understanding of the covered structures in our multi-turn dataset, we performed manual labeling on the 31 structures. The labels, along with their meanings and the corresponding number of structures in the dataset in brackets, are as follows:\n(i) flat [7]: all blocks on the ground (ii) flying [27]: there are blocks that cannot be fully-added without removing some other blocks (iii) diagonal [6]: some blocks are adjacent (in the vertical axis) diagonally (iv) tricky [6]: some blocks are hidden or there should be a specific order in which they should be placed (v) tall [25]: a structure cannot be built without the agent being high enough (the placement radius is 3 blocks)" }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Single-Turn Dataset", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "From our extensive study on Multi-Turn data collection, we identified certain challenges that crowdsource workers encountered when engaging in the collaborative building task and issuing instructions for specific target structures. To streamline and enhance the crowd-sourcing process, we decided to simplify the task.\nOur approach involved removing the added complexity of building a predefined target structure. Instead, participants were free to perform freeform building actions within the voxelworld while providing instructions that should allow another worker to rebuild the same structure. This modification led to creating Single-Turn task segments, where participants collaborated asynchronously to construct the same structure. This adjustment enabled us to collect data at a faster pace, resulting in a larger corpus comprising natural language instructions, corresponding actions performed based on those instructions, and a set of clarifying questions. We record and save actions performed by annotators in a key-value pair format that stores the movement of the agent and positional changes of blocks within the voxelworld.\nTo provide diverse starting canvases for annotators, we utilized the Multi-Turn dataset to load different world states, which served as varying initial conditions for the building process. The process of collecting single-turn instructions and associated clarifying questions is illustrated in Figure 1. The detailed procedure is outlined below:\n• An annotator is assigned a world state from the Multi-Turn dataset as the starting point for their building task (Figure 1: Ideation Stage). • The annotator is prompted to perform a sequence of actions for a duration of one minute. • Then, the annotator is required to describe their set of actions in the form of instruction. • Another annotator is shown the instruction and is asked to perform the steps mentioned. If the instruction is unclear, the annotator specifies it as thus and asks clarification questions (Figure 1: Clarification Question Stage).\nTab. 2 presents comprehensive statistics on the Single-Turn dataset, currently the largest dataset available for interactive grounded language understanding. We processed and cleaned the collected Single-Turn dataset by following a heuristic approach which included filtering out samples where the length of instruction was very short. We also checked whether the instruction was in English and manually evaluated jobs to remove submissions by annotators who provided low-quality instructions such as providing the same instruction repeatedly. As shown in Table 2, the Single-Turn corpus consists of 8,136 pairs of actions and instructions. On average, an instruction has 18 words which are indicative of the instructions being descriptive enough for a one-minute building process.\nIn addition to the processing steps for cleaning instructions, for the clarifying questions we furthermore verified if the annotator marked the instruction as ambiguous, they must have issued a clarifying question else the submission would be filtered out with a warning provided to the annotator. This was to ensure that every instruction annotated as \"not clear\" is accompanied by at least one clarifying question. Out of 8,136 instructions, 1,056 (12.98%) were annotated as Not Clear thus being ambiguous, and 7,080 (87.02%) as Clear instructions. The average length of clarifying questions is around 12 words.\nTab. 3 exemplifies a few instructions marked as being unclear along with clarifying questions issued by annotators. The majority of clarifying questions fall into the categories below:\n• Color of blocks: Questions clarifying the color of the blocks to be used. For instance, an instruction specified \"Place four blocks to the east of the highest block horizontally.\" The corresponding clarifying question issued was \"Which color blocks?\" • Direction or Orientation: Questions that clarify the direction and orientation to build in the world. For example, given the instruction \"break three blue blocks and place two green ones.\" The clarifying question issued was \"Where do I place two green ones?\" • Number of blocks: Questions that clarify the number of blocks to be placed. For example, given the instruction \"Built yellow blocks across the north.\" The clarifying question issued was \"How many yellow blocks should be built?\" • Identifying blocks to be changed: Questions posed to identify specific blocks that need to be changed. For instance, given the instruction \"Destroy the 3 stacked red blocks on the east side.\nReplace them with 3 stacked blue boxes.\" The clarifying question issued was \"Which three of the four stacked red blocks on the east side need to be destroyed?\"\nThe Single-Turn approach offers several advantages over the sequential nature of the Multi-Turn process. One significant advantage is the independence of each sample, which allows for easier utilization in different tasks. Each turn can be interpreted as a complete set of information, enabling flexibility and versatility in its application. In the Single-Turn approach, samples can be easily plugged into different settings or scenarios, as they do not rely on the context of previous turns. This independence allows researchers to extract valuable insights and information from individual turns without the need to consider the entire dialogue sequence. Furthermore, the Single-Turn approach allows for the collection of multiple clarifying questions for each instruction. This enhances Destroy 2 purple blocks and then build 3 green blocks diagonally.\nWhich two purple blocks need to be destroyed? Destroy the 3 stacked red blocks on the east side. Replace them with 3 stacked blue boxes Which three of the four stacked red blocks on the east side need to be destroyed? Make a rectangle that is the width and height of the blue shape and fill it in with purple blocks.\nWhich side I need to make the rectangle is not clear Facing South remove the rightmost purple block. Place a row of three orange blocks to the left of the upper leftmost purple block. Place two orange blocks above and below the leftmost orange block.\nWhich one of the rightmost blocks should be removed?\nFacing north and purple green blocks will arrange one by one.\nWhere would you like to place the purple and green blocks exactly? Built yellow blocks across the north.\nHow many yellow blocks should be built?\nthe richness and diversity of the dataset, enabling a deeper understanding of the nuances and challenges in generating clarifying questions." }, { "figure_ref": [], "heading": "Baselines Models and Evaluation", "publication_ref": [ "b6", "b7" ], "table_ref": [], "text": "The collected dataset (Sec. 4.2) offers an opportunity to delve deeply into the exploration of the following key research questions:\n• When to ask clarifying questions?: This research question aims to predict whether an instruction provided by the Architect is sufficient for the Builder to complete a task successfully or if further clarification is required. • What clarifying question to ask? When faced with an instruction that is considered ambiguous, this research question focuses on determining the appropriate question to ask for further clarification. It is worth noting that issues related to determining \"When\" and \"What\" to ask as clarifying questions have gained significant attention in the domains of Natural Language Processing and Information Retrieval (Aliannejadi et al., 2019(Aliannejadi et al., , 2021b(Aliannejadi et al., , 2020b;;Arabzadeh et al., 2022). However, as far as our knowledge goes, this aspect has not been explored to a great extent in the context of interacting with intelligent embodied agents. In the following sections, we present two end-to-end pipelines that have shown promising performance in addressing each research question. " }, { "figure_ref": [], "heading": "When: Clarification Need Prediction", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "Al the baselines publicly are available. 3 All results are reported in Table 4. In line with the nature of the task, we utilize the F-1 Score as the evaluation metric since it provides a balanced measure of precision and recall, offering valuable insights into the performance of the classification model." }, { "figure_ref": [], "heading": "BERT fine-tuning", "publication_ref": [ "b19", "b7" ], "table_ref": [], "text": "Our dataset provides a substantial amount of training data. Therefore, as suggested in (Aliannejadi et al., 2021b), the simplest baseline to determine whether an instruction requires a clarifying question would be fine-tuning LLMs such as BERT (Devlin et al., 2018). This is followed by a classification layer to predict whether the instructions are clear. This approach has shown promising performance on similar classification tasks (Aliannejadi et al., 2021b;Arabzadeh et al., 2022) demonstrated in Tab. 4." }, { "figure_ref": [], "heading": "Text-Grid Cross Modularity", "publication_ref": [ "b59", "b31" ], "table_ref": [], "text": "This baseline (Shi et al., 2023), which has shown improved performance compared to the simple LLM fine-tuning approach, consists of the following four major components: (i) Utterance Encoder, where Architect and Builder annotations would be added before each architect utterance A t and each builder utterance B t , respectively. Then, the dialogue utterances are represented as D t = \"architect A t ⊕ \"builder B t at the turn t, where ⊕ is the operation of sequence concatenation. The dialogue is encoded through pre-trained language models such as BERT. (ii) World state encoderwhich aims to represent the pre-built structure using a voxelbased grid. Each grid state is encoded as a 7-dimensional one-hot vector, representing either an empty space or a block of one of six colors. This encoding results in a 7 × 11 × 9 × 11 representation of the world state. The structure of the World State Encoder is similar to the approach presented in (Jayannavar et al., 2020). It comprises k 3Dconvolutional layers followed by a Rectified Linear Unit (ReLU) activation function. This configuration allows the encoder to extract meaningful features from the voxel-based grid representation of the world state. By applying convolutional layers and non-linear activation, the World State Encoder captures spatial dependencies and abstract representations of the pre-built structure. (iii) Fusion module which consists of three submodules: one Single-Modality and two Cross-Modality. The former modules are based on self-attention layers and the latter on cross-attention layers. These take as input the world state representation and dialogue history representation. Between every successive pair of grid single-modality modules or text single-modality modules, there is a cross-modality module. (iv) Slot Decoder, this component contains one linear projection to obtain a scalar value for the final binary classification through the Sigmoid function." }, { "figure_ref": [], "heading": "Textual Grid world State", "publication_ref": [], "table_ref": [], "text": "This baseline focuses on mapping the GridWorld state to a textual context, which is then added as a prefix to the verbalizations of the Architect-Agent. This approach has demonstrated its effectiveness in the classification task by utilizing an automated description of the number of blocks per color in the pre-built structures. This additional information conveyed through the textual description, has proven to be valuable for the classification task.\nFor instance, a voxel world can be automatically converted into a textual description like: \"There are 4 levels. There are 15 different blocks. At level 0, there are 3 green blocks. Above the 1st level, there are 2 purple, 2 yellow, and 1 green block. Above at level 2, there are 3 green blocks. Above the 3rd level, there are 2 yellow and 2 green blocks.\" This description provides important contextual information about the voxel world and contributes to the improved performance of the simple LLM finetuning baseline. Overall, the inclusion of a textual description of the voxel world has enhanced the simple LLM fine-tuning baseline by 4% in terms of performance (Tab. 4. This approach showcases the importance of incorporating relevant contextual in- formation to enhance the understanding and classification of language-guided collaborative building tasks." }, { "figure_ref": [], "heading": "What: Clarifying Question Retrieval", "publication_ref": [ "b52" ], "table_ref": [], "text": "We formulate it as ranking a pool of clarifying questions based on their relevance to ambiguous instructions to place the most pertinent clarifying questions at the top of the ranked list. Given that the relevance judgments for this task are sparse. Namely, only one clarifying question per ambiguous instruction is annotated. We evaluate the task using the Mean Reciprocal Rank (MRR) at cut-off 20. This evaluation approach is consistent with wellknown benchmarks like MS MARCO (Nguyen et al., 2016).\nTab. 5 presents the performance of BM25 followed by the two introduced baselines, measured using the MRR@20." }, { "figure_ref": [], "heading": "Baseline 1", "publication_ref": [], "table_ref": [], "text": "Text Representation: A frozen DeBERTa-v3-base model has demonstrated promising performance for ranking tasks. The instructions are encoded in this baseline, followed by a separator and a question. The last four layers of DeBERTa are concatenated and passed through a two-layer BiLSTM to acquire a text representation.\nWorld Representation: A world representation is utilized to create a 3D grid. This is subsequently passed through a 1D convolutional network to simplify the height dimension (y), and then the resulting vector is passed through a 2D convolutional network to reduce the width/length (x, z) dimensions. The underlying assumption is that height occupies a different semantic space from the interchangeable x, z dimensions. For example, an instruction might include references to a \"tower\" or \"column,\" which would be a stack of objects in the y direction, while a \"wall\" could extend in the x or z direction. Ultimately, the size of the 3D grid is reduced by an AvgPooling layer to a 1D vector.\nSubsequently, the encoded text representation and the world representation are concatenated, and the vector is passed through a two-layer MLP to obtain the final representation. The model is trained using a CrossEntropy loss function over 10 folds cross-validation. At inference time, the ensemble predictions of the 10 models are used for the final predictions.\nIn addition, it has been revealed that certain straightforward post-processing tricks can enhance performance. These post-processing methods rely on certain assumptions about the content of questions given a world and an instruction. For example, the size of the ranking pool could be reduced by excluding questions that don't overlap with the given instructions. If the instruction doesn't mention a color like \"blue,\" and \"blue\" is also absent in the world, it can be assumed that the question won't reference the word \"blue.\" While these heuristic rules may seem somewhat aggressive, they have proven useful in excluding additional questions irrelevant to the instruction." }, { "figure_ref": [], "heading": "Baseline 2", "publication_ref": [ "b30", "b56", "b33", "b72", "b66", "b28", "b37", "b51", "b58", "b75", "b25", "b23" ], "table_ref": [], "text": "To comprehend the concept of relevance, the approach of aligning queries and relevant items closely in embedding space while distancing queries from irrelevant items in the same space has proven to be effective (Izacard et al., 2021;Reimers and Gurevych, 2019;Karpukhin et al., 2020;Zhan et al., 2021). Similarly, in this baseline, each positive question is paired with sampled irrelevant negative questions drawn from the candidate questions. The similarity between the instruction and the question is then measured using a BERTlike pre-trained language model.\nTo include information from the world state and pre-built structure, it is recommended to encode state information, such as the colors and numbers of initialized blocks, in the form of natural language and then concatenate this with the instruction. It has been demonstrated that clarifying questions about the same instruction can differ based on the world states. Therefore, to avoid redundant state information and improve the model's robustness and generalization, randomly selecting only one color type of block as the state information has proven helpful and has increased the model's generalizability. The state information and raw instruction are then concatenated and labeled with the keywords \"state\" and \"instruction,\" respectively. For instance, the input could be: \"state: There are nine green blocks; instruction: \"put a green block on top of the yellow and the two blue ones.\"\nBefore moving on to the training phase, and to balance the data distribution through augmentation, Easy Data Augmentation (EDA) has been shown to be effective (Wei and Zou, 2019). EDA primarily expands the dataset by four operations: synonym replacement, random insertion, random swap, and random deletion, according to a pre-defined ratio. Moreover, taking inspiration from DAPT (Gururangan et al., 2020), datasets such as (Kiseleva et al., 2022b;Narayan-Chen et al., 2019;Shi et al., 2022;Zholus et al., 2022) are used for performing domain-adaptive fine-tuning. To prevent overfitting, the Fast Gradient Method (FGM) is proposed, inspired by adversarial training, to mitigate the overfitting problem (Goodfellow et al., 2014). Finally, taking cues from (Gao et al., 2021), the list-wise loss is used to train the model." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [], "table_ref": [], "text": "In conclusion, our paper addresses the important capability of human intelligence to swiftly adapt to new tasks and multi-modal environments through imitating others and following natural language instructions. We have explored the field of interactive grounded language understanding, aiming to enable seamless human-centered AI collaboration in solving tasks. Specifically, our investigation focuses on the production of clarifying questions in the context of human-AI instruction-based interaction using the Minecraft environment. By tackling this challenge, we contribute to the development of embodied agents that can effectively understand and respond to human language instructions, as well as ask relevant clarifying questions when necessary. Our work emphasizes the importance of bridging the gap between human communication and AI systems, with the ultimate goal of enhancing user experience and achieving more user-friendly human-AI interactions. One significant obstacle hindering progress in this field has been the scarcity of appropriate datasets and scalable data collection tools. To address this challenge, we have developed a crowdsourcing tool specifically designed for collecting interactive grounded language instructions within a Minecraft-like environment at a large scale. Additionally, we have created the largest available dataset of human-to-human grounded language instructions, accompanied by clarifying questions. This dataset serves as a valuable resource for various research directions. Furthermore, we have established baselines for predicting clarifying ques-tions, providing a benchmark for evaluating the performance of future models and algorithms in this domain.\nOur contributions lay a solid foundation for further advancements in grounded language understanding research and open up new avenues for exploration and innovation in the field. We believe that our work will inspire and empower researchers to delve deeper into the realm of human-AI interactions, ultimately leading to more effective and seamless collaboration between humans and intelligent embodied agents." } ]
Human intelligence's adaptability is remarkable, allowing us to adjust to new tasks and multi-modal environments swiftly. This skill is evident from a young age as we acquire new abilities and solve problems by imitating others or following natural language instructions. The research community is actively pursuing the development of interactive "embodied agents" that can engage in natural conversations with humans and assist them with realworld tasks. These agents must possess the ability to promptly request feedback in case communication breaks down or instructions are unclear. Additionally, they must demonstrate proficiency in learning new vocabulary specific to a given domain. In this paper, we made the following contributions: (i) a crowd-sourcing tool for collecting grounded language instructions; (ii) the largest dataset of grounded language instructions; and (iii) several state-of-the-art baselines. These contributions are suitable as a foundation for further research.
Transforming Human-Centered AI Collaboration: Redefining Embodied Agents Capabilities through Interactive Grounded Language Instructions
[ { "figure_caption": "Figure 1 :1Figure 1: Human-Centered AI Collaboration", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The architecture of the developed data crowdsourcing collection tool", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The example of the game from the multi-turn dataset, where Architect can see the target structure and needs to provide instructions for the Builder(Mehta et al., 2023) ", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Statistics of Multi-Turn Dataset", "figure_data": "Target Structures31Completed Games47Median Duration of Completed Games59 minsUtterances871Avg. Length of Instructions 19.32 wordsClarifying Questions126", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Statistics of Single-Turn Dataset", "figure_data": "InstructionsAvg. Length (in words)Number8136 Instructions18.29Clear7080 Clarifying Questions 12.05Ambiguous 1056", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Examples of pairs of unclear instructions and clarifying questions", "figure_data": "Unclear InstructionClarifying QuestionPlace four blocks to the east of the highest block, horizontally.Which color blocks?", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of the baselines on 'When': Clarification Need Prediction task", "figure_data": "BaselineF-1 scoreFine-tuned BERT (Sec. 5.1.1)0.732Text-Grid Cross Modularity (Sec. 5.1.2) 0.757Textual Grid world State (Sec. 5.1.3)0.761", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results of the baselines on 'What': Clarification Need Prediction task", "figure_data": "BaselineMRR@20BM250.3410Baseline 1 (Sec. 5.2.1) 0.5360Baseline 2 (Sec. 5.2.2) 0.5960", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" } ]
Shrestha Mohanty; Negar Arabzadeh; Julia Kiseleva; Artem Zholus; Milagro Teruel; Ahmed Awadallah; Yuxuan Sun; Kavya Srinet; Arthur Szlam
[ { "authors": "Josh Abramson; Arun Ahuja; Iain Barr; Arthur Brussee; Federico Carnevale; Mary Cassin; Rachita Chhaparia; Stephen Clark; Bogdan Damoc; Andrew Dudzik", "journal": "", "ref_id": "b0", "title": "Imitating interactive intelligence", "year": "2020" }, { "authors": "Daniel Adiwardana; Minh-Thang Luong; David R So; Jamie Hall; Noah Fiedel; Romal Thoppilan; Zi Yang; Apoorv Kulshreshtha; Gaurav Nemade; Yifeng Lu", "journal": "", "ref_id": "b1", "title": "Towards a human-like open-domain chatbot", "year": "2020" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeff Dalton; Mikhail Burtsev", "journal": "", "ref_id": "b2", "title": "Convai3: Generating clarifying questions for opendomain dialogue systems (clariq)", "year": "2020" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeff Dalton; Mikhail Burtsev", "journal": "", "ref_id": "b3", "title": "Convai3: Generating clarifying questions for opendomain dialogue systems (clariq)", "year": "2020" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeff Dalton; Mikhail Burtsev", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "a. Building and evaluating open-domain dialogue corpora with clarifying questions", "year": "2021" }, { "authors": "Mohammad Aliannejadi; Julia Kiseleva; Aleksandr Chuklin; Jeffrey Dalton; Mikhail Burtsev", "journal": "", "ref_id": "b5", "title": "Building and evaluating open-domain dialogue corpora with clarifying questions", "year": "2021" }, { "authors": "Mohammad Aliannejadi; Hamed Zamani; Fabio Crestani; Bruce Croft", "journal": "", "ref_id": "b6", "title": "Asking clarifying questions in open-domain information-seeking conversations", "year": "2019" }, { "authors": "Negar Arabzadeh; Mahsa Seifikar; Charles La Clarke", "journal": "", "ref_id": "b7", "title": "Unsupervised question clarity prediction through retrieved item coherency", "year": "2022" }, { "authors": "Cristian-Paul Bara; Ch- Sky; Joyce Wang; Chai", "journal": "", "ref_id": "b8", "title": "Mindcraft: Theory of mind modeling for situated dialogue in collaborative tasks", "year": "2021" }, { "authors": "Jonathan Berant; Andrew Chou; Roy Frostig; Percy Liang", "journal": "", "ref_id": "b9", "title": "Semantic parsing on freebase from question-answer pairs", "year": "2013" }, { "authors": "Yonatan Bisk; Deniz Yuret; Daniel Marcu", "journal": "", "ref_id": "b10", "title": "Natural language communication with robots", "year": "2016" }, { "authors": "B Tom; Benjamin Brown; Nick Mann; Melanie Ryder; Jared Subbiah; Prafulla Kaplan; Arvind Dhariwal; Pranav Neelakantan; Girish Shyam; Amanda Sastry; Sandhini Askell; Ariel Agarwal; Gretchen Herbert-Voss; Tom Krueger; Rewon Henighan; Aditya Child; Daniel M Ramesh; Jeffrey Ziegler; Clemens Wu; Christopher Winter; Mark Hesse; Eric Chen; Mateusz Sigler; Scott Litwin; Benjamin Gray; Jack Chess; Christopher Clark; Sam Berner; Alec Mc-Candlish; Ilya Radford; Dario Sutskever; Amodei", "journal": "", "ref_id": "b11", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Mikhail Burtsev; Aleksandr Chuklin; Julia Kiseleva; Alexey Borisov", "journal": "", "ref_id": "b12", "title": "Search-oriented conversational ai (scai)", "year": "2017" }, { "authors": "Thomas Carta; Clément Romac; Thomas Wolf; Sylvain Lamprier; Olivier Sigaud; Pierre-Yves Oudeyer", "journal": "", "ref_id": "b13", "title": "Grounding large language models in interactive environments with online reinforcement learning", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b14", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b15", "title": "Electra: Pre-training text encoders as discriminators rather than generators", "year": "2020" }, { "authors": " Edgar F Codd", "journal": "", "ref_id": "b16", "title": "Seven steps to rendezvous with the casual user", "year": "1974" }, { "authors": "Ann Copestake; Karen Sparck Jones", "journal": "", "ref_id": "b17", "title": "Natural language interfaces to databases", "year": "1990" }, { "authors": "Aditya Desai; Sumit Gulwani; Vineet Hingorani; Nidhi Jain; Amey Karkare; Mark Marron; Subhajit Roy", "journal": "ACM", "ref_id": "b18", "title": "Program synthesis using natural language", "year": "2016" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b19", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Emily Dinan; Varvara Logacheva; Valentin Malykh; Alexander Miller; Kurt Shuster; Jack Urbanek; Douwe Kiela; Arthur Szlam; Iulian Serban; Ryan Lowe", "journal": "Springer", "ref_id": "b20", "title": "The second conversational intelligence challenge (convai2)", "year": "2020" }, { "authors": "Ahmed Elgohary; Saghar Hosseini; Ahmed Hassan; Awadallah ", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "Speak to your parser: Interactive text-to-SQL with natural language feedback", "year": "2020" }, { "authors": "Ethan Fast; Binbin Chen; Julia Mendelsohn; Jonathan Bassen; Michael S Bernstein", "journal": "ACM", "ref_id": "b22", "title": "Iris: A conversational agent for complex tasks", "year": "2018" }, { "authors": "Luyu Gao; Zhuyun Dai; Jamie Callan", "journal": "", "ref_id": "b23", "title": "Rethink training of BERT rerankers in multi-stage retrieval pipeline", "year": "2021" }, { "authors": "Kevin A Gluck; John E Laird", "journal": "The MIT Press", "ref_id": "b24", "title": "Interactive task learning: Humans, robots, and agents acquiring new tasks through natural interactions", "year": "2018" }, { "authors": "Ian J Goodfellow; Jonathon Shlens; Christian Szegedy", "journal": "", "ref_id": "b25", "title": "Explaining and harnessing adversarial examples", "year": "2014" }, { "authors": "Jonathan Gray; Kavya Srinet; Yacine Jernite; Haonan Yu; Zhuoyuan Chen; Demi Guo; Siddharth Goyal; C Lawrence Zitnick; Arthur Szlam", "journal": "", "ref_id": "b26", "title": "Craftassist: A framework for dialogue-enabled interactive agents", "year": "2019" }, { "authors": "Jonathan Gray; Kavya Srinet; Yacine Jernite; Haonan Yu; Zhuoyuan Chen; Demi Guo; Siddharth Goyal; C Lawrence Zitnick; Arthur Szlam", "journal": "", "ref_id": "b27", "title": "CraftAssist: A Framework for Dialogue-enabled Interactive Agents", "year": "2019" }, { "authors": "Suchin Gururangan; Ana Marasović; Swabha Swayamdipta; Kyle Lo; Iz Beltagy; Doug Downey; Noah A Smith", "journal": "", "ref_id": "b28", "title": "Don't stop pretraining: Adapt language models to domains and tasks", "year": "2020" }, { "authors": "G Gary; Earl D Hendrix; Daniel Sacerdoti; Jonathan Sagalowicz; Slocum", "journal": "ACM Transactions on Database Systems (TODS)", "ref_id": "b29", "title": "Developing a natural language interface to complex data", "year": "1978" }, { "authors": "Gautier Izacard; Mathilde Caron; Lucas Hosseini; Sebastian Riedel; Piotr Bojanowski; Armand Joulin; Edouard Grave", "journal": "", "ref_id": "b30", "title": "Towards unsupervised dense information retrieval with contrastive learning", "year": "2021" }, { "authors": "Prashant Jayannavar; Anjali Narayan-Chen; Julia Hockenmaier", "journal": "", "ref_id": "b31", "title": "Learning to execute instructions in a minecraft dialogue", "year": "2020" }, { "authors": "Matthew Johnson; Katja Hofmann; Tim Hutton; David Bignell", "journal": "", "ref_id": "b32", "title": "The malmo platform for artificial intelligence experimentation", "year": "2016" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b33", "title": "Dense passage retrieval for open-domain question answering", "year": "2020" }, { "authors": "Julia Kiseleva; Ziming Li; Mohammad Aliannejadi; Shrestha Mohanty; Mikhail Maartje Ter Hoeve; Alexey Burtsev; Artem Skrynnik; Aleksandr Zholus; Kavya Panov; Arthur Srinet; Yuxuan Szlam; Katja Sun; Marc-Alexandre Hofmann; Ahmed Côté; Linar Awadallah; Igor Abdrazakov; Putra Churin; Kata Manggala; Naszadi; Taewoon Michiel Van Der Meer; Kim", "journal": "", "ref_id": "b34", "title": "Interactive grounded language understanding in a collaborative environment: Iglu 2021", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b35", "title": "", "year": "" }, { "authors": "Julia Kiseleva; Ziming Li; Mohammad Aliannejadi; Shrestha Mohanty; Mikhail Maartje Ter Hoeve; Alexey Burtsev; Artem Skrynnik; Aleksandr Zholus; Kavya Panov; Srinet", "journal": "", "ref_id": "b36", "title": "Neurips 2021 competition iglu: Interactive grounded language understanding in a collaborative environment", "year": "2021" }, { "authors": "Julia Kiseleva; Alexey Skrynnik; Artem Zholus; Shrestha Mohanty; Negar Arabzadeh; Marc-Alexandre Côté; Mohammad Aliannejadi; Milagro Teruel; Ziming Li; Mikhail Burtsev", "journal": "", "ref_id": "b37", "title": "Iglu 2022: Interactive grounded language understanding in a collaborative environment at neurips", "year": "2022" }, { "authors": "Julia Kiseleva; Kyle Williams; Ahmed Hassan Awadallah; Aidan C Crook; Imed Zitouni; Tasos Anastasakos", "journal": "", "ref_id": "b38", "title": "Predicting user satisfaction with intelligent assistants", "year": "2016" }, { "authors": "Julia Kiseleva; Kyle Williams; Jiepu Jiang; Ahmed Hassan Awadallah; Aidan C Crook; Imed Zitouni; Tasos Anastasakos", "journal": "", "ref_id": "b39", "title": "Understanding user satisfaction with intelligent assistants", "year": "2016" }, { "authors": "Arne Köhn; Julia Wichlacz; Christine Schäfer; Alvaro Torralba; Jörg Hoffmann; Alexander Koller", "journal": "", "ref_id": "b40", "title": "Mc-saar-instruct: a platform for minecraft instruction giving agents", "year": "2020" }, { "authors": "Toby Jia-Jun Li; Tom Mitchell; Brad Myers; ; ", "journal": "", "ref_id": "b41", "title": "Interactive task learning from GUI-grounded natural language instructions and demonstrations", "year": "2020" }, { "authors": "Ziming Li; Julia Kiseleva; Maarten De; Rijke ", "journal": "", "ref_id": "b42", "title": "Dialogue generation: From imitation learning to inverse reinforcement learning", "year": "2019" }, { "authors": "Ziming Li; Julia Kiseleva; Maarten De Rijke", "journal": "Association for Computational Linguistics", "ref_id": "b43", "title": "Rethinking supervised learning and reinforcement learning in task-oriented dialogue systems", "year": "2020" }, { "authors": "Ziming Li; Julia Kiseleva; Maarten De Rijke", "journal": "", "ref_id": "b44", "title": "Improving response quality with backward reasoning in open-domain dialogue systems", "year": "2021" }, { "authors": "Ziming Li; Sungjin Lee; Baolin Peng; Jinchao Li; Julia Kiseleva; Maarten De Rijke; Shahin Shayandeh; Jianfeng Gao", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Guided dialogue policy learning without adversarial learning in the loop", "year": "2020" }, { "authors": "Bing Liu; Ian Lane", "journal": "IEEE", "ref_id": "b46", "title": "Iterative policy learning in end-to-end trainable task-oriented neural dialog models", "year": "2017" }, { "authors": "Bing Liu; Ian Lane", "journal": "", "ref_id": "b47", "title": "Adversarial learning of task-oriented neural dialog models", "year": "2018" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b48", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Nikhil Mehta; Milagro Teruel; Patricio Figueroa Sanz; Xin Deng; Ahmed Hassan Awadallah; Julia Kiseleva", "journal": "", "ref_id": "b49", "title": "Improving grounded language understanding in a collaborative environment by interacting with agents through help feedback", "year": "2023" }, { "authors": "Shrestha Mohanty; Negar Arabzadeh; Milagro Teruel; Yuxuan Sun; Artem Zholus; Alexey Skrynnik; Mikhail Burtsev; Kavya Srinet; Aleksandr Panov; Arthur Szlam; Marc-Alexandre Côté; Julia Kiseleva", "journal": "", "ref_id": "b50", "title": "Collecting interactive multimodal datasets for grounded language understanding", "year": "2022" }, { "authors": "Anjali Narayan-Chen; Prashant Jayannavar; Julia Hockenmaier", "journal": "Association for Computational Linguistics", "ref_id": "b51", "title": "Collaborative dialogue in Minecraft", "year": "2019" }, { "authors": "Tri Nguyen; Mir Rosenberg; Xia Song; Jianfeng Gao; Saurabh Tiwary; Rangan Majumder; Li Deng", "journal": "choice", "ref_id": "b52", "title": "Ms marco: A human generated machine reading comprehension dataset", "year": "2016" }, { "authors": "Haruna Ogawa; Hitoshi Nishikawa; Takenobu Tokunaga; Hikaru Yokono", "journal": "European Language Resources Association. OpenAI", "ref_id": "b53", "title": "Gamification platform for collecting task-oriented dialogue data", "year": "2020" }, { "authors": "Sung Joon; Park; C O' Joseph; Carrie J Brien; Meredith Ringel Cai; Percy Morris; Michael S Liang; Bernstein", "journal": "", "ref_id": "b54", "title": "Generative agents: Interactive simulacra of human behavior", "year": "2023" }, { "authors": "Ofir Press; Muru Zhang; Sewon Min; Ludwig Schmidt; Noah A Smith; Mike Lewis", "journal": "", "ref_id": "b55", "title": "Measuring and narrowing the compositionality gap in language models", "year": "2022" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "", "ref_id": "b56", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "year": "2019" }, { "authors": "Stephen Roller; Emily Dinan; Naman Goyal; Da Ju; Mary Williamson; Yinhan Liu; Jing Xu; Myle Ott; Kurt Shuster; Eric M Smith", "journal": "", "ref_id": "b57", "title": "Recipes for building an open-domain chatbot", "year": "2020" }, { "authors": "Zhengxiang Shi; Yue Feng; Aldo Lipani", "journal": "", "ref_id": "b58", "title": "Learning to execute or ask clarification questions", "year": "2022" }, { "authors": "Zhengxiang Shi; Jerome Ramos; Eun To; Xi Kim; Hossein A Wang; Aldo Rahmani; Lipani", "journal": "", "ref_id": "b59", "title": "When and what to ask through world states and text instructions: Iglu nlp challenge solution", "year": "2023" }, { "authors": "Mohit Shridhar; Jesse Thomason; Daniel Gordon; Yonatan Bisk; Winson Han; Roozbeh Mottaghi; Luke Zettlemoyer; Dieter Fox", "journal": "", "ref_id": "b60", "title": "Alfred: A benchmark for interpreting grounded instructions for everyday tasks", "year": "2020" }, { "authors": "Alexey Skrynnik; Zoya Volovikova; Marc-Alexandre Côté; Anton Voronov; Artem Zholus; Negar Arabzadeh; Shrestha Mohanty; Milagro Teruel; Ahmed Awadallah; Aleksandr Panov; Mikhail Burtsev; Julia Kiseleva", "journal": "", "ref_id": "b61", "title": "Learning to solve voxel building embodied tasks from pixels and natural language instructions", "year": "2022" }, { "authors": "Kavya Srinet; Yacine Jernite; Jonathan Gray; Arthur Szlam", "journal": "Association for Computational Linguistics", "ref_id": "b62", "title": "CraftAssist instruction parsing: Semantic parsing for a voxel-world assistant", "year": "2020" }, { "authors": "Yu Su; Ahmed Hassan Awadallah; Madian Khabsa; Patrick Pantel; Michael Gamon; Mark Encarnacion", "journal": "ACM", "ref_id": "b63", "title": "Building natural language interfaces to web apis", "year": "2017" }, { "authors": "Arthur Szlam; Jonathan Gray; Kavya Srinet; Yacine Jernite; Armand Joulin; Gabriel Synnaeve; Douwe Kiela; Haonan Yu; Zhuoyuan Chen; Siddharth Goyal; Demi Guo; Danielle Rothermel; C Lawrence Zitnick; Jason Weston", "journal": "", "ref_id": "b64", "title": "Why Build an Assistant in Minecraft?", "year": "2019" }, { "authors": "Stefanie Tellex; Thomas Kollar; Steven Dickerson; Matthew R Walter; Ashis Gopal Banerjee; Seth Teller; Nicholas Roy", "journal": "", "ref_id": "b65", "title": "Understanding natural language commands for robotic navigation and mobile manipulation", "year": "2011" }, { "authors": "Jason W Wei; Kai Zou", "journal": "", "ref_id": "b66", "title": "EDA: easy data augmentation techniques for boosting performance on text classification tasks", "year": "2019" }, { "authors": "Terry Winograd", "journal": "Cognitive psychology", "ref_id": "b67", "title": "Understanding natural language", "year": "1972" }, { "authors": "W A Woods; Ronald M Kaplan; Bonnie L ", "journal": "", "ref_id": "b68", "title": "The lunar sciences natural language information system: Final report", "year": "1972" }, { "authors": "Ziyu Yao; Yu Su; Huan Sun; Wen-Tau Yih", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "Model-based interactive semantic parsing: A unified framework and a text-to-SQL case study", "year": "2019" }, { "authors": "Ziyu Yao; Yiqi Tang; Wen-Tau Yih; Huan Sun; Yu Su", "journal": "", "ref_id": "b70", "title": "An imitation game for learning semantic parsers from user interaction", "year": "2020" }, { "authors": "Steve Young; Milica Gašić; Blaise Thomson; Jason D Williams", "journal": "", "ref_id": "b71", "title": "Pomdp-based statistical spoken dialog systems: A review", "year": "2013" }, { "authors": "Jingtao Zhan; Jiaxin Mao; Yiqun Liu; Jiafeng Guo; Min Zhang; Shaoping Ma", "journal": "", "ref_id": "b72", "title": "Optimizing dense retrieval model training with hard negatives", "year": "2021" }, { "authors": "Yi Zhang; Sujay Kumar Jauhar; Julia Kiseleva; Ryen White; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b73", "title": "Learning to decompose and organize complex tasks", "year": "2021" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; Bill Dolan", "journal": "", "ref_id": "b74", "title": "Dialogpt: Large-scale generative pre-training for conversational response generation", "year": "2019" }, { "authors": "Artem Zholus; Alexey Skrynnik; Shrestha Mohanty; Zoya Volovikova; Julia Kiseleva; Artur Szlam; Marc-Alexandre Coté; Aleksandr I Panov", "journal": "", "ref_id": "b75", "title": "Iglu gridworld: Simple and fast environment for embodied dialog agents", "year": "2022" } ]
[]
10.18653/v1/s15-2045
2023-10-23
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b9", "b14", "b8", "b18", "b16", "b13", "b12", "b12", "b24", "b13", "b20", "b18", "b11", "b18" ], "table_ref": [], "text": "Pre-trained language models (PLM) such as BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and ELECTRA (Clark et al., 2020) have achieved great success in a wide variety of natural language processing tasks. However, Reimers and Gurevych (2019) finds that sentence embeddings from the original BERT underperform traditional methods such as GloVe (Pennington et al., 2014). Typically, an input sentence is first embedded by the BERT embedding layer, which consists of token embeddings, segment embeddings, and position embeddings. The output is then encoded by the Transformer encoder and the hidden states at the last layer are averaged to obtain the sentence embeddings. Prior studies identify the anisotropy problem as a critical factor that harms BERT-based sentence embeddings, as sentence embeddings from the original BERT yield a high similarity between any sentence pair due to the narrow cone of learned embeddings (Li et al., 2020).\nPrior approaches for improving sentence embeddings from PLMs fall into three categories. The first category of approaches does not require any learning (that is, learning-free). Jiang et al. (2022) argues that the anisotropy problem may be mainly due to the static token embedding bias, such as token frequency and case sensitivity. To address these biases, they propose the static remove biases avg. method which removes top-frequency tokens, subword tokens, uppercase tokens, and punctuations, and uses the average of the remaining token embeddings as sentence representation. However, this approach does not use the contextualized representations of BERT and may not be effective for short sentences as it may exclude informative words. The prompt-based method (last manual prompt) (Jiang et al., 2022) uses a template to generate sentence embeddings. An example template is This sentence: \"[X]\" means [MASK] ., where [X] denotes the original sentence and the last hidden states in the [MASK] position are taken as sentence embeddings. However, this method has several drawbacks. (1) It increases the input lengths, which raises the computation cost. (2) It relies on using the [MASK] token to obtain the sentence representation, hence unsuitable for PLMs not using [MASK] tokens (e.g., ELECTRA). (3) The performance heavily depends on the quality of manual prompts which relies on human expertise (alternatively, OptiPrompt (Zhong et al., 2021) requires additional unsupervised contrastive learning).\nThe second category of approaches fixes the parameters of the PLM and improves sentence embeddings through post-processing methods that require extra learning. BERT-flow (Li et al., 2020) addresses the anisotropy problem by introducing a flow-based generative model that transforms the BERT sentence embedding distribution into a smooth and isotropic Gaussian distribution. BERTwhitening (Su et al., 2021) uses a whitening operation to enhance the isotropy of sentence representations. Both BERT-flow and BERT-whitening require Natural Language Inference (NLI)/Semantic Textual Similarity (STS) datasets to train the flow network or estimate the mean values and covariance matrices as the whitening parameters.\nThe third category updates parameters of the PLM by fine-tuning or continually pre-training the PLM using supervised or unsupervised learning, which is computationally intensive. For example, Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) fine-tunes BERT using a siamese/triplet network on NLI and STS datasets. SimCSE (Gao et al., 2021) explores contrastive learning. Unsupervised SimCSE uses the same sentences with different dropouts as positives and other sentences as negatives, and supervised SimCSE explores NLI datasets and treats entailment pairs as positives and contradiction pairs as hard negatives.\nIn this work, we first analyze BERT sentence embeddings. (1) We use a parameter-free probing method to analyze BERT and Sentence-BERT (SBERT) (Reimers and Gurevych, 2019) and find that the compositionality of informative words is crucial for generating high-quality sentence embeddings as from SBERT. (2) Visualization of BERT attention weights reveals that certain selfattention heads in BERT are related to informative words, specifically self-attention from a word to itself (that is, the diagonal values of the attention matrix). Based on these findings, we propose a simple and efficient approach, Diagonal Attention Pooling (Ditto), to improve sentence embeddings from PLM without requiring any learning (that is, Ditto is a learning-free method). We find that Ditto improves various PLMs and strong sentence embedding methods on STS benchmarks." }, { "figure_ref": [ "fig_1" ], "heading": "Analyze BERT Sentence Embeddings", "publication_ref": [ "b22", "b22", "b18", "b23", "b19", "b7" ], "table_ref": [ "tab_4" ], "text": "Observation 1: The compositionality of informative words is crucial for high-quality sentence embeddings. Perturbed masking (Wu et al., 2020) is a parameter-free probing technique for analyzing PLMs (e.g., BERT). Given a sentence x = [x 1 , x 2 , . . . , x N ], perturbed masking applies a two-stage perturbation process on each pair of tokens (x i , x j ) to measure the impact that a token x j has on predicting the other token x i . Details of perturbed masking can be found in Ap- The heatmap shows the impact matrix for the sentence \"For those who follow social media transitions on Capitol Hill, this will be a little different.\". The impact matrices are computed using BERT (bert-baseuncased) and SBERT (bert-base-nli-stsb-mean-tokens) on Hugging Face, respectively. pendix A.1. Prior works use perturbed masking to recover syntactic trees from BERT (Wu et al., 2020). Different from prior works, we use perturbed masking to analyze the original BERT and a strong sentence embedding model, supervised Sentence-BERT (SBERT) (Reimers and Gurevych, 2019). Figure 1 shows the heatmap representing the impact matrix F for an example sentence in the English PUD treebank (Zeman et al., 2017). The y-axis represents x i and the x-axis represents x j . A higher value in F ij indicates that a word x j has a greater impact on predicting another word x i . Comparing the impact matrices of BERT and SBERT, we observe that the impact matrix of SBERT exhibits prominent vertical lines on informative tokens such as \"social media\", \"Capitol Hill\", and \"different\", which implies that informative tokens have a high impact on predicting other tokens, hence masking informative tokens could severely affect predictions of other tokens in the sentence. In contrast, BERT does not show this pattern. This observation implies that the compositionality of informative tokens could be a strong indicator of high-quality sentence embeddings of SBERT. Furthermore, we compute the correlations between the impact matrix and TF-IDF (Sparck Jones, 1972) which measures the importance of a word, and report results in Table 3. We find that the impact matrix of SBERT has a much higher correlation with TF-IDF than the impact matrix of BERT, which is consistent with the observation above. Notably, ELECTRA performs poorly on STS tasks and shows a weak correlation with TF-IDF. Consequently, we hypothesize that sentence embeddings of the original BERT and ELECTRA may be bi- ased towards uninformative words, hence limiting their performance on STS tasks.\nObservation 2: Certain self-attention heads of BERT correspond to word importance. Although SBERT has a higher correlation with TF-IDF than BERT as verified in Observation 1, BERT still shows a moderate correlation. Thus, we hypothesize that the semantic information of informative words is already encoded in BERT but has yet to be fully exploited. Prior research (Clark et al., 2019) analyzes the attention mechanisms of BERT by treating each attention head as a simple, no-training-required classifier that, given a word as input, outputs the other word that it most attends to. Certain attention heads are found to correspond well to linguistic notions of syntax and coreference.\nFor instance, heads that attend to the direct objects of verbs, determiners of nouns, objects of prepositions, and coreferent mentions are found to have remarkably high accuracy. We believe that the attention information in BERT needs to be further exploited. We denote a particular attention head by <layer>-<head number> (l-h), where for a BERT-base-sized model, layer ranges from 1 to 12 and head number ranges from 1 to 12. We visualize the attention weights of each head in each layer of BERT and focus on informative words. We then discover that self-attention from a word to itself (that is, the diagonal value of the attention matrix, named diagonal attention) of certain heads may be related to the importance of the word. As shown in Figure 2, the informative words \"social media transitions\", \"hill\", and \"little\" have high diagonal values of the attention matrix of head 1-10. Section 4 will demonstrate that diagonal attentions indeed have a strong correlation with TF-IDF weights." }, { "figure_ref": [], "heading": "Token&Position&Segment Embeddings", "publication_ref": [], "table_ref": [], "text": "Self-attention Feed Forward \"it will be fine\"\nit will be fine " }, { "figure_ref": [ "fig_2" ], "heading": "Diagonal Attention Pooling", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Inspired by the two observations in Section 2, we propose a novel learning-free method called Diagonal Attention Pooling (Ditto) to improve sentence embeddings for PLMs, illustrated in Figure 3. Taking BERT as an example, the input to the Transformer encoder is denoted as h 0 = [h 0 1 , . . . , h 0 N ], and the hidden states at each Transformer encoder layer are denoted as h l = [h l 1 , . . . , h l N ], l ∈ {1, . . . , L}. Typically, the hidden states of the last layer of the PLM are averaged to obtain the fixedsize sentence embeddings, as 1 N N i=1 h L i (denoted as last avg.). Alternatively, we can also average the static word embeddings 1 N N i=1 h 0 i (denoted as static avg.), or average the hidden states from the first and last layers 1 2N N i=1 (h 0 i + h L i ) (denoted as first-last) to obtain the sentence embeddings. Ditto weights the hidden states with diagonal attention of a certain head. For example, to obtain the sentence embeddings from the first-last hidden states of BERT using Ditto, we first obtain the diagonal values [A 11 , . . . , A N N ] of the attention matrix A for head l-h of BERT, where l and h are treated as hyperparameters and are optimized on a development set based on the STS performance. Then, we compute\n1 2 N i=1 A ii (h 0 i + h L i )\nas the sentence embeddings2 . Note that the impact matrix (Section 2) correlates well with TF-IDF (as shown in Table 3) and hence may also improve sentence embeddings. However, the learning-free Ditto is much more efficient than computing the impact matrix, which is computationally expensive." }, { "figure_ref": [], "heading": "Experiments and Analysis", "publication_ref": [ "b12", "b13", "b20", "b11" ], "table_ref": [], "text": "Following prior works (Jiang et al., 2022;Li et al., 2020;Su et al., 2021;Gao et al., 2021), we exper- iment on the 7 common STS datasets, the widely used benchmark for evaluating sentence embeddings. Appendix A.2 presents dataset and implementation details." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b12", "b12", "b12", "b7", "b11", "b11" ], "table_ref": [ "tab_2", "tab_2", "tab_2", "tab_3", "tab_5" ], "text": "The first group of Table 1 presents the results of the three common learning-free methods (Jiang et al., 2022) described in Section 3, including static avg., last avg., and first-last avg., and two learning-free baselines described in Section 1, static remove biases avg. and last manual prompt (Jiang et al., 2022). Although last manual prompt achieves 67.85, different templates can have a significant impact on its performance, ranging from 39.34 to 73.44 on STS-B dev set (Jiang et al., 2022). Applying Ditto to static avg., last avg., and first-last avg. achieves absolute gains of +5.75, +6.50, and +8.07 on the Avg. score, respectively. 3 The second group of Table 1 presents results from methods that fix BERT parameters but require 3 Since static remove biases avg. may remove important tokens and last manual prompt uses the last hidden states of [MASK] as sentence embeddings instead of average pooling, they are not suitable for applying Ditto. extra learning. Note that our learning-free BERT first-last Ditto achieves comparable performance to BERT-flow and BERT-whitening in this group. For further analyzing Ditto, we compute TF-IDF weights on 10 6 sentences randomly sampled from English Wikipedia as token importance weights and use the weighted average of the first-last hidden states as sentence embeddings, denoted as first-last TF-IDF (the 4th row in this group). First-last TF-IDF yields +8.75 absolute gain over the first-last avg. baseline, only slightly better than the +8.07 absolute gain from our learning-free Ditto (with l and h searched on only 1500 samples).\nThe third group of Table 1 presents results of strong baselines that update BERT parameters through unsupervised or supervised learning, including unsupervised SimCSE (Unsup. BERT SimCSE), supervised SimCSE (Sup. BERT Sim-CSE), and supervised SBERT. We find that applying Ditto on the highly competitive supervised learning method Sup. SBERT first-last avg. still achieves an absolute gain of 0.17 (84.94→85.11), demonstrating that Ditto could also improve strong supervised sentence embedding methods. Note that since SimCSE uses the [CLS] representation as sentence embeddings instead of average pooling, SimCSE is not suitable for applying Ditto. 2 compares the baselines first-last avg. and last manual prompt and our first-last Ditto method on different PLMs. Note that last manual prompt does not work for ELECTRA because this method relies on using the [MASK] token as the sentence embeddings while the ELECTRA discriminator is trained without [MASK] tokens. Ditto consistently works well on ELECTRA and greatly outperforms the two baselines on RoBERTa and ELECTRA, while underperforming last manual prompt on BERT. Correlation with TF-IDF To further analyze correlations between diagonal attentions and word importance, we select the 4 heads corresponding to the Top-4 Ditto performance based on Spearman's correlation on the STS-B development set, and compute correlations between diagonal values of the self-attention matrix of these heads and TF-IDF weights. Table 4 shows all Top-4 heads exhibit moderate or strong correlations with TF-IDF weights. We find high-performing heads are usually in the bottom layers (Section A.2), which is consistent with the findings in Clark et al. (2019) that the bottom layer heads broadly attend to the entire sentence. Uniformity and Alignment We use the analysis tool from prior works (Gao et al., 2021) to evaluate the quality of sentence embeddings by measuring the alignment between semantically related positive pairs and uniformity of the whole representation space. Gao et al. (2021) finds that sentence embedding models with better alignment and uniformity generally achieve better performance." }, { "figure_ref": [ "fig_3", "fig_3" ], "heading": "Effectiveness of Ditto on Different PLMs Table", "publication_ref": [ "b11", "b10", "b10" ], "table_ref": [ "tab_7" ], "text": "Figure 4 shows the uniformity and alignment of different sentence embedding models along with their averaged STS results. Lower values indicate better alignment and uniformity. We find that Ditto improves uniformity at the cost of alignment degradation for all PLMs, similar to the flow and whitening methods as reported in Gao et al. (2021). Compared to Ditto, flow and whitening methods achieve larger improvements in uniformity but also cause larger degradations in alignment.\nCosine Similarity We use the cosine similarity metric from Ethayarajh (2019) are directionally uniform, and the average cosine similarity between random samples should be close to zero. Ethayarajh (2019) originally applied this metric to word representations, and we adapt it to sentence representations in our study. We sample 1000 sentences from the English Wikipedia dataset and compute the average cosine similarity of their representations. Table 5 shows the results. Lower values indicate better isotropy. Our proposed Ditto method improves the isotropy of all three learningfree baselines: static avg., last avg., and first-last avg. This result is consistent with the uniformity analysis in Figure 4, where Ditto also enhances the uniformity of different sentence embedding models. " }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although our proposed simple and learning-free Ditto approach demonstrates effectiveness in allevi-ating the anisotropy problem and improving strong sentence embedding baselines, there are several limitations. Firstly, we conduct our experiments solely on the English sentence embedding models and the English Semantic Textual Similarity (STS) datasets. We hypothesize that the two observations in Section 2 will hold true on pre-trained models for other languages, hence we predict that Ditto, which is based on the two observations, will be effective in improving sentence embeddings for languages other than English. We plan to investigate the efficacy of Ditto on improving sentence embeddings for other languages in future work.\nSecondly, while we select the attention head (that is, determining l and h) by conducting a grid search of all attention heads based on the performance of the STS development set, we will explore other approaches for selecting attention heads for Ditto in future studies. Lastly, we focus on using Semantic Textual Similarity tasks for evaluating sentence embeddings in this work. We plan to investigate the quality of sentence embeddings in more tasks, such as information retrieval." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "A.1 Details of Perturbed Masking\nIn the first stage, we replace x i with the [MASK] token, resulting in a new sequence x\\{x i }. The representation of this sequence is denoted as H(x\\{x i }) i . In the second stage, we mask out x j in addition to x i to obtain the second corrupted sequence x\\{x i , x j }. The representation of this sequence is denoted as H(x\\{x i , x j }) i . Thus we obtain an impact matrix F ∈ R N ×N by computing the Euclidean distance between the two representations\nF ij = d(H(x\\{x i }) i , H(x\\{x i , x j }) i )." }, { "figure_ref": [], "heading": "A.2 Dataset and Implementation Details", "publication_ref": [ "b3", "b4", "b1", "b0", "b2", "b15", "b18", "b21" ], "table_ref": [], "text": "We conduct experiments on 7 common STS datasets, namely, STS tasks 2012-2016 (Agirre et al., 2012(Agirre et al., , 2013(Agirre et al., , 2014(Agirre et al., , 2015(Agirre et al., , 2016)), STS-B (Cer et al., 2017), and SICK-R (Marelli et al., 2014), following prior works. These 7 STS datasets are widely used benchmarks for evaluating sentence embeddings. Each dataset consists of sentence pairs scored from 0 to 5 to indicate the semantic similarity. For evaluation, we follow the setting of Reimers and Gurevych (2019) and report the average Spearman's correlation on the test sets of all 7 STS tasks (that is, the \"all\" setting), without using an additional regressor. Our implementation is based on the SimCSE GitHub repository4 and we modify it to fit our purposes. We conduct a grid search of the attention head l-h for Ditto based on Spearman's correlation on the STS-B development set (1500 samples). In this way, we select head 1-10 for BERT5 , head 1-5 for RoBERTa6 , head 1-11 for ELECTRA7 , and head 3-7 for SBERT8 . The TF-IDF weights are learned on 10 6 sentences randomly sampled from English Wikipedia9 using the gensim tool10 . We also utilize the English Wikipedia dataset and randomly sampled 1000 sentences to calculate the average cosine similarity of sentence representations. We conduct experiments using a single Tesla V100 GPU. Note that BERTflow and BERT-whitening papers use the full target dataset (including all sentences in the train, development, and test sets, and excluding all labels) and optionally the NLI corpus (SNLI (Bowman et al., 2015) and MNLI corpus (Williams et al., 2018)) for training. " }, { "figure_ref": [], "heading": "", "publication_ref": [ "b11" ], "table_ref": [], "text": "Methods that update BERT parameters (supervised or unsupervised learning)\nUnsup. BERT SimCSE (Gao et al., 2021) 68 " } ]
Prior studies diagnose the anisotropy problem in sentence representations from pre-trained language models, e.g., BERT, without finetuning. Our analysis reveals that the sentence embeddings from BERT suffer from a bias towards uninformative words, limiting the performance in semantic textual similarity (STS) tasks. To address this bias, we propose a simple and efficient unsupervised approach, Diagonal Attention Pooling (Ditto), which weights words with model-based importance estimations and computes the weighted average of word representations from pre-trained models as sentence embeddings. Ditto can be easily applied to any pre-trained language model as a postprocessing operation. Compared to prior sentence embedding approaches, Ditto does not add parameters nor requires any learning. Empirical evaluations demonstrate that our proposed Ditto can alleviate the anisotropy problem and improve various pre-trained models on the STS benchmarks.
Ditto: A Simple and Efficient Approach to Improve Sentence Embeddings
[ { "figure_caption": "Figure1: The heatmap shows the impact matrix for the sentence \"For those who follow social media transitions on Capitol Hill, this will be a little different.\". The impact matrices are computed using BERT (bert-baseuncased) and SBERT (bert-base-nli-stsb-mean-tokens) on Hugging Face, respectively.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of attention weights for BERT head 1-10 (on the left) and head 11-11 (on the right). The darkness of a line represents the value of the attention weights. The top-5 diagonal values of the attention matrix are colored blue.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The diagram of our proposed diagonal attention pooling (Ditto) method.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Alignment-uniformity plot of baselines and Ditto on different PLMs. The arrow indicates the changes. For both alignment and uniformity, smaller numbers are better.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Performance of different sentence embedding methods on STS tasks (as Average Spearman's correlation). Table6in the Appendix reports detailed results.", "figure_data": "MethodAvg.Learning-free methodsBERT static avg.56.02BERT last avg.52.57BERT first-last avg.56.70BERT static remove biases avg.63.10BERT last manual prompt67.85BERT static Ditto (Ours)61.77BERT last Ditto (Ours)59.07BERT first-last Ditto (Ours)64.77Methods that fix BERT parameters but require extra learningBERT-flow66.55BERT-whitening66.28BERT last manual and continuous prompt73.59BERT first-last TF-IDF (Ours)65.45Methods that update BERT parametersUnsup. BERT SimCSE76.25Sup. BERT SimCSE81.57Sup. SBERT first-last avg.84.94Sup. SBERT first-last Ditto (Ours)85.11", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Sentence embedding performance of Ditto and learning-free baselines on PLMs, measured by average Spearman's correlation on the test sets of 7 STS tasks.", "figure_data": "MethodBERT RoBERTa ELECTRAFirst-last avg.56.7056.5736.28Last manual prompt67.8561.0819.44First-last Ditto (Ours) 64.7761.9652.00", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Correlation of the impact matrix with the TF-IDF weights. Column STS avg. shows the average Spearman's correlation on the test sets of 7 STS tasks.", "figure_data": "Columns Pear. and Spear. show Pearson's correlationand Spearman's correlation between the mean values ofthe impact matrix 1 NN i=1 F ij and the TF-IDF weightsfor 1K sentences randomly sampled from the EnglishPUD treebank.MethodSTS avg. Pear. Spear.BERT56.7057.27 57.44SBERT84.9462.90 70.21ELECTRA36.2812.97 21.91", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The sentence embedding performance of BERT first-last Ditto using different attention heads and the correlation with the TF-IDF weights. Dev denotes Spearman's correlation on the STS-B development set. Test denotes the average Spearman's correlation on the test sets of 7 STS tasks. Pear. and Spear. denote Pearson's correlation and Spearman's correlation between the diagonal values of the attention matrix for the certain attention head and the TF-IDF weights on sentences in the STS-B task.", "figure_data": "MethodDevTest Pear. Spear.Ditto Head 1-1074.56 64.77 64.34 63.56Ditto Head 2-1273.13 65.00 47.30 44.17Ditto Head 11-11 70.59 62.46 47.64 44.68Ditto Head 1-769.54 60.65 65.98 64.30", "figure_id": "tab_5", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "The average cosine similarity of sentence representations for Ditto and learning-free baselines.", "figure_data": "Methodavg. Cosine SimilarityBERT static avg.0.843BERT static Ditto0.768BERT last avg.0.508BERT last Ditto0.458BERT first-last avg.0.566BERT first-last Ditto0.4035 ConclusionsWe propose a simple and learning-free Diagonal At-tention Pooling (Ditto) approach to address the biastowards uninformative words in BERT sentenceembeddings. Ditto weights words with model-based importance estimations and can be easilyapplied to various PLMs. Experiments show thatDitto alleviates the anisotropy problem and im-proves strong sentence embedding baselines.", "figure_id": "tab_7", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "The performance comparison of different sentence embedding methods on STS tasks (Spearman's correlation).", "figure_data": "MethodSTS12 STS13 STS14 STS15 STS16 STS-B SICK-R Avg.Learning-free methodsBERT static avg.42.3856.7450.6065.0862.3956.8258.1556.02BERT last avg.30.8759.8947.7360.2963.7347.2958.2252.57BERT first-last avg.39.7059.3849.6766.0366.1953.8762.0656.70BERT static remove biases avg. (Jiang et al., 2022)53.0966.4865.0969.8067.8561.6057.8063.10BERT last manual prompt (Jiang et al., 2022)60.9673.8362.1871.5468.6870.6067.1667.85BERT static Ditto (Ours)52.6162.7259.8870.4065.6063.3457.8561.77BERT last Ditto (Ours)43.5864.8453.2766.0665.7758.8861.1159.07BERT first-last Ditto (Ours)53.7767.9959.7873.7769.6666.7661.6464.77", "figure_id": "tab_8", "figure_label": "6", "figure_type": "table" } ]
Qian Chen; Wen Wang; Qinglin Zhang; Siqi Zheng; Chong Deng; Hai Yu; Jiaqing Liu; Yukun Ma; Chong Zhang
[ { "authors": "Eneko Agirre; Carmen Banea; Claire Cardie; Daniel M Cer; Mona T Diab; Aitor Gonzalez-Agirre; Weiwei Guo; Iñigo Lopez-Gazpio; Montse Maritxalar; Rada Mihalcea; German Rigau; Larraitz Uria; Janyce Wiebe", "journal": "The Association for Computer Linguistics", "ref_id": "b0", "title": "Semeval-2015 task 2: Semantic textual similarity, english, spanish and pilot on interpretability", "year": "2015" }, { "authors": "Eneko Agirre; Carmen Banea; Claire Cardie; Daniel M Cer; Mona T Diab; Aitor Gonzalez-Agirre; Weiwei Guo; Rada Mihalcea; German Rigau; Janyce Wiebe", "journal": "The Association for Computer Linguistics", "ref_id": "b1", "title": "Semeval-2014 task 10: Multilingual semantic textual similarity", "year": "2014" }, { "authors": "Eneko Agirre; Carmen Banea; Daniel M Cer; Mona T Diab; Aitor Gonzalez-Agirre; Rada Mihalcea; German Rigau; Janyce Wiebe", "journal": "The Association for Computer Linguistics", "ref_id": "b2", "title": "Semeval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation", "year": "2016" }, { "authors": "Eneko Agirre; Daniel M Cer; Mona T Diab; Aitor Gonzalez-Agirre", "journal": "The Association for Computer Linguistics", "ref_id": "b3", "title": "Semeval-2012 task 6: A pilot on semantic textual similarity", "year": "2012" }, { "authors": "Eneko Agirre; Daniel M Cer; Mona T Diab; Aitor Gonzalez-Agirre; Weiwei Guo", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "sem 2013 shared task: Semantic textual similarity", "year": "2013" }, { "authors": "R Samuel; Gabor Bowman; Christopher Angeli; Christopher D Potts; Manning", "journal": "The Association for Computational Linguistics", "ref_id": "b5", "title": "A large annotated corpus for learning natural language inference", "year": "2015" }, { "authors": "M Daniel; Mona T Cer; Eneko Diab; Iñigo Agirre; Lucia Lopez-Gazpio; Specia", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation", "year": "2017" }, { "authors": "Kevin Clark; Urvashi Khandelwal; Omer Levy; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "What does BERT look at? an analysis of bert's attention", "year": "2019" }, { "authors": "Kevin Clark; Minh-Thang Luong; Quoc V Le; Christopher D Manning", "journal": "", "ref_id": "b8", "title": "ELECTRA: pretraining text encoders as discriminators rather than generators", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Kawin Ethayarajh", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "How contextual are contextualized word representations? comparing the geometry of bert, elmo, and GPT-2 embeddings", "year": "2019" }, { "authors": "Tianyu Gao; Xingcheng Yao; Danqi Chen", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "Simcse: Simple contrastive learning of sentence embeddings", "year": "2021" }, { "authors": "Ting Jiang; Jian Jiao; Shaohan Huang; Zihan Zhang; Deqing Wang; Fuzhen Zhuang; Furu Wei; Haizhen Huang; Denvy Deng; Qi Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Promptbert: Improving BERT sentence embeddings with prompts", "year": "2022" }, { "authors": "Bohan Li; Hao Zhou; Junxian He; Mingxuan Wang; Yiming Yang; Lei Li", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "On the sentence embeddings from pre-trained language models", "year": "2020" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b14", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Marco Marelli; Stefano Menini; Marco Baroni; Luisa Bentivogli; Raffaella Bernardi; Roberto Zamparelli", "journal": "European Language Resources Association (ELRA", "ref_id": "b15", "title": "A SICK cure for the evaluation of compositional distributional semantic models", "year": "2014" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b16", "title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": " ", "journal": "", "ref_id": "b17", "title": "", "year": "" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Sentence-bert: Sentence embeddings using siamese bert-networks", "year": "2019" }, { "authors": "Karen Sparck; Jones ", "journal": "Journal of documentation", "ref_id": "b19", "title": "A statistical interpretation of term specificity and its application in retrieval", "year": "1972" }, { "authors": "Jianlin Su; Jiarun Cao; Weijie Liu; Yangyiwen Ou", "journal": "", "ref_id": "b20", "title": "Whitening sentence representations for better semantics and faster retrieval", "year": "2021" }, { "authors": "Adina Williams; Nikita Nangia; Samuel R Bowman", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "A broad-coverage challenge corpus for sentence understanding through inference", "year": "2018" }, { "authors": "Zhiyong Wu; Yun Chen; Ben Kao; Qun Liu", "journal": "Association for Computational Linguistics", "ref_id": "b22", "title": "Perturbed masking: Parameter-free probing for analyzing and interpreting BERT", "year": "2020" }, { "authors": "Martin Daniel Zeman; Milan Popel; Jan Straka; Joakim Hajic; Filip Nivre; Juhani Ginter; Luotolahti", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies", "year": "2017" }, { "authors": "Zexuan Zhong; Dan Friedman; Danqi Chen", "journal": "", "ref_id": "b24", "title": "Factual probing is [MASK]: learning vs. learning to recall", "year": "2020" }, { "authors": "Bert-Whitening ( Su", "journal": "", "ref_id": "b25", "title": "", "year": "2021" }, { "authors": " Jiang", "journal": "", "ref_id": "b26", "title": "BERT last manual and continuous prompt", "year": "2022" } ]
[ { "formula_coordinates": [ 3, 346.83, 593.66, 93.32, 15.78 ], "formula_id": "formula_0", "formula_text": "1 2 N i=1 A ii (h 0 i + h L i )" }, { "formula_coordinates": [ 7, 94.81, 763.57, 175.7, 10.63 ], "formula_id": "formula_1", "formula_text": "F ij = d(H(x\\{x i }) i , H(x\\{x i , x j }) i )." } ]
2023-09-19
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b12", "b16", "b57", "b3", "b8", "b31", "b32", "b67", "b68", "b73", "b74", "b4", "b24", "b26", "b63", "b8", "b4", "b31", "b9", "b28", "b29", "b4", "b24", "b26", "b63", "b64", "b75", "b9", "b21", "b6", "b9", "b21", "b28" ], "table_ref": [], "text": "Continuous advancements in Deepfake technologies [13,17,32,58,59] are resulting in the creation of remarkably realistic images and videos, exhibiting fewer noticeable tampering artifacts. Despite their applications in the film and entertainment industries, these Deepfake tools are also exploited for malicious purposes such as creating political propaganda or pornographic content. To address public concerns regarding misinformation, face manipulation detectors [4,9,19,[33][34][35]35,39,48,53,61,69,70,75,76] which aim to provide coarse-grained binary classification results (real or fake) at the image-level or video-level have geared extensive attentions. While the pixel-level localization of manipulated regions of Deepfake images, which has a pivotal role in analyzing and explaining the Deepfake detection results, receives inadequate attention.\nOne obstacle in the development of pixel-level face manipulation localization technology is the scarcity of publicly available datasets featuring pixel-level annotations. To cope with the problem, some recent works [12,25,26,28,45,56,65] proposed their algorithms to extract pixel-level annotations from existing face manipulation datasets (e.g, FaceForensics++ [51]). Despite impressive, it is difficult to compare their designed pixel-level face manipulation localization frameworks with each other due to the diverse pixel-level annotations they employ. Besides, the quality of their annotation is unsatisfactory. There are two prevalent approaches for acquiring pixel-level annotations, denoted as MG1 and MG2 in Figure 1. MG1 is used in some studies [9,25,33], which computes the pixel-wise difference between fake image and corresponding real image in RGB channels, converts it into grayscale, and divides by 255 to produce a map within the range of [0, 1]. Some other works [12,61] adopt MG2 that binarizes the output of MG1 using a pre-defined threshold to obtain a binary mask for manipulation regions. As shown in Figure 1, the annotations from MG1 are incomplete while those from MG2 contain authentic background regions. For example, the Neu-ralTextures (NT) [58] only manipulates local areas of expression (e.g., mouth, nose, etc.) as shown in the last row, but both two annotations contain errors. Such imprecise and inconsistent annotations greatly hinder the advancement of face manipulation localization.\nTo address this problem, we adopt a sequence of image processing operations to compensate for the deficiency of pixel-level manipulation mask annotations in the FF++ [51] dataset. As illustrated in the last column of Figure 1, the proposed annotation strategy yields a more rational manipulated region mask that conforms to the technical characteristics of different face manipulation technologies (e.g, NT [58]). Leveraging the FF++ benchmark dataset with the extracted pixel-level annotations, we further establish a comprehensive benchmark for face manipulation localization. Specifically, we reproduce several existing forgery localization-related methods using their publicly available source codes in our benchmark, including: 1) Face manipulation detection methods with segmentation loss [7,61]. 2) Face manipulation localization methods [12,45]. 3) Image forgery localization methods [10,22,30,31]. While, extensive experimental results of our benchmark suggest that existing forgery localization methods are far from satisfactory, which motivates us to develop a more effective framework for face manipulation localization.\nAmong them, earlier work [12] attempts to take advantage of the attention maps to generate forged regions rather than specialized localization branches, which misses rich global contextual information. Later approaches [7, 25,26,28,45,56,61,65] widely employ the pipeline of semantic segmentation since the simple decoder network and segmentation loss can naturally support the face manipulation localization task, which learn the discriminative global context features and obtain more precise localization results. Nevertheless, directly applying the segmentation framework to the forgery localization task may not be optimal, as the semantic segmentation models focus on the semantic objective information while the face manipulation localization model needs to predict tampering locations exclusively [3,66,77]. Some studies [10,23] also show that the deep semantic objective information would impact the learning of tampered features.\nIn order to inhibit the semantic object content in the deeper localization branch, we propose a novel Multi-Spectral Class Center Network (MSCCNet) for face manipulation detection and localization, which exploits classlevel representations of different frequency component features to enhance the tamper localization capability. The MSCCNet consists of two key components: Multi-level Features Aggregation (MFA) and Multi-Spectral Class Center (MSCC) modules. The proposed MFA module effectively aggregates the low-level texture information and forgery artifacts, as these cues are predominantly present in shallow features [37,39]. The MSCC module is designed to extract the semantic-agnostic forgery features by suppressing the semantic objective representation capability of the network. Specifically, we first decompose the semantic features using a frequency transformation and calculate pixel-class relations within each spectral feature. Then, the weighted attention of different frequency bands is acquired by computing similarity maps between different spectral class centers and the corresponding partial semantic features. Finally, we employ weighted attention to alleviate the impact of semantic objective information and refine the original global context. Meanwhile, in the image forgery localization community, some researchers [10,22,23,30] exploit the noise or frequency information to suppress image semantic content, but they normally extract noise or frequency maps on input RGB images. In this way, the deeper-layer features may remain semantic-aware and consequently cannot preserve semantic-agnostic capability at the localization decoder network. In contrast, our MSCC module first attempts to mitigate this phenomenon in the localization decoder network, and achieves satisfactory results.\nIn a nutshell, our main contributions could be summa-rized as:\n• To facilitate the localization tasks, we first reconstruct the FaceForensics++ (FF++) datasets by introducing more rational pixel-level annotations. Then we conduct a comprehensive benchmark for face manipulation localization based on the annotated FF++ datasets.\n• A novel Multi-spectral Class Center Network (MSC-CNet) is designed for face manipulation localization, which consists of a Multi-level Features Aggregation (MFA) module and a Multi-spectral Class Center (MSCC) module for learning more generalizable and semantic-agnostic features.\n• Extensive experiments on pixel-level FF++ datasets show that MSCCNet compares favorably against the benchmark methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Face Manipulation Detection and Localization", "publication_ref": [ "b10", "b17", "b44", "b0", "b52", "b53", "b55", "b58", "b60", "b73", "b77", "b78", "b8", "b31", "b40", "b68", "b74", "b4", "b24", "b26", "b63", "b4", "b24", "b26", "b4", "b75" ], "table_ref": [], "text": "Early face manipulation detection methods [6,11,18,19,46,47] utilize intrinsic statistics or hand-crafted features to model spatial manipulation patterns. Recently, data-derived detection models utilize spatial artifacts [1,54,55,57,60,62,75,79,80] to learn discriminative features and achieve remarkable detection performance. However, these methods ignore the importance of the manipulated regions for face manipulation detection. Some other studies [9,33,42,53,61,70,76] explore the spatially tampered regions as additional supervised signals with segmentation loss to improve the performance of real-fake binary classification, while they do not make prediction and evaluation for manipulated regions.\nRecently, a few methods [12,25,26,28,45,56,65] have superficially examined the positioning problem and there are still many deficiencies. For example, FFD [12] directly applies the low-resolution attention map of the network to detect the tampered regions in face manipulation images, but it lacks global context information. To address this issue, some face manipulation localization methods [25,26,28,45,56] employ a semantic segmentation pipeline to segment the fake regions. Specifically, Multitask [45] designs an additional segmentation branch for localizing manipulated regions. The prior arts [25,56] present a localization method for GAN-synthesized fake images, and yet they cannot be accommodated to face manipulation data. But semantic segmentation networks are adept at learning semantic dependent objects, in other words, they cannot adapt well to tampering target localization [3,77]. Because the manipulated regions (or objects) are semanticagnostic features, compressing image content information is the key to developing face manipulation locators within the image semantic segmentation network. In this paper, we proposed a multi-spectral class center module to enhance the forgery region localization ability of the localization branch and suppress the semantic objective information in images." }, { "figure_ref": [], "heading": "Image Forgery Detection and Localization", "publication_ref": [ "b21", "b29", "b75", "b21", "b64", "b9", "b28", "b29", "b12", "b16", "b57" ], "table_ref": [], "text": "Image forgery technologies (e.g., splicing, copy-move, removal) have been around for a long time in contrast to the recent rise of face manipulation methods. Image forensics tasks also aim to detect images as spoof or bona fide and locate the tampering regions, but most image forgery localization methods [3,23,31,77] only focus on fake image datasets rather than real-fake mixed datasets. One type of localization method is to segment the entire input image [23,66], and the other type is to perform binary classification repeatedly using a sliding window [50]. Our MSC-CNet framework takes the cropped facial areas as the input, which reduces the computational expenses compared to the full-image input and sliding window approaches. Image forgery localization tasks also appear to be a simplified case of image semantic segmentation and thus they likewise confront the perturbation of semantic objective content. In addition, image forgery localization methods [10,30,31] have only been studied for traditional image tampering techniques and cannot be tailored to the latest face manipulation algorithms. In this paper, we mainly focus on localizing the manipulated regions created by advanced face forgery techniques [13,17,32,58,59]." }, { "figure_ref": [], "heading": "Noise and Frequency Forgery Clues", "publication_ref": [ "b9", "b9", "b28", "b9", "b28", "b8", "b15", "b19", "b6", "b39", "b39" ], "table_ref": [], "text": "To learn semantic-agnostic features, many image forgery localization approaches [10] exploit noise or frequency artifacts to inhibit image content. MVSS [10] adopts the Ba-yarConv [5] to extract the noise-view patterns on the input RGB image and then build a noise-sensitive branch. CAT-Net [30] focuses on JPEG compression artifacts and uses a segmentation model based on DCT coefficients. HiFi-Net [22] extracts features of the given input RGB image via the color and frequency modules, and this frequency module consists of a Laplacian of Gaussian (LoG). The deep features of prior methods [10,22,30] are a risk that they may retain semantic-related information. In the face manipulation detection community, most methods [9,16,21,37,39,48] also extract frequency-or noise-related artifacts from the input RGB image. Other studies [41,43,44] learn frequency forgery traces to detect manipulated face images, range from shallow layer [41] or multi-layers [43,44]. However, they are only focusing on face manipulation detection tasks rather than localization tasks. In a nutshell, none of the prior arts suppress semantic content features in the deeper localization branch. Instead, we propose a multi-spectral class center module in the decoder to learn semantic-agnostic fea-tures." }, { "figure_ref": [], "heading": "Semantic Segmentation", "publication_ref": [ "b7", "b25", "b36", "b50", "b65", "b69", "b72", "b9", "b21", "b28", "b29", "b64", "b4", "b24", "b7", "b50", "b7", "b50", "b65", "b72", "b25", "b36", "b69" ], "table_ref": [], "text": "Semantic segmentation tasks aim to generate pixel-wise semantic object predictions (segmentation masks) for a given image [8,27,38,52,67,71,72,74]. The face manipulation localization and semantic segmentation are very similar, differing only in the object type and class (i.e., manipulated and authentic). Hence, the existing image forgery localization [10,22,23,30,31,66] and current face manipulation localization methods [25,26,45,56] employ a semantic segmentation pipeline [8,52] to segment the fake regions. However, early methods [8,52,67,74] still do not sufficiently explore global contextual information. Subsequent semantic segmentation researches [27,38,71,72] learn more discriminative global contextual features, but since they are not specifically designed for the localization task, there remain issues with interference from semantic objective information. In this paper, we proposed MSCCNet to suppress the semantic objective information in global contextual features." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "As demonstrated in Figure 2 (a), our proposed face manipulation forensics architecture consists of a backbone network, a classification branch, and a localization branch, where the backbone network is utilized to project each input image I ∈ R 3×H×W into multi-scale feature space F = {F 1 , F 2 , F 3 }, where H × W is the shape of the input image. After that, a multi-level forgery patterns aggregation scheme is designed to aggregate F and output F A ∈ R C×h×w , where C denotes for the number of feature channels.\nNext, to exploit the global contextual representation of tampered regions over different frequency bands from aggregated F A , we propose the multi-spectral class center (MSCC) module as M , and then we have:\nF M = M (F A ),(1)\nwhere F M ∈ R C×h×w is the enhanced features from the perspective of centers of different spectral classes. Finally, F M is leveraged to predict the label of each pixel in the input image:\nP 1 = U psample 8× (C 1 (F M )),(2)\nwhere C 1 is a pixel-level classification head and P 1 ∈ R k×H×W indicates the predicted pixel-level class probability distribution. Moreover, we apply the last layer output features F 3 of the backbone network as image-level classification head C 2 input, we have:\nP 2 = C 2 (F 3 ),(3)\nin which, P 2 ∈ R k represents the image-level prediction probability distribution. Here, k is the number of classes and k = 2." }, { "figure_ref": [ "fig_2", "fig_0" ], "heading": "Multi-level Features Aggregation", "publication_ref": [ "b6" ], "table_ref": [], "text": "The forgery artifacts (e.g., blending boundary, checkboard, blur artifacts, etc.) and local structure are low-level texture features, which are mostly exiting shallow layers of the network [37,39]. However, previous face manipulation localization methods [12,45] primarily focused on deep semantic information and disregarded low-level texture features and location information, which would result in coarse and inaccurate output and disrupt some crucial low-level details (see Figure 4). To leverage the forgery-related lowlevel texture features, we propose the Multi-level Features Aggregation (MFA) scheme, which exploits texture-related information from multi-level and enhances the texture details of high-level semantic features.\nAs shown in Figure 2 (b), we first gain multi-level features F 1 , F 2 , F 3 from the backbone network and then employ three different aligned layers (i.e., N 1 , N 2 , and N 3 ) for each of them:\nF ′ 1 = N 1 (F 1 ), F ′ 2 = N 2 (F 2 ), F ′ 3 = N 3 (F 3 ),(4)\nwhere\nF ′ 1 , F ′ 2 , F ′ 3 ∈ R C×h×w .\nEach aligned layer consists of a Conv and a Downsample, which aligns the different level features to assure the effectiveness of the lowerlevel texture information. Then, we aggregate the aligned multi-level features\nF ′ 1 , F ′ 2 , F ′\n3 by channel-wise concatenation operation Cat as follows:\nF A = Conv(Cat([F ′ 1 , F ′ 2 , F ′ 3 ])).\n(\n)5\nwhere F A ∈ R C×h×w and the Conv layer to make the channel size of 3C to C." }, { "figure_ref": [ "fig_0" ], "heading": "Multi-spectral Class Center", "publication_ref": [ "b9", "b4", "b24", "b63", "b7", "b25", "b36", "b50", "b65", "b69", "b72", "b75", "b9", "b21", "b8", "b6" ], "table_ref": [], "text": "Previous face or image manipulation localization approaches [10,25,26,45,56,65] apply semantic segmentation pipeline for localization decoder network since the discriminative contextual features play a crucial role in predicting meaningful object regions. However, these off-the-shelf semantic segmentation networks [8,27,38,52,67,71,72,74] are not suitable for face manipulation localization tasks [3,77]. As face manipulation localization models solely require the localization of tampered regions rather than all meaningful regions, further analysis indicates that semantic objective features interfere with the forgery cue [10,23]. Therefore, the primary concern is how to develop and train a face manipulation localization model that can acquire semantic-agnostic features with sensitivity towards manipulations. The manipulated elements have discrepancies in the frequency domain compared to the authentic part, and extracting frequency information in the contextual features helps to suppress the semantic objective features [9,37,39,48]. Inspired by these motivations, we propose a novel Multi-spectral Class Center (MSCC) module to learn semantic-agnostic forgery features from the differentfrequency bands perspective, as shown in Figure 2 (c)." }, { "figure_ref": [], "heading": "Discrete Cosine Transform Filters", "publication_ref": [], "table_ref": [], "text": "Following [2,49], the 2D Discrete Cosine Transform (DCT) filters basis functions as follows:\nD u,v = H-1 i=0 W -1 j=0 d i,j cos( πu U (i + 1 2 )) cos( πv V (j + 1 2 )) s.t. u ∈ {0, 1, • • • , U -1}, v ∈ {0, 1, • • • , V -1},(6)\nwhere d ∈ R H×W is a two-dimensional data and D u,v ∈ R H×W is the 2D DCT frequency spectrum with the transformation basis of (u, v). For simplicity, we define the above DCT operation as\nD n (•), in which n ∈ {0, 1, • • • , N -1}\nand N is the number of frequency transformation basis of (u, v). In this paper, we first split the features F A ∈ R C×h×w into N parts along the channel dimension, where each channel of the n-th part feature\nF n A ∈ R c×h×w is defined f n i ∈ R h×w , i ∈ {0, 1, • • • , c-1} and c = C N . Then, every f n i is transformed through D n (•)\nwith n-th transformation basis (u, v), as follows:\nF n A = Cat([D n (f n 0 ), D n (f n 1 ), • • • , D n (f n c-1 )]),(7)\nwhere F n A ∈ R c×h×w is the frequency features for specific spectral component. Similarly, we can obtain the frequency information of the F A for all spectral components and concatenate them together channel-wise:\nF A = Cat([ F 0 A , F 1 A , • • • , F N -1 A ]),(8)\nin which, F A ∈ R N ×c×h×w are multi-spectral feature maps with N different frequency bands (i.e., N transformation basis)." }, { "figure_ref": [], "heading": "Multi-spectral Class Center", "publication_ref": [], "table_ref": [], "text": "After getting the multi-spectral feature maps F A ∈ R N ×c×h×w , we calculate the coarse segmentation predictions of different frequency components through a pixellevel classification head C 3 , then we have:\nP A = C 3 ( F A ),(9)\nwhere P A ∈ R N ×k×h×w indicates the probability of a pixel belonging to a specific class in N different frequency bands. After that, we perform a matrix multiplication ⊗ between the P A and the transpose of F A to calculate the multi-spectral class centers F class ∈ R N ×k×c as follows:\nF class = P A ⊗ F ⊤ A .(10)\nMulti-spectral class centers are expected to learn a global representation of each class from a different frequency perspective. Since the class centers of the different spectra are calculated independently, there are missing interactions between them. To address this, we first treat the multi-spectral class centers as distinct nodes, then message across each node, and finally update the features for each node. The graph node modeling process can be formulated as follows:\nF ′ class = G (F class ),(11)\nwhere G is a GCN layer that enhances the relationships between different spectral class centers. " }, { "figure_ref": [], "heading": "Feature Refinement", "publication_ref": [ "b9", "b28", "b29" ], "table_ref": [], "text": "We employ the multi-spectral class centers F ′ class to refine the aggregated multi-level features F A through an attentional calculation mechanism. We first compute a multispectral weight matrix to represent pixel similarity maps between each class center and the corresponding partial feature in F A , as follows:\nW = Sof tmax(F A ⊗ (F ′ class ) ⊤ ),(12)\nwhere W ∈ R N ×hw×k and F A is split by channel-wise and reshaped as N × hw × c. Then, the weighted features F ′ A ∈ R N ×hw×c are calculated as follows:\nF ′ A = W ⊗ F ′ class .(13)\nFinally, the multi-spectral class centers refined features F M ∈ R C×h×w is obtained by fusing the original features F A and weighted features F ′ A via a Conv layer, we have:\nF M = Conv(Cat([F A , F ′ A ])).(14)\nNote that F ′ A is recovered and permuted to have a size of C × h × w and the Conv layer to make the channel size of 2C to C.\nOur MSCC module represents pixel-class relationships over different spectra features. The decomposed class centers are employed to calculate the attention of different frequency bands for suppressing semantic contextual information. This is because the original semanticaware features are frequency aliasing states, with particularly low-frequency information dominating and highfrequency forgery cues easily discounted [73]. Hence, our MSCC module enhances the capacity of the model to learn semantic-agnostic features that are sensitive to face manipulation traces. In this way, the proposed MSCCNet effectively mitigates the disruption of deep semantic features in the localization decoder network, surpassing previous methods [7,10,22,30,31,45,61]." }, { "figure_ref": [], "heading": "Objective Function", "publication_ref": [], "table_ref": [], "text": "We first apply two cross-entropy loss functions for the predictions P 1 and P 2 of the MSCCNet, i.e., a pixel-level loss L seg for localizing the manipulated regions and an image-level loss L cls for classifying the authentic or manipulated face. Then, for coarse segmentation predictions P A ∈ R N ×k×h×w in Eq.( 9), we employ a 1 × 1 Conv to fuse the multi-spectral results as follow:\nP ′ A = Conv(P A ),(15)\nwhere P ′ A ∈ R k×h×w is global representations. Similarly, the cross-entropy loss function is employed to calculate its loss L mscc . Finally, the multi-task loss function L is used to jointly optimize the model parameters, we have:\nL = L cls + L seg + L mscc . (16\n)" }, { "figure_ref": [], "heading": "Benchmark Datasets", "publication_ref": [ "b8", "b31", "b74", "b27", "b5", "b67" ], "table_ref": [], "text": "To facilitate the study of face manipulation localization, we define this task as the recognition of pixel-level manipulated regions from a given face image. Since there is no single-face image dataset annotated with manipulation at pixel-level, we first construct a pixel-level singleface manipulation dataset by preprocessing and annotating the existing FF++ [51] dataset. The FF++ [51] dataset is the most widely used dataset and it provides the authentic source image corresponding to the forgery image, which establishes the theoretical support for pixel-level annotation [9,12,33,61,76]. Most previous single face forgery datasets [20,29,36,69] cannot have the advantages of FF++ [51]." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Pixel-level Annotation for FaceForensics++", "publication_ref": [ "b12", "b57", "b16", "b8", "b31", "b74", "b61", "b74", "b31" ], "table_ref": [ "tab_1" ], "text": "The FF++ [51] is a challenging face forgery video dataset and consists of 1,000 original (youtube) videos and 5,000 corresponding fake videos that are generated through five typical manipulation methods, including Deepfakes (DF) [13], Face2Face (FF) [59], FaceSwap (FS) [17], FaceShifter (FSh) [32], and NeuralTextures (NT) [58]. Meanwhile, it is adopted with three quality levels, i.e., Raw Quality (C0), High Quality (C23), and Low Quality (C40).\nIn this paper, we further preprocess the FF++ [51] with annotations to facilitate forged region localization tasks. As shown in Figure 3, we apply the real-fake image pairs of FF++ [51] to generate the pixel-level annotation, because forgery images and their corresponding authentic images have pixel-level differences in the manipulated regions and are identical in the untampered regions [9,12,33,61,76]. To be specific, for the real face image and the fake face image of the RGB image pairs, we convert them into gray-scale (i.e., I real and I f ake ) and compute the structural dissimilarity (SSIM) [63] between them to produce an SSIM map S in the range of [0, 1], following [76]. To accurately portray the pixel-level discrepancy S on the forged images, we first employ the S to compute the coarse manipulated regions factor f , following [33]. Second, f and the I f ake are multiplied to obtain I f ake , which is then binarized to produce M . But the M still is scattered and disjointed for practical manipulation region labels, as shown in Figure 3. Therefore, we dilate the M to fill the missing tampered area and then generate a more comprehensive tamper region mask M by convex wrapping twice. Finally, to eliminate the deviation of the convex hull M edges, we apply an erosion operation to them, and then the binary manipulation mask M gt is generated by Gaussian blurring followed by the threshold of 0. The above process produces the ground truth masks M gt for the fake images, and for the corresponding real images, we apply zero-maps as its M gt .\nFor each video of the FF++ [51], we interval select up to 20 frames to form the single-face manipulation image datasets and obtain the forged region labels that employ the proposed annotation procedure. Then, we divide the training, validation, and testing sets, following [51]. Finally, the detailed statistics of the pixel-level FF++ [51] as shown in Table 1." }, { "figure_ref": [], "heading": "DEFACTO", "publication_ref": [ "b38" ], "table_ref": [], "text": "The DEFACTO [40] contains a face-swapping dataset that gathers public photo portraits on the IMDB website and selects 200 front-facing actors with a relatively neutral expression as a base to generate 3,8000 in face-swapping forgery images by an unknown method. It should be noted that this dataset retains the mask annotations of the forgery regions during the creation process, which provides us with accurate labeling, so we treat it as an unseen test set in this paper." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Model Architecture", "publication_ref": [ "b22", "b22", "b22" ], "table_ref": [], "text": "As previously defined, the backbone of our MSCCNet is the dilated ResNet-50 network [24], the classification branch is a simple fully connected layer, and the localization branch consists of the proposed MFA and MSCC modules. Specifically, the ResNet-50 [24] backbone is initialized by the weights pre-trained on ImageNet datasets, while the remaining layers and modules are randomly initialized. The output stride of the dilated ResNet-50 [24] is set to 8. , so h = H 8 and w = W 8 in the MSCCNet. The remaining benchmark models follow the original papers unless stated otherwise." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [], "table_ref": [], "text": "We train the proposed MSCCNet with SGD setting the initial learning rate to 0.009, the momentum to 0.9, and the weight decay to 5e -4. The learning rate is decayed according to the \"poly\" learning rate policy with factor (1 -iter total iter ) 0.9 . The size of the input images is 512 × 512 and the batch size is 64. We apply random horizontal flipping as the only data augmentation method for the training phase. Synchronized batch normalization implemented by Pytorch 1.8.1 is enabled during multi-GPU training. The training protocols of the remaining benchmark methods follow the original papers unless stated otherwise." }, { "figure_ref": [], "heading": "Benchmark Methods", "publication_ref": [ "b9", "b28", "b29", "b29" ], "table_ref": [], "text": "We conduct a competitive benchmark for face manipulation localization, in which we train and evaluate existing forgery localization-related methods across various scenarios, including quantitative and qualitative evaluations. For the purpose of a just and reproducible comparison, we broadly select methods associated with the task of localizing tampered faces for which source code is publicly available. 1) Face manipulation detection methods with segmentation loss [7,61]. 2) Face manipulation localization methods [12,45]. 3) Image forgery localization methods [10,22,30,31]. These methods are described below:\nHPFCN [31]: It presents a high-pass filtered fully convolutional network to locate the regions manipulated by deep inpainting. Specifically, a high-pass filter is designed to extract inpainting traces as image residuals. Then the ResNet-based fully convolutional network learns discriminative features from image residuals. However, image-level results cannot be obtained due to its lack of a classification branch.\nMulti-task [45]: The work designs a multi-task learning network that includes an encoder and a Y-shaped decoder. The encoded features are used for binary classification. The output of one branch of the decoder is used for segmentation while that of the other is used for reconstruction.\nFFD [12]: The method proposes to utilize an attention mechanism to process and improve the feature maps, which not only classifies the genuine or fake faces but also highlights the informative regions for manipulation localization." }, { "figure_ref": [], "heading": "M2TR [61]:", "publication_ref": [], "table_ref": [], "text": "The authors operate multi-scale patches to detect local inconsistencies and design a cross-modality block to fuse multi-modal forgery artifacts in the frequency and spatial domains. Additionally, the model employs extra segmentation loss to improve detection (classification) performance but does not provide pixel-level results." }, { "figure_ref": [], "heading": "SLADD [7]:", "publication_ref": [ "b9", "b28" ], "table_ref": [], "text": "The work proposes a large forgery augmentation space to enrich and strengthen types of forgeries by using the adversarial training strategy to dynamically synthesize the most challenging forgeries. The work also applies a forgery region prediction head to generate a forgery region mask, but it only aims to improve detection (classification) performance.\nMVSS [10]: The method uses multi-view feature learning and multi-scale supervision to localize image manipulation regions, which exploit noise distribution information to learn semantic-agnostic features and apply boundary artifacts to address authentic images.\nCATNet [30]: It focuses on JPEG compression artifacts left during image acquisition and editing, so it proposes a Compression Artifact Tracing Network that designs a convolutional neural network to learn the distribution of discrete cosine transform (DCT) coefficients. However, the DCT coefficients of image compression are not suitable for addressing video compression of FF++ [51] (i.e., H.264)." }, { "figure_ref": [], "heading": "HiFi-Net [22]:", "publication_ref": [ "b29", "b9" ], "table_ref": [], "text": "The authors leverage color and frequency blocks to exploit image generation artifacts that can exist in both RGB and frequency domain, then they propose a multi-branch features extractor that learns feature maps of different resolutions for the image forgery detection and localization.\nAmong them, M2TR [61] and SLADD [7] are only designed for Deepfake detection tasks, but we additionally compute their pixel-level results in the training and testing phases. In the case of HPFCN [31] and MVSS [10], the output from the backbone network is utilized as input for an additional classification (detection) branch, in accordance with the network structure proposed by MSCCNet." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b9", "b28", "b29", "b64", "b7", "b25", "b65", "b72" ], "table_ref": [], "text": "The Accuracy (ACC) and Area Under the Receiver Operating Characteristic Curve (AUC) are reported for face manipulation detection comparison metrics, following [7,12,45,61].\nFor the evaluation of localization results, we employ the pixel-level F1-score and mIoU (mean of class-wise intersection over union), following image forgery localization tasks [10,22,30,31,66] and semantic segmentation tasks [8,27,67,74]. The higher value indicates that the performance is better." }, { "figure_ref": [], "heading": "Benchmark for Pixel-level FF++", "publication_ref": [ "b38" ], "table_ref": [], "text": "To completely evaluate our MSCCNet, we adopt three evaluation protocols: 1) Intra-dataset: We adopt the High Quality (C23) and Low Quality (C40) of the pixel-level FF++ [51] for intra-test evaluation. 2) Unseen datasets (cross-dataset): We train the proposed method on FF++ [51] C40 dataset and then test it on unseen face-swapping datasets of DEFACTO [40]. 3) Unseen manipulations (cross-manipulation): We perform experiments on the C40 set of the pixel-level FF++ [51] dataset through a leave-oneout strategy. Specifically, there are five manipulation types of fake face images in pixel-level FF++ [51], one type is used as a test set while the remaining four types form the training set." }, { "figure_ref": [], "heading": "Intra-dataset Evaluation", "publication_ref": [ "b9", "b28", "b29", "b29", "b9", "b28", "b29", "b38" ], "table_ref": [ "tab_2", "tab_2", "tab_2" ], "text": "We first investigate the localization performance of benchmark approaches on the C40 and C23 sets [51]. This task is more practical and challenging, yet is rarely explored in the previous literature. As shown in Table 2, the FFD [12] model exhibits inadequate localization results due to its utilization of low-resolution attention maps as prediction masks. Furthermore, it lacks the ability to incorporate global contextual representation in its localization branch, rendering it unsuitable for forgery localization tasks. M2TR [61] and SLADD [7] apply the semantic segmentation pipeline to supervise the manipulated regions but their main objective is to enhance detection performance rather than localization, consequently leading to unsatisfactory localization performance. Another crucial factor is the disregard for the negative impact of semantic objective information in these methods [7,12,45,61]. In the image forgery localization community, while some approaches [10,22,30,31] address this drawback and achieve notable performance improvements from a noise or frequency perspective, they still struggle to effectively suppress semantic objective information in the deep features of the localization branch. For example, HPFCN [31] employs a filter on the input RGB image and only achieves a 60.53 F1-score. MVSS [10], CAT-Net [30], and HiFi-Net [22] fuse the features of the RGB image with noise-or frequency-view patterns, but they also extract them on the inputs. Hence, their localization performance on face manipulation datasets is poorer than our MSCCNet, especially on the FF++ C40 dataset [51]. This is inherently caused by the diminished discrepancy between tampered and real areas in low-quality forged images, leading to a reduction in distinctive semantic objective features and consequent localization failures. In comparison to alternative models, our MSCCNet model exhibits superior performance, especially on the C40 dataset. This outcome suggests that the proposed MFA and MSCC modules enhance global contextual representations that are semantic-agnostic features while enabling the suppression of objective semantic-related information.\nWe next analyze the image-level classification performance of the face forgery localization approaches on the FF++ datasets [51]. Face manipulation detection methodologies [7, 61] have already extensively studied classification tasks, so they achieve remarkable results. As shown in Table 2, these classification results of C40 sets show that FFD [12] and Multi-task [45] are not suitable for lowquality datasets. Our findings from Table 2 illustrate that preceding image forgery localization methods [22,31] have yielded inadequate classification outcomes on the C40 and C23 datasets. Our MSCCNet outperforms all benchmark methods in terms of ACC and AUC on the C40 dataset. It is worth noting that the proposed MFA and MSCC modules are specifically designed to enhance the localization Table 3. Generalization to unseen datasets. The model is trained on the training set of pixel-level FF++ C40 [51] datasets while tested on the face-swapping datasets of DEFACTO [40]. The ⋆ indicates an outlier as the test data is extremely unbalanced in terms of real and fake samples." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "Image " }, { "figure_ref": [], "heading": "Unseen Datasets Evaluation", "publication_ref": [ "b38", "b38", "b9", "b28" ], "table_ref": [], "text": "The unseen datasets are created by anonymous forgery methodology based on unknown source data. As shown in Table 3, we conduct cross-dataset experiments to evaluate the generalization capacity of the face manipulation localization models on unseen DEFACTO [40] datasets. Due to the highly imbalanced DEFACTO [40] dataset with a ratio of 200 real samples to 38, 000 fake samples, certain methods [7, 10, 45] have encountered outliers in terms of imagelevel ACC. Specifically, these methods tend to predict either all real (i.e., Muilt-task [45]) or all fake (i.e., SLADD [7] and MVSS [10]) due to the extreme class imbalance. The AUC metric is calculated without considering the specific classes within the dataset. Therefore, our method demonstrates significantly better performance in terms of imagelevel AUC, providing strong evidence of its superiority.\nRegarding the localization results for face manipulation, CAT-Net [30] and HiFi-Net [22] accomplish significant performance by learning semantic-agnostic features. However, their localization branch networks do not fully excel in this aspect. Exiting forgery localization benchmark methods have not fully taken into account the semantic-agnostic features of the localization branch network. In contrast, the proposed MSCCNet effectively inhibits the image semantic content of deeper features by utilizing multi-spectral class centers, thereby achieving this target. From Table 3, our MSCCNet significantly outperforms all the competitors, which suggests that semantic-agnostic forgery cues offer a significant contribution to generalization." }, { "figure_ref": [], "heading": "Unseen Manipulation Evaluation", "publication_ref": [ "b12", "b57", "b16", "b12", "b12", "b57", "b16" ], "table_ref": [ "tab_4", "tab_4" ], "text": "In order to assess the cross-manipulation generalization capabilities of different face manipulation localization mod-els, we conduct the unseen manipulation evaluation experiments in Table 4. These results demonstrate that our MSC-CNet achieves exceptional localization generalization performance (65.25% F1 score and 53.68% mIoU) to novel forgeries, surpassing most approaches. Despite the various manipulation methods employed in the five types of manipulations (Deepfakes [13], Face2Face [59], FaceSwap [17], FaceShifter [32], and NeuralTextures [58]) within the pixellevel FF++ dataset, each of which focuses on different tasks, the proposed MSCCNet succeeded in learning a generalized discriminative feature on four of the manipulations and generalized to the remaining one. Different types of forgeries exhibit varying levels of difficulty, with Deepfakes [13] generally being easier to detect and localize compared to Neu-ralTextures [58] forgeries, which are often more challenging. Furthermore, it is worth noting that the majority of face manipulation detection methods tend to have better performance in detection rather than localization, suggesting that they are primarily designed for detection tasks. Our MSC-CNet also offers comparable generalizable detection results with other benchmark methods in Table 4. . DF [13], FF [59], FSh [32], FS [17] " }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Comparisons", "publication_ref": [ "b38" ], "table_ref": [], "text": "After training, our model can generate high-quality mask predictions that depict tampering locations on the test set.\nHere, we provide some qualitative samples in The superior performance of our MSCCNet is evident from its ability to identify tampered regions in distinct types of forgeries, as well as real facial images. This capability highlights the strength of our method in effectively modeling semantic-agnostic features.\nIn Figure 5, we present the visualization of the unseen DEFACTO [40] datasets. The results indicate that HiFi-Net [22] demonstrates superior generalization performance compared to other benchmark methods, although it falls short in capturing fine edge details. However, thanks to the integration of the MFA module and MSCC module in our method, we achieve more accurate predictions of details and demonstrate enhanced generalization capabilities." }, { "figure_ref": [], "heading": "Extend Experiment", "publication_ref": [ "b9", "b14", "b13", "b9", "b9" ], "table_ref": [ "tab_5" ], "text": "To further validate the effectiveness of our approach, we conducted experiments on the image forgery datasets following the experiment setting of the MVSS [10]. In general, the training dataset consists of CASIAv2 [15], which contains 7,491 real samples and 5,063 fake samples, including both copy-move and splitting image editing types. There are two unseen testing datasets used for evaluation. The first dataset, CASIAv1 [14], comprises 800 images and 920 fake images, including both copy-move and splitting image editing types. The second dataset, COVER [64], consists of 100 samples and 100 copy-move manipulated samples.\nAs presented in Table 5, the localization and detection results for other comparisons are obtained from the original MVSS paper [10]. Based on the results, it is evident that our MSCCNet outperforms other methods in terms of pixel-level F1-score. Additionally, it achieves comparable image-level performance with the best-performing MVSS method [10]. These indicate that our method learns generalizable forgery features for transferability across different manipulation datasets. " }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We analyze different modules of proposed MSCCNet on the FF++ [51] C40 dataset and adopt the intra-dataset evaluation protocol." }, { "figure_ref": [], "heading": "Analysis on MSCCNet Architecture", "publication_ref": [], "table_ref": [ "tab_6", "tab_6" ], "text": "We set the baseline (Base.) model by removing the MFA and MSCC modules, and the remaining convolutional blocks. As summarized in Table 6, applying the MFA module could bring 0.27% mIoU improvements, which demonstrates that low-level local textures are helpful for manipulated region localization. MSCC module is the key component for modeling semantic-agnostic features, it achieves 75.76% in terms of mIoU. The multi-spectral features of the coarse segmentation supervision mechanism enable the assessment of the probability of pixel attribution to its specific class. These features subsequently drive the MSCC module's ability to approximate a robust class center. From the last line in Table 6, we can observe that L mscc improves the localization performance from 75.76% to 77.22%. Our results show that the combination of semantic-agnostic features and low-level artifacts improves face manipulation localization. Moreover, the proposed MSCC module offers a viable solution to suppress semantic-related information through a multi-frequency perspective." }, { "figure_ref": [], "heading": "Influence of GCN", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "The GCN layer in our MSCC module improves the consistency of multi-spectral class-level representations by enhancing interaction between class centers across various frequency bands. As can be seen in Table 7, if the GCN layer is removed, the localization performance drops from 77.22% to 76.62% mIoU. It helps with multi-frequency attention map calculation in feature refinement operations." }, { "figure_ref": [], "heading": "Influence of DCT Filters", "publication_ref": [ "b8", "b6" ], "table_ref": [ "tab_7" ], "text": "In Sec. 3.3, the DCT filters decompose semantic context features to different frequency bands, which relieves the aliasing among low-frequency and high-frequency components [73]. Given that forgery traces are more prominent in high-frequency rather than low-frequency compo- nents [9,37,39,48,61], the multi-spectral class centers have the potential to model frequency-dependent forgery traces, particularly in high-frequency regions. To show the effectiveness, we remove the DCT filters of the MSCC module, the performance drops to 75.92%. In comparison, applying DCT filters brings 1.3% mIoU improvements, as indicated in Table 7." }, { "figure_ref": [], "heading": "Influence of Fusion Type", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "There are two feature fusion types: addition (Add) and concatenation (Concat) options for Eq. ( 14). In Table 7, we try both addition and concatenation, and the experimental results demonstrate that the concatenation type is better performance." }, { "figure_ref": [], "heading": "Influence of the Number of Transformation Basis", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "The number of transformation basis of 2D DCT can be denoted as N in Sec. 3.3. To investigate the performance of using different N , various experiments are conducted and the experimental results are shown in Table 8. When setting N to 1, the (u, v) only is (0, 0), which indicates that the features are decomposed to low-frequency components and miss high-frequency forgery traces. Thus, its mIoU is 0.88% lower than N = 4. Note that N = 4 means the (u, v) is (0, 0), (0, 1), (1, 0), and (1, 1), which decomposes the more frequency components including low-and highfrequency. We also notice that performance drops to 76.70 if we use N = 16. This is primarily due to the increased difficulty of predicting accurate coarse segmentation outcomes for multi-frequency features, resulting in inadequate class-level representations when N is too large. Therefore, we adopt N = 4 for the other experiments." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel Multi-spectral Class Centers Network (MSCCNet) to facilitate the acquisition of more generalizable and semantic-agnostic features for improved face manipulation localization outcomes. To avoid reliance on semantic objective information, we employ the multispectral class centers (MSCC) module to compute different frequency class-level contexts and weighted attention, which enables the refinement of deep semantic features. The Multi-level Feature Aggregation (MFA) module is integrated to fuse low-level forgery-specific textures. Our extensive experiments demonstrate the superior localization ability of MSCCNet on comprehensive benchmarks introduced in this paper.\nHowever, we remain cognizant of certain limitations that exist in this paper. First, it must be acknowledged that the pixel-level annotation method proposed in this work may not generate absolute and unequivocal tampering mask labels. Nevertheless, none of the current existing single-face manipulation datasets offer definitive and ground-truth tampering mask labels. This makes the masks generated based on pixel-level disparities between real-fake-image pairs the most suitable approximation of the ground-truth labels. Our approach and benchmark can be readily utilized in the event that more precisely-formed forged region mask labels come to fruition. Besides, the proposed MSCCNet makes no attempt to improve classification performance, since recent methodologies for single-face classification have been extensively studied. Moving forward, we plan to investigate and optimize both the classification and location branches of our approach in a unified fashion." } ]
As Deepfake content continues to proliferate on the internet, advancing face manipulation forensics has become a pressing issue. To combat this emerging threat, previous methods mainly focus on studying how to distinguish authentic and manipulated face images. Although impressive, image-level classification lacks explainability and is limited to certain specific application scenarios, which spawns recent research on pixel-level prediction for face manipulation forensics. However, existing forgery localization methods suffer from imprecise and inconsistent pixel-level annotations. To alleviate these problems, this paper first re-constructs the FaceForensics++ dataset by introducing pixel-level annotations, then establishes an comprehensive benchmark for localizing tampered regions. Besides, a novel Multi-Spectral Class Center Network (MSCCNet) is proposed for face manipulation detection and localization. Specifically, inspired by the power of frequency-related forgery traces, we design a Multi-Spectral Class Center (MSCC) module to learn more generalizable and semanticagnostic features. Based on the features of different frequency bands, the MSCC module collects multi-spectral class centers and computes pixel-to-class relations. Applying multi-spectral class-level representations suppresses the semantic information of the visual concepts which is insensitive to manipulated regions of forgery images. Furthermore, we propose a Multi-level Features Aggregation (MFA) module to employ more low-level forgery artifacts and structural textures. Experimental results quantitatively and qualitatively demonstrate the effectiveness and superiority of the proposed MSCCNet on comprehensive localization benchmarks. We expect this work to inspire more studies on pixel-level face manipulation localization. The annotations and codes are available. DF FF FSh FS NT Real Fake MG1 MG2 Ours Figure 1. The different pixel-level annotation methods for Face-Forensics++ (FF++) [51]. DF [13], FF [59], FSh [32], FS [17], and NT [58] rows are five different Deepfake technologies. The Real and Fake columns depict authentic and corresponding manipulated faces, respectively. In contrast, the MG1 column exhibits dispersed points, whereas the MG2 column contains numerous background regions. This paper proposes an annotation method (Ours column) that yields more precise and comprehensive masks of the tampered regions.
Multi-spectral Class Center Network for Face Manipulation Detection and Localization
[ { "figure_caption": "Figure 2 .2Figure 2. Detailed architecture of the proposed MSCCNet. The overall network structure is shown in (a), which consists of a backbone network, a classification branch, and a localization branch. (b) shows the scheme of the forgery-related low-level texture features aggregation. (c) illustrates the process of multi-spectral class centers and different frequency attention calculations. They are solely dedicated to enhancing the capabilities of the localization branch.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Pixel-level annotation procedure of FF++ [51]. The symbol * is a multiplication operation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Visualization mask predictions of benchmark methods and our MSCCNet. The examples are randomly selected from the C40 test set of FF++ [51]. DF[13], FF[59], FSh[32], FS[17], and NT [58] rows are five different face manipulation technologies. The YT (YouTube) row is the original face image. Column Annotation indicates the proposed pixel-level manipulation region mask in this paper.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 4. Visualization mask predictions of benchmark methods and our MSCCNet. The examples are randomly selected from the C40 test set of FF++ [51]. DF[13], FF[59], FSh[32], FS[17], and NT [58] rows are five different face manipulation technologies. The YT (YouTube) row is the original face image. Column Annotation indicates the proposed pixel-level manipulation region mask in this paper.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Generalization visualization of benchmark methods and our MSCCNet. The examples are randomly selected from the unseen DEFACTO [40] datasets. The Groundtruth columns are genuine tampered regions preserved during the data manipulation process.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4The predictions of FFD[12] suffer from being small and coarse due to the limited resolution of the attention map. Simi-lar issues arise with SLADD [7], which also relies on lowresolution feature maps. HPFCN[31] proves inadequate for advanced face manipulation images, thus failing to accurately predict tampered regions. The detrimental effects of Multi-task [45] and M2TR [61], which excessively prioritize objective semantic features, are clearly evident in Figure 4. For example, NT [58] is local forgery technology,", "figure_data": "", "figure_id": "fig_5", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Details of the statistical quantity of the pixel-level FF++ [51] datasets. It includes both fake and corresponding real face images for each type of manipulation. The symbol ⋆ denotes the removal of duplicate real face images during the validation and testing phases. All Types 143, 724 16, 922 ⋆ 16, 929 ⋆ 177, 575", "figure_data": "TypesTrain Set Valid Set Test Set All SetsDF [13]28, 7565, 5605, 56039, 876FF [59]28, 7245, 5225, 56039, 806FS [17]28, 7605, 5605, 56039, 880FSh [32]28, 7605, 5605, 56039, 880NT [58]28, 7245, 5225, 56039, 806", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Intra-dataset results for face manipulation localization and detection on the FF++ [51] datasets. The C40 and C23 indicate different compression levels.", "figure_data": "C40C23MethodsReferencesImage-levelPixel-levelImage-levelPixel-levelACC AUCF1mIoU ACC AUCF1mIoUHPFCN [31]ICCV 201982.05 69.08 60.53 48.37 86.60 88.69 69.12 55.91Muilt-task [45]BTAS 201971.09 74.05 74.91 61.86 88.08 93.41 81.88 70.39FFD [12]CVPR 202081.65 80.05 61.24 48.84 90.94 94.54 72.63 59.27M2TR [61]ICMR 202286.18 86.34 75.33 62.02 93.44 97.18 85.13 74.78SLADD [7]CVPR 202286.25 85.53 70.95 57.86 91.12 97.23 79.96 67.87MVSS [10]TPAMI 2022 85.08 81.99 82.34 70.82 95.30 98.71 88.79 80.20CAT-Net [30]IJCV 202285.86 85.77 84.89 74.40 96.14 98.83 89.18 80.86HiFi-Net [22]CVPR 202372.77 80.28 76.66 63.17 89.46 97.35 84.81 74.28MSCCNet (ours) -88.07 87.61 86.82 77.22 97.21 98.94 90.71 83.29", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Generalization to unseen manipulations on the FF++ C40 [51] dataset, which consists of five manipulation methods. We train on four methods and test on the other one method. The italicized numbers indicate the average of the five different generalization results. The F1 and mIoU in the upper table are pixel-level results, while the ACC and AUC in the lower table are image-level results. .11 62.07 50.71 63.52 53.03 62.55 49.75 55.36 45.61 62.82 51.44 CAT-Net [30] 71.13 58.32 63.92 51.51 64.55 53.07 63.76 51.40 56.89 47.19 64.05 52.30 HiFi-Net [22] 52.41 44.94 61.35 50.15 60.74 49.83 52.64 44.12 55.26 45.97 56.48 47.00 MSCCNet (ours) 75.28 63.20 64.23 52.21 65.15 54.03 63.81 51.58 57.79 47.37 65.25 53.68 .14 61.78 65.84 59.69 63.84 60.20 65.03 57.09 59.69 60.51 65.91 MVSS [10] 69.33 77.33 59.23 63.14 58.38 62.83 58.71 62.56 55.18 58.86 55.18 64.94 CAT-Net [30] 68.09 74.86 60.61 64.26 58.65 61.74 59.37 64.27 58.38 63.53 61.02 65.73 HiFi-Net [22] 59.46 72.48 57.41 67.00 50.04 53.79 57.16 63.62 57.59 61.01 56.33 63.58", "figure_data": "MethodsDeepfakes F1 mIoUFace2Face F1 mIoUFaceSwap F1 mIoUFaceShifter F1 mIoUNeuralTextures F1 mIoUAverage F1 mIoUHPFCN [31]66.09 55.78 56.79 47.06 63.47 53.69 57.16 46.61 53.31 44.54 59.53 49.54Multi-task [45]72.74 61.17 60.77 50.04 57.25 48.54 58.21 47.75 53.42 44.88 60.48 50.48FFD [12]67.77 56.54 46.24 41.06 53.24 47.50 61.31 49.69 54.50 45.13 56.61 47.98M2TR [61]73.15 60.83 62.73 51.17 64.21 52.89 54.02 44.91 57.58 47.11 62.34 51.38SLADD [7]74.79 63.58 59.91 49.31 63.08 53.71 58.82 48.11 55.80 46.12 62.48 52.17MVSS [10] 70.61 58Methods Deepfakes ACC AUC ACC AUC ACC AUC ACC AUC ACC Face2Face FaceSwap FaceShifter NeuralTextures AUCAverage ACC AUCHPFCN [31]58.88 68.52 55.63 57.80 53.97 55.12 58.31 61.55 54.95 57.62 56.35 60.12Multi-task [45]66.35 73.32 56.22 59.12 50.05 53.40 57.57 63.77 55.86 58.60 57.21 61.64FFD [12]65.76 71.90 63.20 68.27 53.51 56.63 58.53 63.89 56.37 59.25 59.47 63.99M2TR [61]66.13 75.38 61.06 65.86 56.73 60.03 57.01 61.65 57.07 60.83 59.59 64.75SLADD [7] 63.81 75MSCCNet(ours) 69.66 80.50 61.98 67.83 60.13 63.75 60.16 64.73 58.44 62.45 62.07 67.85", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Extend experiment on image forgery datasets. The model is trained on the CASIAv2[15] dataset while tested on the CASIAv1[14] and COVER [64] datasets. The default decision threshold of 0.5 is used for all models, following MVSS[10].", "figure_data": "MethodsPixel-level F1 CASIAv1 COVER CASIAv1 COVER Image-level AUCManTra-Net [66]15.528.614.154.3CR-CNN [68]40.529.176.654.6GSR-Net [78]38.728.550.245.6MVSS [10]45.245.383.957.3MSCCNet(ours)49.649.285.858.6while Multi-task [45] and M2TR [61] predict the whole faceobject regions. In the case of real face images (YT row),where the facial area is the meaningless object, MVSS [10],CAT-Net [30], and HiFi-Net [22] exhibit localization errors.", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Analysis of different modules of the proposed MSCCNet.", "figure_data": "Base. MFA MSCC L msccImage-level ACC AUCPixel-level F1 mIoU✓---87.49 86.67 83.79 72.84✓✓--87.29 86.68 83.98 73.11✓✓✓-87.38 86.99 85.82 75.76✓✓✓✓88.07 87.61 86.82 77.22", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Analysis of the proposed MSCC module.", "figure_data": "GCN DCT Add ConcatImage-level ACC AUCPixel-level F1 mIoU✓✓-✓88.07 87.61 86.82 77.22-✓-✓87.60 87.17 86.41 76.62✓--✓87.61 87.43 85.94 75.92✓✓✓-88.03 87.10 86.44 76.66", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Analysis of the number of transformation basis of the MSCC module.", "figure_data": "Number ofImage-levelPixel-levelTransformation BasisACC AUCF1mIoU187.68 86.83 86.23 76.34488.07 87.61 86.82 77.221688.06 87.44 86.46 76.70", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" } ]
Changtao Miao; Qi Chu; Zhentao Tan; Zhenchao Jin; Wanyi Zhuang; Yue Wu; Bin Liu; Honggang Hu; Nenghai Yu
[ { "authors": "Darius Afchar; Vincent Nozick; Junichi Yamagishi; Isao Echizen", "journal": "IEEE", "ref_id": "b0", "title": "Mesonet: a compact facial video forgery detection network", "year": "2018" }, { "authors": "Nasir Ahmed; T Natarajan; Kamisetty R Rao", "journal": "IEEE transactions on Computers", "ref_id": "b1", "title": "Discrete cosine transform", "year": "1974" }, { "authors": "H Jawadul; Bappy; K Amit; Jason Roy-Chowdhury; Lakshmanan Bunk; B S Nataraj; Manjunath", "journal": "", "ref_id": "b2", "title": "Exploiting spatial structure for localizing manipulated image regions", "year": "2017" }, { "authors": "Belhassen Bayar; Matthew C Stamm", "journal": "", "ref_id": "b3", "title": "A deep learning approach to universal image manipulation detection using a new convolutional layer", "year": "2016" }, { "authors": "Belhassen Bayar; Matthew C Stamm", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b4", "title": "Constrained convolutional neural networks: A new approach towards general purpose image manipulation detection", "year": "2018" }, { "authors": "Tiago Carvalho; Fabio A Faria; Helio Pedrini; Ricardo Da S Torres; Anderson Rocha", "journal": "IEEE transactions on information forensics and security", "ref_id": "b5", "title": "Illuminant-based transformed spaces for image forensics", "year": "2015" }, { "authors": "Liang Chen; Yong Zhang; Yibing Song; Lingqiao Liu; Jue Wang", "journal": "", "ref_id": "b6", "title": "Self-supervised learning of adversarial example: Towards good generalizations for deepfake detection", "year": "2022" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b7", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Taiping Shen Chen; Yang Yao; Shouhong Chen; Jilin Ding; Rongrong Li; Ji", "journal": "", "ref_id": "b8", "title": "Local relation learning for face forgery detection", "year": "2007" }, { "authors": "Xinru Chen; Chengbo Dong; Jiaqi Ji; Juan Cao; Xirong Li", "journal": "", "ref_id": "b9", "title": "Image manipulation detection by multi-view multi-scale supervision", "year": "2021" }, { "authors": "Davide Cozzolino; Diego Gragnaniello; Luisa Verdoliva", "journal": "IEEE", "ref_id": "b10", "title": "Image forgery localization through the fusion of camerabased, feature-based and pixel-based techniques", "year": "2014" }, { "authors": "Hao Dang; Feng Liu; Joel Stehouwer; Xiaoming Liu; Anil K Jain", "journal": "", "ref_id": "b11", "title": "On the detection of digital face manipulation", "year": "2020" }, { "authors": " Deepfakes", "journal": "", "ref_id": "b12", "title": "", "year": "2019" }, { "authors": "J Dong; W Wang; T Tan", "journal": "", "ref_id": "b13", "title": "Casia image tampering detection evaluation database", "year": "2010" }, { "authors": "J Dong; W Wang; T Tan", "journal": "IEEE", "ref_id": "b14", "title": "Casia image tampering detection evaluation database", "year": "2013" }, { "authors": "Ricard Durall; Margret Keuper; Franz-Josef Pfreundt; Janis Keuper", "journal": "", "ref_id": "b15", "title": "Unmasking deepfakes with simple features", "year": "2019" }, { "authors": " Faceswap", "journal": "", "ref_id": "b16", "title": "", "year": "2019" }, { "authors": "Pasquale Ferrara; Tiziano Bianchi; Alessia De Rosa; Alessandro Piva", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b17", "title": "Image forgery localization via fine-grained analysis of cfa artifacts", "year": "2012" }, { "authors": "Jessica Fridrich; Jan Kodovsky", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b18", "title": "Rich models for steganalysis of digital images", "year": "2012" }, { "authors": "Qiqi Gu; Shen Chen; Taiping Yao; Yang Chen; Shouhong Ding; Ran Yi", "journal": "", "ref_id": "b19", "title": "Exploiting fine-grained face forgery clues via progressive enhancement learning", "year": "2021" }, { "authors": "Xiao Guo; Xiaohong Liu; Zhiyuan Ren; Steven Grosz; Iacopo Masi; Xiaoming Liu", "journal": "", "ref_id": "b20", "title": "Hierarchical fine-grained image forgery detection and localization", "year": "2023" }, { "authors": "Jing Hao; Zhixin Zhang; Shicai Yang; Di Xie; Shiliang Pu", "journal": "", "ref_id": "b21", "title": "Transforensics: Image forgery localization with dense self-attention", "year": "2021" }, { "authors": "X Kaiming He; Shaoqing Zhang; Jian Ren; Sun", "journal": "", "ref_id": "b22", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "Yihao Huang; Felix Juefei-Xu; Run Wang; Xiaofei Xie; L Ma; Jianwen Li; Weikai Miao; Yang Liu; Geguang Pu", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b23", "title": "Fakelocator: Robust localization of gan-based face manipulations", "year": "2020" }, { "authors": "Gengyun Jia; Meisong Zheng; Chuanrui Hu; Xin Ma; Yuting Xu; Luoqi Liu; Yafeng Deng; Ran He", "journal": "IEEE Transactions on Biometrics, Behavior, and Identity Science", "ref_id": "b24", "title": "Inconsistencyaware wavelet dual-branch network for face forgery detection", "year": "2021" }, { "authors": "Alexander Kirillov; Ross B Girshick; Kaiming He; Piotr Dollár", "journal": "", "ref_id": "b25", "title": "Panoptic feature pyramid networks", "year": "2019" }, { "authors": "Chenqi Kong; Baoliang Chen; Haoliang Li; Shiqi Wang; Anderson Rocha; Sam Kwong", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b26", "title": "Detect and locate: Exposing face manipulation by semantic-and noise-level telltales", "year": "2022" }, { "authors": "Pavel Korshunov; Sébastien Marcel", "journal": "", "ref_id": "b27", "title": "Deepfakes: a new threat to face recognition? assessment and detection", "year": "2018" }, { "authors": "Myung-Joon Kwon; Seung-Hun Nam; In-Jae Yu; Heung-Kyu Lee; Changick Kim", "journal": "International Journal of Computer Vision", "ref_id": "b28", "title": "Learning jpeg compression artifacts for image manipulation detection and localization", "year": "2022" }, { "authors": "Haodong Li; Jiwu Huang", "journal": "", "ref_id": "b29", "title": "Localization of deep inpainting using high-pass fully convolutional network", "year": "2019" }, { "authors": "Lingzhi Li; Jianmin Bao; Hao Yang; Dong Chen; Fang Wen", "journal": "", "ref_id": "b30", "title": "Faceshifter: Towards high fidelity and occlusion aware face swapping", "year": "2019" }, { "authors": "Lingzhi Li; Jianmin Bao; Ting Zhang; Hao Yang; Dong Chen; Fang Wen; Baining Guo", "journal": "", "ref_id": "b31", "title": "Face x-ray for more general face forgery detection", "year": "2020" }, { "authors": "Xiaodan Li; Yining Lang; Yuefeng Chen; Xiaofeng Mao; Yuan He; Shuhui Wang; Hui Xue; Quan Lu", "journal": "", "ref_id": "b32", "title": "Sharp multiple instance learning for deepfake video detection", "year": "2020" }, { "authors": "Yuezun Li; Siwei Lyu", "journal": "", "ref_id": "b33", "title": "Exposing deepfake videos by detecting face warping artifacts", "year": "2018" }, { "authors": "Yuezun Li; Xin Yang; Pu Sun; Hongang Qi; Siwei Lyu", "journal": "", "ref_id": "b34", "title": "Celeb-df: A large-scale challenging dataset for deepfake forensics", "year": "2020" }, { "authors": "Honggu Liu; Xiaodan Li; Wenbo Zhou; Yuefeng Chen; Yuan He; Hui Xue; Weiming Zhang; Nenghai Yu", "journal": "", "ref_id": "b35", "title": "Spatialphase shallow learning: rethinking face forgery detection in frequency domain", "year": "2021" }, { "authors": "Sun'ao Liu; Hongtao Xie; Hai Xu; Yongdong Zhang; Qi Tian", "journal": "", "ref_id": "b36", "title": "Partial class activation attention for semantic segmentation", "year": "2022" }, { "authors": "Yuchen Luo; Yong Zhang; Junchi Yan; Wei Liu", "journal": "", "ref_id": "b37", "title": "Generalizing face forgery detection with high-frequency features", "year": "2021" }, { "authors": "Gaël Mahfoudi; Badr Tajini; Florent Retraint; Frederic Morain-Nicolier; Jean Luc Dugelay; Marc", "journal": "IEEE", "ref_id": "b38", "title": "Defacto: Image and face manipulation dataset", "year": "2019" }, { "authors": "Iacopo Masi; Aditya Killekar; Marian Royston; Shenoy Mascarenhas; Wael Pratik Gurudatt; Abdalmageed", "journal": "Springer", "ref_id": "b39", "title": "Twobranch recurrent network for isolating deepfakes in videos", "year": "2020" }, { "authors": "Changtao Miao; Qi Chu; Weihai Li; Suichan Li; Zhentao Tan; Wanyi Zhuang; Nenghai Yu", "journal": "IEEE Transactions on Biometrics, Behavior, and Identity Science", "ref_id": "b40", "title": "Learning forgery region-aware and id-independent features for face manipulation detection", "year": "2022" }, { "authors": "Changtao Miao; Zichang Tan; Qi Chu; Huan Liu; Honggang Hu; Nenghai Yu", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b41", "title": "F 2 trans: High-frequency fine-grained transformer for face forgery detection", "year": "2023" }, { "authors": "Changtao Miao; Zichang Tan; Qi Chu; Nenghai Yu; Guodong Guo", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b42", "title": "Hierarchical frequency-assisted interactive networks for face manipulation detection", "year": "2022" }, { "authors": "Fuming Huy H Nguyen; Junichi Fang; Isao Yamagishi; Echizen", "journal": "IEEE", "ref_id": "b43", "title": "Multi-task learning for detecting and segmenting manipulated facial images and videos", "year": "2019" }, { "authors": "Xunyu Pan; Xing Zhang; Siwei Lyu", "journal": "IEEE", "ref_id": "b44", "title": "Exposing image splicing with inconsistent local noise variances", "year": "2012" }, { "authors": "Bo Peng; Wei Wang; Jing Dong; Tieniu Tan", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b45", "title": "Optimized 3d lighting environment estimation for image forgery detection", "year": "2016" }, { "authors": "Yuyang Qian; Guojun Yin; Lu Sheng; Zixuan Chen; Jing Shao", "journal": "Springer", "ref_id": "b46", "title": "Thinking in frequency: Face forgery detection by mining frequency-aware clues", "year": "2020" }, { "authors": "Zequn Qin; Pengyi Zhang; Fei Wu; Xi Li", "journal": "IEEE", "ref_id": "b47", "title": "Fcanet: Frequency channel attention networks", "year": "2021" }, { "authors": "Nicolas Rahmouni; Vincent Nozick; Junichi Yamagishi; Isao Echizen", "journal": "IEEE Workshop on Information Forensics and Security (WIFS)", "ref_id": "b48", "title": "Distinguishing computer graphics from natural images using convolution neural networks", "year": "2017" }, { "authors": "Andreas Rossler; Davide Cozzolino; Luisa Verdoliva; Christian Riess; Justus Thies; Matthias Nießner", "journal": "", "ref_id": "b49", "title": "Faceforen-sics++: Learning to detect manipulated facial images", "year": "2019" }, { "authors": "Evan Shelhamer; Jonathan Long; Trevor Darrell", "journal": "", "ref_id": "b50", "title": "Fully convolutional networks for semantic segmentation", "year": "2014" }, { "authors": "Kaede Shiohara; T Yamasaki", "journal": "", "ref_id": "b51", "title": "Detecting deepfakes with self-blended images", "year": "2022" }, { "authors": "Luchuan Song; Zheng Fang; Xiaodan Li; Xiaoyi Dong; Zhenchao Jin; Yuefeng Chen; Siwei Lyu", "journal": "Springer", "ref_id": "b52", "title": "Adaptive face forgery detection in cross domain", "year": "2022" }, { "authors": "Luchuan Song; Xiaodan Li; Zheng Fang; Zhenchao Jin; Yue-Feng Chen; Chenliang Xu", "journal": "", "ref_id": "b53", "title": "Face forgery detection via symmetric transformer", "year": "2022" }, { "authors": "Kritaphat Songsri-In; Stefanos Zafeiriou", "journal": "", "ref_id": "b54", "title": "Complement face forensic detection and localization with faciallandmarks", "year": "2019" }, { "authors": "Zichang Tan; Zhichao Yang; Changtao Miao; Guodong Guo", "journal": "IEEE Signal Processing Letters", "ref_id": "b55", "title": "Transformer-based feature compensation and aggregation for deepfake detection", "year": "2022" }, { "authors": "Justus Thies; Michael Zollhöfer; Matthias Nießner", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b56", "title": "Deferred neural rendering: Image synthesis using neural textures", "year": "2019" }, { "authors": "Justus Thies; Michael Zollhofer; Marc Stamminger; Christian Theobalt; Matthias Nießner", "journal": "", "ref_id": "b57", "title": "Face2face: Real-time face capture and reenactment of rgb videos", "year": "2016" }, { "authors": "Chengrui Wang; Weihong Deng", "journal": "", "ref_id": "b58", "title": "Representative forgery mining for fake face detection", "year": "2021" }, { "authors": "Junke Wang; Zuxuan Wu; Jingjing Chen; Yu-Gang Jiang", "journal": "", "ref_id": "b59", "title": "M2tr: Multi-modal multi-scale transformers for deepfake detection", "year": "2021" }, { "authors": "Run Wang; Felix Juefei-Xu; Lei Ma; Xiaofei Xie; Yihao Huang; Jian Wang; Yang Liu", "journal": "", "ref_id": "b60", "title": "Fakespotter: A simple yet robust baseline for spotting ai-synthesized fake faces", "year": "2020" }, { "authors": "Zhou Wang; Alan Conrad Bovik; Hamid R Sheikh; Eero P Simoncelli", "journal": "IEEE Transactions on Image Processing", "ref_id": "b61", "title": "Image quality assessment: from error visibility to structural similarity", "year": "2004" }, { "authors": "B Wen; Y Zhu; R Subramanian; T T Ng; S Winkler", "journal": "IEEE", "ref_id": "b62", "title": "Coverage-a novel database for copy-move forgery detection", "year": "2016" }, { "authors": "Haiwei Wu; Jiantao Zhou; Shile Zhang; Jinyu Tian", "journal": "", "ref_id": "b63", "title": "Exploring spatial-temporal features for deepfake detection and localization", "year": "2022" }, { "authors": "Yue Wu; Wael Abdalmageed; P Natarajan", "journal": "", "ref_id": "b64", "title": "Mantra-net: Manipulation tracing network for detection and localization of image forgeries with anomalous features", "year": "2019" }, { "authors": "Tete Xiao; Yingcheng Liu; Bolei Zhou; Yuning Jiang; Jian Sun", "journal": "", "ref_id": "b65", "title": "Unified perceptual parsing for scene understanding", "year": "2018" }, { "authors": "Chao Yang; Huizhou Li; Fangting Lin; Bin Jiang; Hao Zhao", "journal": "IEEE", "ref_id": "b66", "title": "Constrained r-cnn: A general image manipulation detection model", "year": "2020" }, { "authors": "Xin Yang; Yuezun Li; Siwei Lyu", "journal": "IEEE", "ref_id": "b67", "title": "Exposing deep fakes using inconsistent head poses", "year": "2019" }, { "authors": "Peipeng Yu; Jianwei Fei; Zhihua Xia; Zhili Zhou; Jian Weng", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b68", "title": "Improving generalization by commonality learning in face forgery detection", "year": "2022" }, { "authors": "Yuhui Yuan; Xilin Chen; Jingdong Wang", "journal": "", "ref_id": "b69", "title": "Objectcontextual representations for semantic segmentation", "year": "2019" }, { "authors": "Fan Zhang; Yanqin Chen; Zhihang Li; Zhibin Hong; Jingtuo Liu; Feifei Ma; Junyu Han; Errui Ding", "journal": "", "ref_id": "b70", "title": "Acfnet: Attentional class feature network for semantic segmentation", "year": "2019" }, { "authors": "Richard Zhang", "journal": "PMLR", "ref_id": "b71", "title": "Making convolutional networks shiftinvariant again", "year": "2019" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b72", "title": "Pyramid scene parsing network", "year": "2016" }, { "authors": "Hanqing Zhao; Wenbo Zhou; Dongdong Chen; Tianyi Wei; Weiming Zhang; Nenghai Yu", "journal": "", "ref_id": "b73", "title": "Multi-attentional deepfake detection", "year": "2021" }, { "authors": "Tianchen Zhao; Xiang Xu; Mingze Xu; Hui Ding; Yuanjun Xiong; Wei Xia", "journal": "", "ref_id": "b74", "title": "Learning self-consistency for deepfake detection", "year": "2007" }, { "authors": "Peng Zhou; Bor-Chun Chen; Xintong Han; Mahyar Najibi; Larry S Davis", "journal": "", "ref_id": "b75", "title": "Generate, segment and replace: Towards generic manipulation segmentation", "year": "2018" }, { "authors": "Peng Zhou; Bor-Chun Chen; Xintong Han; Mahyar Najibi; Abhinav Shrivastava; Ser-Nam Lim; Larry Davis", "journal": "", "ref_id": "b76", "title": "Generate, segment, and refine: Towards generic manipulation segmentation", "year": "2020" }, { "authors": "Wanyi Zhuang; Qi Chu; Zhentao Tan; Qiankun Liu; Haojie Yuan; Changtao Miao; Zixiang Luo; Nenghai Yu", "journal": "Springer", "ref_id": "b77", "title": "Uia-vit: Unsupervised inconsistency-aware method based on vision transformer for face forgery detection", "year": "2022" }, { "authors": "Wanyi Zhuang; Qi Chu; Haojie Yuan; Changtao Miao; Bin Liu; Nenghai Yu", "journal": "IEEE", "ref_id": "b78", "title": "Towards intrinsic common discriminative features learning for face forgery detection using adversarial learning", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 135.33, 547.51, 151.03, 9.82 ], "formula_id": "formula_0", "formula_text": "F M = M (F A ),(1)" }, { "formula_coordinates": [ 4, 105.66, 619.96, 180.7, 9.65 ], "formula_id": "formula_1", "formula_text": "P 1 = U psample 8× (C 1 (F M )),(2)" }, { "formula_coordinates": [ 4, 139.27, 704.03, 147.09, 9.82 ], "formula_id": "formula_2", "formula_text": "P 2 = C 2 (F 3 ),(3)" }, { "formula_coordinates": [ 4, 334.56, 345.92, 210.55, 14.34 ], "formula_id": "formula_3", "formula_text": "F ′ 1 = N 1 (F 1 ), F ′ 2 = N 2 (F 2 ), F ′ 3 = N 3 (F 3 ),(4)" }, { "formula_coordinates": [ 4, 337.24, 367.98, 101.13, 13.85 ], "formula_id": "formula_4", "formula_text": "F ′ 1 , F ′ 2 , F ′ 3 ∈ R C×h×w ." }, { "formula_coordinates": [ 4, 388.51, 415.8, 42.46, 13.85 ], "formula_id": "formula_5", "formula_text": "F ′ 1 , F ′ 2 , F ′" }, { "formula_coordinates": [ 4, 359.49, 449.91, 135, 14.34 ], "formula_id": "formula_6", "formula_text": "F A = Conv(Cat([F ′ 1 , F ′ 2 , F ′ 3 ]))." }, { "formula_coordinates": [ 4, 537.37, 453.95, 7.74, 8.64 ], "formula_id": "formula_7", "formula_text": ")5" }, { "formula_coordinates": [ 5, 56.64, 432.12, 229.72, 57.21 ], "formula_id": "formula_8", "formula_text": "D u,v = H-1 i=0 W -1 j=0 d i,j cos( πu U (i + 1 2 )) cos( πv V (j + 1 2 )) s.t. u ∈ {0, 1, • • • , U -1}, v ∈ {0, 1, • • • , V -1},(6)" }, { "formula_coordinates": [ 5, 50.11, 528.02, 236.25, 20.86 ], "formula_id": "formula_9", "formula_text": "D n (•), in which n ∈ {0, 1, • • • , N -1}" }, { "formula_coordinates": [ 5, 50.11, 586.39, 236.25, 25.12 ], "formula_id": "formula_10", "formula_text": "F n A ∈ R c×h×w is defined f n i ∈ R h×w , i ∈ {0, 1, • • • , c-1} and c = C N . Then, every f n i is transformed through D n (•)" }, { "formula_coordinates": [ 5, 66.2, 628.77, 220.16, 12.69 ], "formula_id": "formula_11", "formula_text": "F n A = Cat([D n (f n 0 ), D n (f n 1 ), • • • , D n (f n c-1 )]),(7)" }, { "formula_coordinates": [ 5, 97.68, 701.97, 188.68, 13.31 ], "formula_id": "formula_12", "formula_text": "F A = Cat([ F 0 A , F 1 A , • • • , F N -1 A ]),(8)" }, { "formula_coordinates": [ 5, 396, 438.59, 149.11, 9.82 ], "formula_id": "formula_13", "formula_text": "P A = C 3 ( F A ),(9)" }, { "formula_coordinates": [ 5, 385.92, 532.86, 159.2, 12.69 ], "formula_id": "formula_14", "formula_text": "F class = P A ⊗ F ⊤ A .(10)" }, { "formula_coordinates": [ 5, 385.57, 664.34, 159.54, 14.34 ], "formula_id": "formula_15", "formula_text": "F ′ class = G (F class ),(11)" }, { "formula_coordinates": [ 6, 97.66, 343.53, 188.7, 14.34 ], "formula_id": "formula_16", "formula_text": "W = Sof tmax(F A ⊗ (F ′ class ) ⊤ ),(12)" }, { "formula_coordinates": [ 6, 129.16, 414.73, 157.2, 14.34 ], "formula_id": "formula_17", "formula_text": "F ′ A = W ⊗ F ′ class .(13)" }, { "formula_coordinates": [ 6, 105.57, 485.94, 180.79, 14.34 ], "formula_id": "formula_18", "formula_text": "F M = Conv(Cat([F A , F ′ A ])).(14)" }, { "formula_coordinates": [ 6, 389.75, 352.88, 155.36, 14.34 ], "formula_id": "formula_19", "formula_text": "P ′ A = Conv(P A ),(15)" }, { "formula_coordinates": [ 6, 372.54, 433.9, 168.42, 9.65 ], "formula_id": "formula_20", "formula_text": "L = L cls + L seg + L mscc . (16" }, { "formula_coordinates": [ 6, 540.96, 434.22, 4.15, 8.64 ], "formula_id": "formula_21", "formula_text": ")" } ]
2023-05-18
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b1", "b2", "b5", "b6", "b7", "b8", "b5", "b9", "b1", "b2", "b11", "b12" ], "table_ref": [], "text": "Electronic health records (EHR), e.g., radiology images, lab and test results, and patient demographics, are often used in clinical diagnosis. For instance, to diagnose Alzheimer's Disease (AD), apart from brain imaging, physicians also use physical and neurological exams and diagnostic tests, with these test results presented in the text form. In past decades, researchers have gradually collected a large number of EHRs, e.g., ADNI Petersen et al. [2010], NACC Beekly et al. [2007], OASIS Marcus et al. [2007], for studying AD. However, learning how to make diagnoses based on these EHRs, especially how to fuse these medical data from different resources and in different forms, e.g., images and texts, is still a challenging task in computer-aided diagnosis (CAD).\nRecently, large vision language pre-training (VLP) models, e.g., CLIP Radford et al. [2021], BLIP Li et al. [2022], BLIP-2 Li et al. [2023], have achieved great success in many downstream computer vision applications, such as classification Bao et al. [2022], segmentation Xu et al. [2021]. These VLP models learn multi-modal representations from large image and text datasets, by aligning their features into a common space for learning. In the medical domain, researchers propose Medical Bootstrapping Language-Image Pre-training (MedCLIP) Wang et al. [2022], which learns generic representation from large-scale medical image-text pairs. This pre-trained medical model presents its generalization to various medical tasks, especially where limited medical data or labels are available for learning. However, most existing VLP models handle the situation that texts are corresponding textual descriptions of their paired images, such as image captions or medical reports.\nIn this paper, we consider another scenario where images and texts provide complementary information, that is, texts include additional information except for medical scans in EHRs, e.g., the age, gender, and lab results of a subject, to make an informed CAD decision. Our goal is to learn a VLM that suits this CAD scenario, which has multi-model intelligence to fuse different types of medical data, e.g., 3D medical scans and texts that contain complementary information from EHRs for CAD. Here, we need to address three problems: (1) How to extend a 2D image encoder to extract features from 3D medical images? (2) How to align image and text features and learn multi-model representations? (3) How to obtain a lightweight language model for our CAD purpose? Inspired by BLIP-2 Li et al. [2023], we propose MedBLIP as shown in Fig. 1, a bootstrapping language-image pre-training model to fuse 3D medical images and texts based on a query mechanism. We first adopt a learnable patch embedding to bridge the gap between 3D medical images and a pre-trained image encoder, which greatly reduces the amount of image data required for learning. Then, we propose a MedQFormer, which contains learnable queries to allow aligning visual features with textural ones desired by a language model. Lastly, we choose BioMedLM Venigalla et al. [2022] as our basic language model and fine-tune it using the LoRA Hu et al. [2021] technique. Our CAD model MedBLIP is lightweight and trainable on a single NVIDIA RTX 3090 GPU.\nTo train and evaluate the effectiveness of our proposed MedBLIP model, we collect more than 30,000 medical image volumes from five public AD datasets, including ADNI Petersen et al. [2010], NACC Beekly et al. [2007], OASIS Marcus et al. [2007], AIBL Ellis et al. [2009], and MIRIAD Malone et al. [2013]. After pre-training on most of the images from ADNI, NACC, and OASIS datasets, we evaluate our MedBLIP on two tasks: (1) zero-shot classification, which directly applies pre-trained MedBLIP to classify unseen subjects from AIBL and MIRIAD datasets into three classes, i.e., normal controls (NC), mild cognitive impairment (MCI), and AD; and (2) zero-shot medical visual question answering (VQA), which generates an initial diagnosis for an unseen AIBL or MIRIAD subject based on input images and text descriptions and also provides some reasons for making such decision.\nOverall, our contributions of this paper are summarized below:\n• We propose a lightweight CAD system MedBLIP, which is pre-trained on electronic health records in the form of images and texts, performs zero-shot classification, and makes medical VQA. The architecture of our CAD system is general and has the potential to incorporate more modalities and extend to other diseases beyond AD. • We propose a MedQFormer module, which extracts 3D medical image features and aligns them with textural features to be fused into a language model (LM). This module provides a way to align different types of medical data into the common space of LM, which is generic and could be used in other medical applications. • To our best knowledge, we have collected the largest public dataset for studying AD. On this dataset, our MedBLIP achieves the SOTA performance on separating AD and MCI subjects from healthy controls. Besides, we directly work on raw images without any preprocessing, which makes our system easy to use in practice." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b3", "b20", "b21", "b22", "b5", "b25", "b26", "b27", "b29" ], "table_ref": [], "text": "Vision Language Pre-Training. Data collected from different modalities typically provide different views about the data, which often complement each other and provide more complete information to facilitate a holistic understanding of the data. Vision-language pre-training (VLP) aims to learn multimodal foundation models, showing improved performance on various vision-and-language tasks Radford et al. [2021]. Roughly, we can divide current VLP models into two categories when fusing multi-modal inputs: light fusion and heavy fusion.\nThe approaches in the light fusion category focus on multi-modal alignment, which facilitates text matching, retrieval, and other downstream tasks, with representative methods like CLIP Radford et al. [2021] and ALIGN Jia et al. [2021]. These methods directly align image representations with the corresponding text representations using a contrastive loss. DeCLIP Li et al. [2021a] exploits inter/intra-modality supervision to train a CLIP-like model with fewer data. On the other hand, the heavy fusion category focuses on incorporating multi-modal information with an attention mechanism to perform additional tasks. For instance, ALBEF Li et al. [2021b] proposes a contrastive alignment, which is followed by deeper fusion with a multi-modal encoder. Methods such as BLIP LLMs for Multi-Modal Understanding. Recently, using large language models (LLMs) as decoders in vision-language tasks has gained significant attention. This approach takes advantage of cross-modal transfer, which allows sharing knowledge between language and multi-modal domains.\nVisualGPT Chen et al. [2022] and Frozen Tsimpoukelli et al. [2021] have demonstrated the advantage of employing a pre-trained language model as a vision-language model decoder. Flamingo Alayrac et al. [2022] freezes a pre-trained vision encoder and language model and then fuses vision and language modalities with gated cross-attention. BLIP-2 Li et al. [2023] designs a Q-Former to align the visual features from the frozen visual encoder with large language models, like FLAN-T5 Chung et al. [2022] and OPT Zhang et al. [2022]. FROMAGe Koh et al. [2023] freezes large language models and visual encoders, and fine-tunes linear mapping layers to achieve cross-modality interactions. This method shows strong zero-shot performances on contextual image retrieval and multi-modal dialogue tasks. Built upon PaLM Chowdhery et al. [2022], PaLM-E Driess et al. [2023] employs features from sensor modalities and integrates real-world continuous sensor modalities into an LLM, thereby establishing a connection between real-world perceptions and human languages. GPT-4 OpenAI [2023] presents powerful visual understanding and reasoning abilities after pre-training on a vast collection of image-text data.\nMost recently, several domain-specific multi-modal LLMs have been developed. ChatCAD Wang et al. [2023] combines visual and linguistic information processed by various networks as inputs of large language models to develop a medical-image CAD model, which provides a condensed report and offers interactive explanations and medical recommendations. Open-ended MedVQA van Sonsbeek et al.\n[2023] employs a multi-layer perceptron (MLP) network that maps the extracted visual features from a frozen vision encoder to a set of learnable tokens, which develops an openended VQA for diagnoses and treatment decisions. Differently, our MedBLIP explores a lightweight framework that works on 3D medical scans and aligns different types of medical data for CAD." }, { "figure_ref": [], "heading": "MedBLIP", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "We design a CAD system in the form of dialogue, with application to automatic AD diagnosis. Given inputs of a brain image scan I collected from a subject and a textual description T generated from this subject's EHRs in natural language, for a question asked in natural language Q, our CAD aims to\nsequentially generate an answer A = {A 0 , A 1 , ..., A N } composed of N tokens, by conditioning on all inputs {I, T, Q}. To achieve this goal, we build a CAD model based on a large language model and find its optimal parameters θ * by maximizing the conditional log-likelihood below:\nθ * = arg max θ N i=1 log p θ (A i | I, T, Q, A i-1\n) .\n(1)" }, { "figure_ref": [ "fig_0" ], "heading": "Network Framework", "publication_ref": [ "b5", "b5", "b32", "b33" ], "table_ref": [], "text": "Our CAD model is designed as an encoder-decoder architecture, with a two-stream encoder and a language model (LM) as a decoder, as illustrated in Fig. 1. Specifically, the two-stream encoder takes inputs from two modalities, namely a vision sub-encoder for the image I and the text sub-encoder for the textual description T and the question Q. The language model is defined as a causal language transformer, which generates the answer A in an auto-regressive manner.\nVision Encoding Stream. To encode a brain image volume and fully leverage an existing large model for reducing data requirements, we employ a pre-trained 2D vision encoder to extract its 3D visual features. To make this work, we need to address two problems: (1) bridging the domain and dimension gaps between a 2D vision encoder and a 3D medical scan, and (2) aligning image features with textural ones, which allows mapping all inputs into the latent space of the LM decoder for learning multi-modal representations. Inspired by Li et al. [2023], we propose a query network based on a transformer encoder, which maps the visual features into a visual prefix\nH v = {v 1 , v 2 , • • • , v v } ∈ R v ×e\nfor the language model, where v is the length of the vision embedding sequence and e is the embedding size. Also, we have a lightweight projection, which is learnable and adapts 3D image volumes to inputs of a pre-trained image encoder. This medical query transformer (MedQFormer) tackles the above two problems and will be discussed in detail in Sect. 3.3.\nLanguage Encoding Stream. Regarding the textural description of subjects' EHRs except for image scans and the asked questions, we first utilize a standard tokenization process as in Jain [2022] to obtain a sequence of tokens, i.e., the textual description\nT = {t 1 , t 2 , • • • , t t } ∈ R t×e , the question Q = {q 1 , q 2 , • • • , q q } ∈ R q ×e\n, and the answer A = {a 1 , a 2 , • • • , a a } ∈ R a×e , where t , q , a indicate the length of the embedding sequence of the text, question, and answer, respectively. These tokens are embedded later using the embedding function provided in a pre-trained language model.\nPrompt Structure. To create a structured prompt, following current VQA methods used in language models Li et al. [2023], van Sonsbeek et al. [2023], we prepend the question and answer tokens with tokenized descriptive strings, namely in the form of question: and answer:. We choose to place the embeddings of the image and text description before the question tokens. As a result, we have the following prompt template:\np =[v 1 , v 2 , • • • , v x , t 1 , t 2 , • • • , t t , Question :W hat will this subject be diagnosed with?Answer:],(2)\nwhich is fed as input to the language model below.\nLanguage Model. Following standard language modeling systems Venigalla et al.\n[2022], we treat VQA as a process of a conditional generation of text, and we optimize the standard maximum likelihood objective during training. The language model receives the prompt sequence as input and outputs the answer A, token by token. Specifically, at each time step i, the outputs of the model are the logits, which parameterize a categorical distribution p θ (A) over the vocabulary tokens. This distribution is represented as follows:\nlog p θ (A) = la log p θ a i | v 1 , . . . v v , t 1 , . . . t t , q 1 , . . . q q , a 1 , . . . a i-1 .(3)\nThe parameters of the language model are initialized from a pre-trained model, which has been previously pre-trained on huge web-collected datasets Dodge et al. [2021], Gao et al. [2020]." }, { "figure_ref": [ "fig_1" ], "heading": "MedQFormer", "publication_ref": [ "b5" ], "table_ref": [], "text": "To bridge the gap between 3D medical images and 2D vision encoders pre-trained on natural images, inspired by BLIP-2 Li et al. [2023], we employ a query encoder to extract and align vision features. Image Feature Extraction. We first divide the input image I into a set of 3D sub-volumes {Iv i } Nv i=1 , followed by a linear projection f ϕ1 that projects 3D cubes into 1D image embeddings\n{E i = f ϕ1 (Iv i )} Nv i=1\n. With the addition of learnable position embeddings f ϕ2 , the image embeddings can be received as inputs of a standard pre-trained vision encoder to extract desired image features. Although the pre-trained vision encoder f φ has fixed parameters φ, we have learnable linear projection and position embedding to transfer a 2D vision encoder to a 3D medical domain. Hence, we have a medical vision encoder with learnable parameters ϕ 1 and ϕ 2 , which maps a volumetric image\nI into N v visual features f 1 , • • • , f Nv = {f φ (f ϕ1 (Iv i ), f ϕ2 (Iv i ))} Nv i=1 .\nAs a result, we obtain the final image embeddings\nIE = (f i , • • • , f Nv ) for each input image volume I.\nQuery Encoder. To map the visual features {f i } Nv i=1 into the common language space, we use a set of L learnable queries qry i ∈ R de , where d e is the dimension of query embeddings. These queries have a transformer structure that interacts with the image encoder for adjusting visual feature extraction and a text transformer as a textural feature extractor. As shown in Fig. 2, these learnable queries interact with each other through self-attention layers, then interact with image features through cross-attention layers. As a result, we obtain a visual prefix H v that is aligned with textural features and can be taken by a language model." }, { "figure_ref": [], "heading": "Training MedBLIP", "publication_ref": [ "b5" ], "table_ref": [], "text": "Learnable Parameters. Standard fine-tuning of a language model could hurt its generalization capability, especially if a dataset used for fine-tuning has a small size and is domain-specific as in our case. Therefore, we consider two parameter-efficient strategies that adapt the attention blocks of language models:\n• Frozen LM. The parameters of the language model are kept entirely frozen during training.\nIn this setting, only the 3D vision query network is updated through backpropagation. • Low-Rank Adaptation (LoRA). We add learnable weight matrices to the query Q w and value V w of the attention blocks in each layer of the frozen language model as W + W following Hu et al. [2021]. In this setting, the 3D vision query network function is trained together with the learnable weight matrices.\nObjective Functions. We have loss functions for MedQformer and LM modules in our MedBLIP model. As discussed in Sect. (4) Similar to BLIP-2 Li et al. [2023], we select the one that has the highest similarity with text from multiple output query embeddings to compute the ITC Loss. To supervise the LM component, we use cross entropy to compute language generation loss L LG . Hence, the final loss function is defined as:\nL total = L F A + λ LG L LG ,(5)\nwhere λ LG is a hyperparameter to balance these two terms. 4 Experiments" }, { "figure_ref": [], "heading": "Datasets and Experimental Settings", "publication_ref": [ "b0", "b1", "b2", "b11", "b12", "b34", "b5", "b9", "b32", "b33" ], "table_ref": [ "tab_1" ], "text": "We collect more than 30,000 image volumes from five public datasets for studying AD/Dementia and evaluate our CAD system MedBLIP on separating subjects with AD or mild cognitive impairment (MCI) from normal controls (NC). Table 1 reports the demographic statistics of these five datasets.\nADNI Petersen et al. [2010]. This dataset has 10,387 volumetric T1 MRI scans that went through a series of pre-processing steps, including denoising, bias field correction, skull stripping, and affine registration to the SRI24 atlas, with an image size of 138 × 176 × 138. For testing, we subject-wisely sample a subset of 200 images in each class (i.e., NC, MCI, AD), which is named ADNI-3x200.\nNACC Beekly et al. [2007]. This dataset has a large amount of raw volumetric T1 MRI scans with a variety of resolutions. We select those MRIs having 100∼256 slices in all three dimensions, resulting in 15,354 images. Unlike the ADNI dataset, we directly use the raw data; but similarly, we sample subject-wisely a NACC-3x200 dataset for testing.\nOASIS Marcus et al. [2007]. We collect 3020 volumetric T1 MRIs from OASIS 1&2. These scans went through pre-processing with denoising and skull stripping and have a size of 256 × 256 × 256.\nSince OASIS 1 only releases some clinical reports but with no diagnoses (e.g. NC, MCI or dementia), we use all images from OASIS 1 for pre-training. For testing, we sample subject-wisely an OASIS-2x200 subset from OASIS 2 to separate demented and non-demented subjects.\nAIBL Ellis et al. [2009]. This dataset has 1002 volumetric T1 MRI scans with sizes of 160 × 240 × 256, which are collected from demented, MCI, or healthy subjects. We do not use this data for training; for testing, we sample a balanced subset with 200 images each for NC, MCI, and dementia classes.\nMIRIAD Malone et al. [2013]. We collect 708 raw volumetric T1 MRI scans, which have an image size of 124 × 256 × 256. This is a binary classification dataset with two labels, i.e., demented and not-demented subjects. We sample a balanced subset with a 1:1 positive and negative ratio, resulting in 2 × 200 images for testing. No images are used for training to perform zero-shot experiments.\nAs a result, we have most images from ADNI, NACC, and OASIS datasets for pretraining and save images from AIBL and MIRIAD datasets for zero-shot testing. In total, we held 1000 subjects with 2600 samples out for evaluation. To simplify the preprocessing step, all images are first padded to a cube and then scaled to a unified size of 224 × 224 × 224 as inputs.\nImplementation Details. For the frozen image encoder, we choose state-of-the-art pre-trained ViT-G/14 from EVA-CLIP Fang et al. [2022], which is demonstrated to be effective in BLIP-2 Li et al. [2023]. For the input image with a size of 224 × 224 × 224, the patch size and the stride are both set as 32, resulting in image features with the size of 344 × 1408. For the MedQformer, we use 32 learnable queries, where each query has a dimension of 768 and the hidden layers N is set to 12.\nRegarding language models, we have three options, i.e., FLAN-T5 Chung et al. [2022], BioGPT Luo et al. [2022], and BioMedLM Venigalla et al. [2022]. FLAN-T5 is an instruction-trained model with 3B parameters trained on C4 WebText Dodge et al. [2021]. BioGPT and BioMedLM are both GPT models relying on GPT-2 architecture, pre-trained on PubMed and biomedical data from the Pile Gao et al. [2020], with a size of 1.5B and 2.7B parameters, respectively. All our models are able to fine-tune on a single NVIDIA RTX 3090 GPU. We use the AdamW optimizer with a learning rate of 5e-3. The hyperparameter λ LG is set to 1. The CDR is 0.5. The logical memory score is 2.\nQ: What will this subject be diagnosed with?\nGround Truth A: Dementia.\nOur A: Dementia." }, { "figure_ref": [], "heading": "I:", "publication_ref": [], "table_ref": [], "text": "T: 85-year-old Female. The MMSE score is 30. The CDR is 0.\nQ: What will this subject be diagnosed with?\nGround Truth A: Mild cognitive impairment.\nOur A: Mild cognitive impairment." }, { "figure_ref": [], "heading": "I:", "publication_ref": [], "table_ref": [], "text": "T: 78-year-old Female. The MMSE score is 27. The CDR is 0.5. The logical memory score is 5.\nQ: What will this subject be diagnosed with?\nGround Truth A: Healthy. Q: What will this subject be diagnosed with?\nGround Truth A: Mild cognitive impairment.\nOur A: Mild cognitive impairment." }, { "figure_ref": [], "heading": "Q: Why have you made this decision?", "publication_ref": [], "table_ref": [], "text": "Our A: CDR score is 0.5." }, { "figure_ref": [], "heading": "I: d", "publication_ref": [], "table_ref": [], "text": "T: 63-year-old Female. The MMSE score is 18.\nThe CDR is 1. The logical memory score is 5.\nQ: What will this subject be diagnosed with?\nGround Truth A: Dementia.\nOur A: Dementia." }, { "figure_ref": [], "heading": "Q:", "publication_ref": [], "table_ref": [], "text": "What is abnormal in the brain imaging?\nOur A: The brain is atrophic.\nI: e T: 71.6-year-old Male, with 16 years of education. The MMSE score is 30. The CDR score is 0. The logical memory score is 14.\nQ: What will this subject be diagnosed with?\nGround Truth A: Healthy.\nOur A: Healthy." }, { "figure_ref": [], "heading": "Q:", "publication_ref": [], "table_ref": [], "text": "What is the reason for you decision?\nOur A: The MMSE socre is higher than the CDR score and the logical memory score is higher than the Dementia score. " }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "Experimental Results", "publication_ref": [ "b34" ], "table_ref": [ "tab_2", "tab_3", "tab_4", "tab_5" ], "text": "Zero-shot Medical CAD. Table 2 reports the evaluation of our MedBLIP using different language models and settings. The language models, i.e., FLAN-T5, BioGPT, and BioMedLM, show their capability of performing monomodal medical CAD, i.e., using text descriptions only, to some extent. Among these three language models, BioMedLM performs the best, showing that it captures some dependencies between prompts and inherent knowledge when generating answers. By adding the visual modality even without fine-tuning, the performance of our model on all datasets has improved significantly. The accuracy improvement varies within [4.0%, 44.8%], and the BioGPT benefits the most from the visual input. This result indicates the necessity of using image scans in diagnosis.\nUsing the fine-tuning technique LoRA, our performance is further improved, with at least 1.3% and Figure 3 (a-c) visualizes the zero-shot CAD process on unseen subjects sampled from the AIBL dataset. Take Fig. 3(b) for example, although the text description of this subject shows no significant difference from those of healthy subjects, in brain scans the hippocampus and ventricle show the presence of abnormal atrophy. Our MedBLIP provides the correct diagnosis of MCI.\nZero-shot Medical VQA. Figure 3 (d-f) shows the zero-shot Medical Visual question answering(VQA) ability of our MedBLIP. Since our approach is generative, after a simple classificationbased question, MedBLIP provides a natural way of performing VAQ and presents the chain of thoughts. MedBLIP may also generate unsatisfactory answers to users' questions due to various reasons, including inaccurate medical knowledge from the LLM, activating the incorrect reasoning path, or not having up-to-date information about new image content.\nAblation Study. We perform ablation studies from three aspects to answer the following three questions: (1) Why use a 2D pre-trained vision encoder instead of a trainable large vision encoder? (2) Will a prompt structure make a difference in the final CAD result? and (3) Why need the ITC loss between the image and diagnosis Q&A?\n(1) Benefit of using a frozen 2D pre-trained vision encoder. To demonstrate the effectiveness of our lightweight image encoder based on the 2D pre-trained model, we take the query output embedding from MedQformer and compare it with features extracted from trainable ViT-G Fang et al. [2022] on ADNI. We add a linear classification head with the cross-entropy loss. Table 3 reports that MedQFormer achieves slightly reduced performances, i.e., 0.6% lower than ViT-G in accuracy, but with much fewer parameters (only 15.1% of ViT-G's). This lightweight module benefits downstream tasks and allows building our model on language models and training it on one GPU. We can also see that benefiting from this lightweight visual encoder, our MedBLIP outputs ViT-G by an improvement of 6.5% in the classification accuracy on ADNI.\n(2) Effect of using different prompt structures. To answer the second question above, we investigate the order of three prompting components, i.e., image and text features, the question, and the answer, and its effect on our model's performance. We treat the one with the question in the middle as the regular prompt structure and compare it to the one starting with the question. Table 4 shows that on some datasets our MedBLIP prefers the regular prompt, but this is not always the case. We conclude that the prompt strategy will not make a huge difference in the final performance of our model.\n(3) Necessity of using two ITC loss functions. Besides the regular ITC loss between image and text pairs, we have another one between image and diagnosis Q&A, as presented in Eq. 4. Table 5 demonstrates that by adding the second ITC loss function, the classification accuracy improves on all datasets. This result is consistent with our motivation of adding the ITC loss between image and diagnosis Q&A, since it enforces the learnable queries to extract image features related to CAD." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a novel CAD system MedBLIP that fuses medical multi-modal data, i.e., image and text, from EHRs and shows its capability of performing zero-shot classification and medical VQA. Our MedBLIP introduces MedQFormer, a lightweight trainable 3D vision encoder that acts as a bridge between 3D medical images and a large frozen 2D vision encoder and as a bridge between 3D medical images and language models. Moreover, MedBLIP operates with low computational costs, as it smartly combines large pre-trained vision and language models with no need of training them from scratch or a large dataset in a specific medical domain. Our experiments demonstrate the effectiveness of our approach by outperforming several baselines, which sheds new light on further exploring medical multi-modal CAD.\nLimitations and Future Work. LLMs can perform in-context learning given domain-specific fewshot examples. However, in our experiments with MedBLIP, we do not observe an improved VQA performance when asking something about the input brain scan context, even though it made the correct diagnosis. We attribute the unsatisfactory VQA results to the lack of corresponding textural descriptions of brain scans in our dataset. Without a description of a 3D brain MRI scan, the LLMs have difficulty describing what they \"observe\" in this image, such as the shrunken hippocampus or the enlarged ventricles. Currently, no such dataset or model is available to provide an image caption/description for a brain scan. We will explore this direction in our future work.\nBesides, degenerative diseases like AD are often studied in the longitudinal setting since longitudinal atrophy has probably happened at an early stage of AD, making it easier to separate MCI subjects from normal controls. In future work, we will extend our model to take longitudinal inputs and further improve our classification accuracy. In addition, in our experiments, we only consider two modalities, i.e., MRIs and texts, other medical data sources, like positron emission tomography (PET) images, and audio, are also useful in diagnosing AD. Fortunately, the architecture of our MedBLIP is flexible enough to incorporate additional modalities, which is also left as our future work." } ]
Vision-language pre-training (VLP) models have been demonstrated to be effective in many computer vision applications. In this paper, we consider developing a VLP model in the medical domain for making computer-aided diagnoses (CAD) based on image scans and text descriptions in electronic health records, as done in practice. To achieve our goal, we present a lightweight CAD system MedBLIP, a new paradigm for bootstrapping VLP from off-the-shelf frozen pre-trained image encoders and frozen large language models. We design a MedQFormer module to bridge the gap between 3D medical images and 2D pre-trained image encoders and language models as well. To evaluate the effectiveness of our MedBLIP, we collect more than 30,000 image volumes from five public Alzheimer's disease (AD) datasets, i.e., ADNI, NACC, OASIS, AIBL, and MIRIAD. On this largest AD dataset we know, our model achieves the SOTA performance on the zero-shot classification of healthy, mild cognitive impairment (MCI), and AD subjects, and shows its capability of making medical visual question answering (VQA). The code and pre-trained models is available online: https://github.com/Qybc/MedBLIP.
MedBLIP: Bootstrapping Language-Image Pre-training from 3D Medical Images and Texts
[ { "figure_caption": "Figure 1 :1Figure 1: Architecture overview of our proposed MedBLIP, a CAD system designed for medical diagnosis with electronic health records via multimodel representation learning in a language model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of our proposed MedQformer that aligns 3D visual and textural features for learning in the unified latent space of language model.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "3.3, MedQformer includes both a transformer image encoder E I and a transformer text encoder E T . During training, we have a set of image-text pairs (I, T ) and a set of image and diagnosis Q&A pairs (I, Q&A). We use the image-text contrastive learning (ITC) loss in Radford et al. [2021] to align multi-modal representation, resulting in our feature alignment loss: L F A = contrastive (E I (I), E T (T )) + contrastive (E I (I), E T (Q&A)) .", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": ".2-year-old Female, with 16 years of education. The MMSE score is 27. The CDR score is 0.5.", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Samples of zero-shot results on the AIBL dataset, which are generated by our MedBLIP built on BioMedLM with LoRA fine-tuning.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Demographic statistics of used AD datasets. F: female, M: male, Educ: Education level, SES: Socio-Economic Status, MMSE: Mini-Mental State Examination, CDR: Clinical Dementia Rate, E/L/S/PMCI: early, late, stable, and progressive MCI, IMCI: Impaired not MCI, and DEM: demented. # indicates the number.", "figure_data": "Datasets #Images#F/#MAge(#)Texts Educ(#) SES(#) MMSE(#) CDR(#) Logical Memory(#)Diagnosis(#)ADNI10387 4710/5677 45-95(10386) 9860-938594017189NC,MCI,E/L/S/PMCI,AD (10387)NACC15354 9058/6296 19-102(15354) 15329-7867153547654NC, IMCI, MCI, DEM (14277)OASIS3020 1798/1222 18-98(3020)2300215322932300-DEM, Non-DEM (336)AIBL1002471/531 42-96(1002)--100210021002NC, MCI, AD (997)MIRIAD708393/31555-87(708)--26846-NC, AD(708)", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Experimental results of our MedBLIP on five datasets, including zero-shot CAD on the last two datasets. The classification performance is measured in the mean accuracy (ACC) with five runs. The best scores are in bold.", "figure_data": "MethodsLM size#Learnable paramsADNI -3x200NACC -3x200OASIS -2x200AIBLMIRIADFLAN-T5 Chung et al. [2022]Text only-37.0%39.5%46.7%33.3%60.0%Ours w/ T5Frozen LoRA3.4B151M 156M50.5% 64.0%69.2% 77.3%61.3% 75.8%54.7% 59.2%64.0% 66.8%BioGPT Luo et al. [2022]Text only-25.7%21.7%28.3%26.7%50.0%Ours w/ BioGPTFrozen LoRA1.5B151M 156M56.3% 62.2%66.5% 72.3%66.0% 71.7%60.7% 62.4%55.2% 59.7%BioMedLM Venigalla et al. [2022]Text only-62.5%63.5%61.8%65.7%46.3%Ours w/ BioMedLMFrozen LoRA2.7B151M 154M71.2% 78.7%82.0% 83.3%79.8% 85.3%77.8% 80.8%66.1% 71.0%", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison between a large vision encoder and our MedQFormer on ADNI.", "figure_data": "Visual features#Params AccuracyViT-G Fang et al. [2022]1B72.2%Our MedQFormer151M71.6%", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison between different prompt structures. 5% improvement in accuracy. Overall, our MedBLIP built upon BioMedLM and LoRA fine-tuning shows the best performance on all datasets.", "figure_data": "SettingADNI -3x200NACC -3x200OASIS -2x200AIBLMIRIADRegular (I&T, Q, A)78.7%83.3%85.3%80.8%71.0%Alternative (Q, I&T, A) 79.3%(+0.6) 82.8%(-0.5) 82.5%(-1.8) 82.8%(+2.0) 70.8%(+0.2)at most 14.", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Ablation study on loss functions.", "figure_data": "Loss FunctionADNI -3x200NACC -3x200OASIS -2x200AIBLMIRIADcontrastive(I, T )71.7%80.5%82.5%74.7%66.8%contrastive(I, T ) + contrastive(I, Q&A)78.7%(+7.0)83.3%(+2.8)85.3%(+2.8)80.8%(+6.1)71.0%(+4.2)", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" } ]
Qiuhui Chen; Xinyue Hu; Zirui Wang; Yi Hong
[ { "authors": "Ronald Carl Petersen; Laurel A Paul S Aisen; Michael C Beckett; Anthony Collins Donohue; Danielle J Gamst; Clifford R Harvey; William J Jack; Leslie M Jagust; Arthur W Shaw; Toga", "journal": "Neurology", "ref_id": "b0", "title": "Alzheimer's disease neuroimaging initiative (adni): clinical characterization", "year": "2010" }, { "authors": "Duane L Beekly; Erin M Ramos; William W Lee; Woodrow D Deitrich; Mary E Jacka; Joylee Wu; Janene L Hubbard; Thomas D Koepsell; John C Morris; Walter A Kukull", "journal": "Alzheimer Disease & Associated Disorders", "ref_id": "b1", "title": "The national alzheimer's coordinating center (nacc) database: the uniform data set", "year": "2007" }, { "authors": "Tracy H Daniel S Marcus; Jamie Wang; John G Parker; John C Csernansky; Randy L Morris; Buckner", "journal": "Journal of cognitive neuroscience", "ref_id": "b2", "title": "Open access series of imaging studies (oasis): cross-sectional mri data in young, middle aged, nondemented, and demented older adults", "year": "2007" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b3", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Junnan Li; Dongxu Li; Caiming Xiong; Steven Hoi", "journal": "PMLR", "ref_id": "b4", "title": "Blip: Bootstrapping language-image pretraining for unified vision-language understanding and generation", "year": "2022" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b5", "title": "Blip-2: Bootstrapping language-image pretraining with frozen image encoders and large language models", "year": "2023" }, { "authors": "Hangbo Bao; Wenhui Wang; Li Dong; Qiang Liu; Owais Khan Mohammed; Kriti Aggarwal; Subhojit Som; Songhao Piao; Furu Wei", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b6", "title": "Vlmo: Unified vision-language pre-training with mixtureof-modality-experts", "year": "2022" }, { "authors": "Mengde Xu; Zheng Zhang; Fangyun Wei; Yutong Lin; Yue Cao; Han Hu; Xiang Bai", "journal": "", "ref_id": "b7", "title": "A simple baseline for zero-shot semantic segmentation with pre-trained vision-language model", "year": "2021" }, { "authors": "Zifeng Wang; Zhenbang Wu; Dinesh Agarwal; Jimeng Sun", "journal": "", "ref_id": "b8", "title": "Medclip: Contrastive learning from unpaired medical images and text", "year": "2022" }, { "authors": " Venigalla; M Frankle; Carbin", "journal": "MosaicML. Accessed", "ref_id": "b9", "title": "Biomedlm: a domain-specific large language model for biomedical text", "year": "2022-12-23" }, { "authors": "J Edward; Yelong Hu; Phillip Shen; Zeyuan Wallis; Yuanzhi Allen-Zhu; Shean Li; Lu Wang; Weizhu Wang; Chen", "journal": "", "ref_id": "b10", "title": "Lora: Low-rank adaptation of large language models", "year": "2021" }, { "authors": "Kathryn A Ellis; Ashley I Bush; David Darby; Daniela De Fazio; Jonathan Foster; Peter Hudson; Nicola T Lautenschlager; Nat Lenzo; Ralph N Martins; Paul Maruff", "journal": "International psychogeriatrics", "ref_id": "b11", "title": "The australian imaging, biomarkers and lifestyle (aibl) study of aging: methodology and baseline characteristics of 1112 individuals recruited for a longitudinal study of alzheimer's disease", "year": "2009" }, { "authors": "Ian B Malone; David Cash; Gerard R Ridgway; David G Macmanus; Sebastien Ourselin; Nick C Fox; Jonathan M Schott", "journal": "NeuroImage", "ref_id": "b12", "title": "Miriad-public release of a multiple time point alzheimer's mr imaging dataset", "year": "2013" }, { "authors": "Chao Jia; Yinfei Yang; Ye Xia; Yi-Ting Chen; Zarana Parekh; Hieu Pham; Quoc Le; Yun-Hsuan Sung; Zhen Li; Tom Duerig", "journal": "PMLR", "ref_id": "b13", "title": "Scaling up visual and vision-language representation learning with noisy text supervision", "year": "2021" }, { "authors": "Yangguang Li; Feng Liang; Lichen Zhao; Yufeng Cui; Wanli Ouyang; Jing Shao; Fengwei Yu; Junjie Yan", "journal": "", "ref_id": "b14", "title": "Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm", "year": "2021" }, { "authors": "Junnan Li; Ramprasaath Selvaraju; Akhilesh Gotmare; Shafiq Joty; Caiming Xiong; Steven Chu; Hong Hoi", "journal": "Advances in neural information processing systems", "ref_id": "b15", "title": "Align before fuse: Vision and language representation learning with momentum distillation", "year": "2021" }, { "authors": "Kaiming He; Haoqi Fan; Yuxin Wu; Saining Xie; Ross Girshick", "journal": "", "ref_id": "b16", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "Jiahui Yu; Zirui Wang; Vijay Vasudevan; Legg Yeung; Mojtaba Seyedhosseini; Yonghui Wu", "journal": "", "ref_id": "b17", "title": "Coca: Contrastive captioners are image-text foundation models", "year": "2022" }, { "authors": "Ekin Tiu; Ellie Talius; Pujan Patel; Curtis P Langlotz; Andrew Y Ng; Pranav Rajpurkar", "journal": "Nature Biomedical Engineering", "ref_id": "b18", "title": "Expertlevel detection of pathologies from unannotated chest x-ray images via self-supervised learning", "year": "2022" }, { "authors": "Shruthi Bannur; Stephanie Hyland; Qianchu Liu; Fernando Perez-Garcia; Maximilian Ilse; C Daniel; Benedikt Castro; Harshita Boecking; Kenza Sharma; Anja Bouzid; Thieme", "journal": "", "ref_id": "b19", "title": "Learning to exploit temporal structure for biomedical vision-language processing", "year": "2023" }, { "authors": "Jun Chen; Han Guo; Kai Yi; Boyang Li; Mohamed Elhoseiny", "journal": "", "ref_id": "b20", "title": "Visualgpt: Data-efficient adaptation of pretrained language models for image captioning", "year": "2022" }, { "authors": "Maria Tsimpoukelli; Jacob L Menick; Serkan Cabi; Oriol Eslami; Felix Vinyals; Hill", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b21", "title": "Multimodal few-shot learning with frozen language models", "year": "2021" }, { "authors": "Jean-Baptiste Alayrac; Jeff Donahue; Pauline Luc; Antoine Miech; Iain Barr; Yana Hasson; Karel Lenc; Arthur Mensch; Katherine Millican; Malcolm Reynolds", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b22", "title": "Flamingo: a visual language model for few-shot learning", "year": "2022" }, { "authors": "Chung Hyung Won; Le Hou; Shayne Longpre; Barret Zoph; Yi Tay; William Fedus; Eric Li; Xuezhi Wang; Mostafa Dehghani; Siddhartha Brahma", "journal": "", "ref_id": "b23", "title": "Scaling instruction-finetuned language models", "year": "2022" }, { "authors": "Susan Zhang; Stephen Roller; Naman Goyal; Mikel Artetxe; Moya Chen; Shuohui Chen; Christopher Dewan; Mona Diab; Xian Li; Xi Victoria Lin", "journal": "", "ref_id": "b24", "title": "Opt: Open pre-trained transformer language models", "year": "2022" }, { "authors": "Jing Yu Koh; Ruslan Salakhutdinov; Daniel Fried", "journal": "", "ref_id": "b25", "title": "Grounding language models to images for multimodal generation", "year": "2023" }, { "authors": "Aakanksha Chowdhery; Sharan Narang; Jacob Devlin; Maarten Bosma; Gaurav Mishra; Adam Roberts; Paul Barham; Hyung Won Chung; Charles Sutton; Sebastian Gehrmann", "journal": "", "ref_id": "b26", "title": "Palm: Scaling language modeling with pathways", "year": "2022" }, { "authors": "Danny Driess; Fei Xia; S M Mehdi; Corey Sajjadi; Aakanksha Lynch; Brian Chowdhery; Ayzaan Ichter; Jonathan Wahid; Quan Tompson; Tianhe Vuong; Yu", "journal": "", "ref_id": "b27", "title": "Palm-e: An embodied multimodal language model", "year": "2023" }, { "authors": " Openai", "journal": "", "ref_id": "b28", "title": "", "year": "2023" }, { "authors": "Sheng Wang; Zihao Zhao; Xi Ouyang; Qian Wang; Dinggang Shen", "journal": "", "ref_id": "b29", "title": "Chatcad: Interactive computer-aided diagnosis on medical image using large language models", "year": "2023" }, { "authors": "Mohammad Tom Van Sonsbeek; Ivona Mahdi Derakhshani; Najdenkoska; G M Cees; Marcel Snoek; Worring", "journal": "", "ref_id": "b30", "title": "Open-ended medical visual question answering through prefix tuning of language models", "year": "2023" }, { "authors": "Mohan Shashank; Jain", "journal": "Springer", "ref_id": "b31", "title": "Hugging face. In Introduction to Transformers for NLP: With the Hugging Face Library and Models to Solve Problems", "year": "2022" }, { "authors": "Jesse Dodge; Maarten Sap; Ana Marasovic; William Agnew; Gabriel Ilharco; Dirk Groeneveld; Matt Gardner", "journal": "", "ref_id": "b32", "title": "Documenting the english colossal clean crawled corpus", "year": "2021" }, { "authors": "Leo Gao; Stella Biderman; Sid Black; Laurence Golding; Travis Hoppe; Charles Foster; Jason Phang; Horace He; Anish Thite; Noa Nabeshima", "journal": "", "ref_id": "b33", "title": "The pile: An 800gb dataset of diverse text for language modeling", "year": "2020" }, { "authors": "Yuxin Fang; Wen Wang; Binhui Xie; Quan Sun; Ledell Wu; Xinggang Wang; Tiejun Huang; Xinlong Wang; Yue Cao", "journal": "", "ref_id": "b34", "title": "Eva: Exploring the limits of masked visual representation learning at scale", "year": "2022" }, { "authors": "Renqian Luo; Liai Sun; Yingce Xia; Tao Qin; Sheng Zhang; Hoifung Poon; Tie-Yan Liu", "journal": "Briefings in Bioinformatics", "ref_id": "b35", "title": "Biogpt: generative pre-trained transformer for biomedical text generation and mining", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 209.11, 114.25, 184.99, 30.32 ], "formula_id": "formula_0", "formula_text": "θ * = arg max θ N i=1 log p θ (A i | I, T, Q, A i-1" }, { "formula_coordinates": [ 4, 342.15, 302.59, 133.04, 11.84 ], "formula_id": "formula_1", "formula_text": "H v = {v 1 , v 2 , • • • , v v } ∈ R v ×e" }, { "formula_coordinates": [ 4, 108, 385.98, 396, 22.24 ], "formula_id": "formula_2", "formula_text": "T = {t 1 , t 2 , • • • , t t } ∈ R t×e , the question Q = {q 1 , q 2 , • • • , q q } ∈ R q ×e" }, { "formula_coordinates": [ 4, 195.96, 496.87, 308.04, 22.67 ], "formula_id": "formula_3", "formula_text": "p =[v 1 , v 2 , • • • , v x , t 1 , t 2 , • • • , t t , Question :W hat will this subject be diagnosed with?Answer:],(2)" }, { "formula_coordinates": [ 4, 155.61, 619.01, 348.39, 20.14 ], "formula_id": "formula_4", "formula_text": "log p θ (A) = la log p θ a i | v 1 , . . . v v , t 1 , . . . t t , q 1 , . . . q q , a 1 , . . . a i-1 .(3)" }, { "formula_coordinates": [ 5, 108, 261.82, 396, 23.97 ], "formula_id": "formula_5", "formula_text": "{E i = f ϕ1 (Iv i )} Nv i=1" }, { "formula_coordinates": [ 5, 108, 318.49, 395.22, 23.28 ], "formula_id": "formula_6", "formula_text": "I into N v visual features f 1 , • • • , f Nv = {f φ (f ϕ1 (Iv i ), f ϕ2 (Iv i ))} Nv i=1 ." }, { "formula_coordinates": [ 5, 185.48, 341.75, 209.06, 9.65 ], "formula_id": "formula_7", "formula_text": "IE = (f i , • • • , f Nv ) for each input image volume I." }, { "formula_coordinates": [ 5, 251.53, 698.89, 252.47, 9.65 ], "formula_id": "formula_8", "formula_text": "L total = L F A + λ LG L LG ,(5)" } ]
2023-05-18
[ { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b47", "b41", "b3", "b33", "b2", "b53", "b20", "b12", "b7", "b59", "b39", "b5", "b71", "b29", "b67", "b71" ], "table_ref": [], "text": "Pedestrian detection is a critical yet challenging research field in computer vision. It is widely applied in autonomous driving and surveillance. In the last decade, CNN-based detectors have dominated the development of pedestrian detection, which has hugely succeeded [Tian et al., 2015;Stewart et al., 2016;Brazil et al., 2017;Noh et al., 2018;Brazil and Liu, 2019;Wu et al., 2020;Hasan et al., 2021;Zhang et al., 2022a]. The performance of pedestrian detection on the Reasonable subset of Caltech dataset [Dollar et al., 2011] is nearly saturated, with 2.31% MR by PedHunter [Chi et al., 2020].\nHowever, the performance of pedestrian detection drops a lot in crowded scenes. Taking PedHunter as an example, its MR is 8.32% when it is applied to the Reasonable subset of Citypersons [Zhang et al., 2017]. However, its MR is 43.53% on the Heavy Occlusion subset of Citypersons.\nSo far, almost all pedestrian detectors are CNN-based. The main challenge comes from a component of CNN-based pedestrian detectors, non-maximum suppression (NMS). Since CNN-based pedestrian detectors produce multiple predictions for each pedestrian, CNN-based pedestrian detectors need NMS to remove redundant predictions. In crowded scenes, correct predictions are removed inevitably by NMS due to heavy overlaps. For example, on Crowdhuman dataset [Shao et al., 2018], Sun et. al [Sun et al., 2021a] applied NMS on annotation boxes only obtains 95% recall. Hence, CNNbased pedestrian detectors with NMS are difficult to achieve the ideal results.\nRecently, a novel object detection framework, DETR [Carion et al., 2020] was proposed. Subsequently, its variations achieved dominant performance in general object detection [Zhu et al., 2020;Sun et al., 2021b]. DETRs are ideal pedestrian detectors in theory because they are NMSfree. These works inspired some pioneers [Lin et al., 2020;Zheng et al., 2022] to research DETRs for crowded pedestrian detection. The two studies found that compared with Faster-R-CNN-FPN [Lin et al., 2017a], Deformable DETR [Zhu et al., 2020] produces more false positives in crowded pedestrian detection.\nWe argue that DETRs produce more false positives in crowded pedestrian detection mainly due to the sample selection method of DETRs. Figure 1 shows two examples. In the first example, as shown in Figure 1(a)(b), pedestrian g 3 is heavily occluded by pedestrians g 1 and g 2 . Due to DETRs' detection mechanism, the certain decoder layer may not produce a learnable positive training sample for pedestrian g 3 . In this case, DETRs still select p 3 as the positive training sample for g 3 . We argue that p 3 is not learnable because p 3 lacks effective features. Using p 3 as a positive training sample leads DETRs to detect the background as pedestrians. It is a kind of false positive. In the second example, as shown in To solve the problem mentioned above, we propose a simple but effective sample selection method, Sample Selection for Crowded Pedestrians (SSCP). The proposed SSCP consists of two parts, the constraint-guided label assignment scheme (CGLA) and the utilizability-aware focal loss (UAFL). In CGLA, we design a new cost function. This cost function contains two constraints. First, the cost between each sample and ground-truth pair is calculated. Then the Hungarian algorithm is used to select positive and negative training samples based on the cost. Finally, CGLA selects learnable positive training samples for training and filters out the positive training samples that do not satisfy the constraints and convert them into negative training samples, as shown in Figure 1(c)(f). UAFL turns the fixed label y and γ in Focal Loss [Lin et al., 2017b] into adaptive variables. In UAFL, the label y changes with the IoU of the sample. γ is related to the IoU and the gradient ratio. More importantly, SSCP can be plugged into any DETRs and does not participate in inference, so it improves DETRs without any overhead. Extensive experiments on Crowdhuman and Citypersons datasets support our analysis and conclusions. With the proposed SSCP, the state-of-the-art performance is improved to 39.7% MR on Crowdhuman and 31.8% MR on Citypersons." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Crowded Pedestrian Detection", "publication_ref": [ "b23", "b9", "b29", "b67", "b29", "b67" ], "table_ref": [], "text": "Faster-R-CNN-FPN is the broadest baseline in pedestrian detection. N2NMS [Huang et al., 2020] improves Faster-R-CNN-FPN to simultaneously detect the visible part and the full body of a pedestrian and use the less overlapped visible part for NMS. MIP [Chu et al., 2020] improves Faster-R-CNN-FPN to detect two objects by one proposal and does not apply NMS to the objects which are from the same proposal.\nDETR is a kind of NMS-free detector. However, due to the fact that DETR originated in 2020, only a few pioneers have researched pedestrian detection based on DETR. They [Lin et al., 2020;Zheng et al., 2022] found DETR produces more false positives than CNN-based pedestrian detectors in crowded scenes. They believed the fundamental problem lies in the inability of the cross-attention module to distinguish well between the pedestrian and the background. Liu et al. [Lin et al., 2020] proposed a novel cross-attention module that utilized visible parts to enhance the feature extraction of crowded pedestrians. Zheng et al.[Zheng et al., 2022] designed a relation network to model the relationship among detected pedestrians. The relation network utilized highconfidence pedestrians to detect near low-confidence pedestrians. The two pieces of research greatly inspired us to research DETRs' potential in crowded pedestrian detection." }, { "figure_ref": [], "heading": "Label Assignment", "publication_ref": [ "b35", "b49", "b14", "b16", "b11" ], "table_ref": [], "text": "Faster-R-CNN [Ren et al., 2015] is the most representative anchor-based two-stage object detector. It selects positive training samples based on IoU between each anchor box and ground-truth pair. FCOS [Tian et al., 2019] defines the anchor points near the center of ground-truths as positive training samples. CenterNet [Duan et al., 2019] defines the anchor point which is the closest to the center point of the groundtruth as the positive training sample. Recently, specialized studies about label assignment began to emerge, since the importance of label assignment is realized. Zhang et al. [Zhang et al., 2020a] proposed an adaptive label assignment scheme, ATSS, to select positive training samples based on the statistical characteristics of predictions. With ATSS, the performance gap between RetinaNet [Lin et al., 2017b] and FCOS has narrowed significantly. Ge et al. [Ge et al., 2021] regarded label assignment as optimal transport assignment, which selects positive training samples by Sinkhorn-Knopp Iteration [Cuturi, 2013]. Label assignment of DETR is different from CNN-based detectors, which not only considers position such as IoU but also classification. The cost between each sample and ground-truth pair is calculated and DETR selects training samples based on the cost by the Hungarian algorithm." }, { "figure_ref": [], "heading": "Loss Function", "publication_ref": [ "b57", "b37", "b51", "b45" ], "table_ref": [], "text": "Loss function plays an important role in computer vision. In pedestrian detection, a loss function commonly consists of two parts: regression loss function and classification loss function. Commonly used regression loss functions include SmoothL1 loss [Girshick, 2015], IoU loss [Yu et al., 2016], GIoU loss [Rezatofighi et al., 2019], and the most commonly used classification loss function is Focal Loss. Adding extra items or designing a new loss function is a common method to improve crowded pedestrian detection. Wang et al. [Wang et al., 2018] proposed repulsion loss to promote the predicted bounding box farther away from other predicted bounding boxes and other ground-truths. Tang et al. [Tang et al., 2021] proposed a search scheme to automatically search parameters for loss function in pedestrian detection." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we briefly analyse the pipeline of DETRs and point out the drawbacks in crowded pedestrian detection. And then, we elaborate our sample selection method for DETRs in crowded pedestrian detection, Sample Selection for Crowded Pedestrians (SSCP), which consists of the constraint-guided label assignment scheme (CGLA) and the utilizability-aware focal loss(UAFL)." }, { "figure_ref": [ "fig_2" ], "heading": "Pipeline of Deformable DETR", "publication_ref": [], "table_ref": [], "text": "To illustrate the proposed method, we choose Deformable DETR as an example to introduce the pipeline of DETRs and analyze its defect in crowded pedestrian detection. Its pipeline can be formulated as:\nx f pn ← Backbone(img) x enc ← Enc(x f pn ) ref 0 , e q ← Split(query) p t , c t , x dec t , ref t ← Dec t-1 (x ecn , ref t-1 , e q , x dec t-1 ) cost t ← Cost(p t , gt) index t ← H(cost t ) loss t ← Loss(p t , gt, index t ).\n(\n)1\nAn image is input into a backbone to produce muti-scale feature maps x f pn . The feature maps x f pn are input into transformer encoder layers Enc to produce feature map x enc . query denotes the trainable object query. query is split into two parts, reference points ref 0 and query embedding e q . The reference point is like the anchor point or anchor box. Dec denotes the decoder layer. In decoder layers, the query embedding e q plus x dec are used to predict the offsets. The reference points ref t-1 extract features from x ecn based on the offsets. The extracted features is x dec t . x dec t is input into detection heads to produce predictions p t . The predicted bounding boxes of prediction p t and ref t are the same things. ref t is used for the next decoder layer for refinement. Each prediction of p t is a sample. Each sample of p t and groundtruth gt pair is computed to produce a cost matrix cost t . The cost formulation is as follows:\nCost = λ 1 (C cls + C L1 ) -λ 2 C GIoU .\n(2)\nThe Hungarian algorithm H selects positive and negative training samples based on the cost matrix cost t , as shown in the left of Figure 2. To diagrammatize clearly, we only draw the center points of p t instead of the bounding boxes. Finally, the loss of each training sample and ground-truth pair is computed." }, { "figure_ref": [ "fig_3", "fig_2" ], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "In decoder layers, predicted bounding boxes of p t gradually converge to pedestrians. As shown in Figure 3, predicted bounding boxes of p t may be not able to cover all pedestrians, because of severe occlusion and crowding, especially in the forward decoder layers such as decoder layer 1. The Hungarian algorithm is a one-to-one assignment algorithm. Each ground-truth must be assigned with a sample even though the sample doesn't contain effective features or causes extremely large regression loss, as shown in the left of Figure 2. These positive training samples promote DETRs to produce more false positives in crowded scenes. Therefore, these samples are not learnable. There are only two kinds of samples in pedestrian detection, pedestrians or the background. Therefore, we argue that these positive training samples which are not learnable are negative training samples." }, { "figure_ref": [ "fig_2", "fig_4" ], "heading": "Constraint-guided Label Assignment Scheme", "publication_ref": [], "table_ref": [], "text": "To select learnable positive training samples and filter out positive training samples which are not learnable and convert them into negative training samples, we propose the constraint-guided label assignment scheme (CGLA). We argue that a learnable positive training sample should contain corresponding effective features and should not cause ex- treme regression loss. Therefore, a learnable positive sample should overlap with the corresponding ground-truth in some degree. We design two constraints, the center constraints and the position constraint. If the positive training sample satisfies the two constraints, we think it is learnable. Otherwise, the positive training sample will be converted into a negative training sample, as shown in Figure 2. The details are described below.\nFirstly, we design a novel cost for the Hungarian algorithm. In DETRs, the original cost consists of three parts: classification cost, GIoU cost, and L1 cost. L1 cost and GIoU cost have a similar function, as mentioned in Equation ( 2). However, L1 cost is difficult to determine whether the sample contains the corresponding features. For example, with the same GIoU cost, L1 cost of the larger scale sample is more than the smaller scale sample. Therefore, we replace the L1 cost with the center constraints cost and the position constraint cost. The formulation is as follows:\nCost = λ 1 C cls -λ 2 (C GIoU + C pos + C cenx + C ceny ), (3)\nwhere C cenx , C ceny and C pos denote two center constraints cost and the position constraint cost. According to the feature extraction mechanism of DETRs mentioned in 3.1, we argue that if a sample contains sufficient effective features, the distance between the center of the sample and the corresponding ground-truth center should be within a range. Therefore, we design the center constraints. The specific formulation is as follows:\nC ij cenx = -1, L1(x i p , x j g ) > αw j g 0, L1(x i p , x j g ) <= αw j g ,(4)\nC ij ceny = -1, L1(y i p , y j g ) > αh j g 0, L1(y i p , y j g ) <= αh j g ,(5)\nwhere α denotes the parameter of the center constraints cost.\nx i p and x j g denote the center coordinates of a sample i and a ground-truth j on x-axis. L1 denotes L1 distance between x i p and x j g . w j g denotes the width of ground truth j. Equation ( 5) is similar to Equation (4).\nThe IoU between the sample and the ground-truth not only reflects the number of effective features but also the regres-Algorithm 1 Constraint-guided label assignment scheme Input:\nG is a set of ground truths P is a set of samples α is a parameter of the center constraint cost β is a parameter of the position constraint cost Output:\nPos is a set of positive training samples Neg is a set of negative training samples 1: pairwise center constraint cost on x axis: \nC ij pos = -1, IoU (p i , g j ) <= β 0, IoU (p i , g j ) > β,(6)\nwhere β denotes the IoU threshold of the position constraint cost. p i and g j denote bounding boxes of a sample i and a ground truth j, respectively.\nIn CGLA, the cost between each sample and ground-truth pair is calculated to produce the cost matrix based on our new cost function, as shown in Equation (3). Based on the cost matrix, the Hungarian algorithm assigns each ground-truth to a sample. The next step of our algorithm is to filter out positive training samples which are not learnable by the constraints. Figure 4 is an example to show a positive training sample, which satisfies all constraints. The positive training samples which satisfy the constraints are learnable positive training samples. Otherwise, they are not learnable and are converted to negative training samples. Algorithm 1 describes how CGLA works." }, { "figure_ref": [], "heading": "Utilizability-aware Focal Loss", "publication_ref": [], "table_ref": [], "text": "CGLA filters out positive training samples which are not learnable. However, the utilizability of positive training samples is different. The positive training sample with a larger IoU is easier to learn. That the positive training sample makes a higher gradient ratio means the sample is not learned well. The gradient ratio donates the ratio of the gradient produced by a sample as a positive and negative training sample in the loss function. We combine the two factors to represent the utilizability of the positive training samples and improve Focal Loss based on the utilizability.\nIn Focal Loss, each sample corresponds to a fixed label y and a fixed parameter γ. A typical Focal Loss is as follows(we ignore α in the original paper for simplicity):\nL F L = -(1 -p) γ ylogp -p γ (1 -y)log(1 -p). (7)\nSpecifically, the positive label y = 1 in Focal Loss is changed to the soft label 0 < y < 1 in UAFL. The value of the soft label y is the IoU between the positive training sample and the corresponding gound-truth. The core idea is not to force a sample with a smaller IoU to predict a high confidence, because it contains fewer effective features. We convert γ to an adaptive parameter. Each positive training sample corresponds to a γ j whose value adaptively changes with its IoU and gradient ratio. The formula of γ is:\nγ =γ o + γ i g =γ o + (g i -t g )(β -y),(8)\nwhere γ o is a fixed parameter that is like an anchor. The parameter g i indicates the gradient ratio of the ith training sample. The t g is a self-adaptive threshold which is the mean value of g. The β indicates a threshold that is as same as β in CGLA. To insure the stability of training, we clamp the value of g i -t g in the range [0,3]. The value of g i -t g reflects if the sample is learned well. For example, the larger value of g i -t g means the sample is learned worse. The value of β -y reflects if the sample is worth utilizing. For example, the smaller value of β -y means the sample is more worth utilizing. Combining the two factors, UAFL can adaptively regulate loss weights. The final formula of UAFL is:\nL U AF L = -|y -p| γ (ylogp + (1 -y)log(1 -p)),(9)\nWhen the confidence of the prediction equals its IoU, the loss is optimal." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b67" ], "table_ref": [], "text": "Datasets. we conduct our method on two challenging pedestrian detection datasets, Crowdhuman and Citypersons, to demonstrate the generality and effectiveness of the proposed methods in different scenes.\nCrowdhuman is a heavily crowded pedestrian detection dataset published by Megvii with 15000 images in the training set, 4370 images in the validation set and 5000 images in the test set. In Crowdhuman, on average, each image contains 22.6 pedestrians. Pairwise pedestrians with IoU over 0.3 is about 9 pairs per image, and triples of pedestrians with IoU over 0.3 is about 0.5 pairs per image.\nCitypersons is a recently released challenging pedestrian detection dataset with 2975 images in the training set, 500 images in the validation set and 1575 images in the test set. The bounding box annotations of the full body and visible part are provided. In Citypersons, about 70% of the pedestrian instances are occluded in different degrees, so it is also quite suitable for verifying the effectiveness of pedestrian detectors in crowded scenes. Evaluation metrics. Log-average miss rate(MR) is most the commonly used evaluation metric in pedestrian detection. MR evaluates the comprehensive performance of the detector in high-confidence intervals. The smaller MR indicates a better performance.\nImplementation Details. The proposed method is implemented in PyTorch and trained on a RTX 3090 GPU. Most training details are the same with Iter Deformable DETR [Zheng et al., 2022]. The differences include: (1) At the stage of fine-tuning, we introduce CGLA and UAFL for 20 epochs because at the beginning of training, few positive training samples satisfy the constraints, which leads to training instability. (2) For Citypersons, the training and testing size are 1024×2048 and we only use horizontal flip as the data augmentation." }, { "figure_ref": [], "heading": "Comparison with State-of-the-Arts", "publication_ref": [ "b9", "b23", "b63", "b45", "b25", "b22", "b55" ], "table_ref": [ "tab_0" ], "text": "For Crowdhuman, we compare our method with two baselines, Deformable DETR and Iter Deformable DETR. The former is a general object detector. The latter is a crowded pedestrian detector. Deformable DETR is the baseline of Iter Deformable DETR. With our methods, Deformable DETR achieves similar performance to Iter Deformable DETR without introducing any overhead in inference. We make comparisons with the state-with-the-arts: Faster-R-CNN-FPN [Lin et al., 2017a], MIP [Chu et al., 2020], R2NMS [Huang et al., 2020], AEVB [Zhang et al., 2021], AutoPedestrian [Tang et al., 2021], OAF-Net [Li et al., 2022] and DMSFLN [He et al., 2022]. In Table 1, our method achieves state-of-the-art performance, 39.7% MR, which fully outperforms the best Iter Deformable DETR by 1.8%.\nFor Citypersons, We compare our methods with several state-of-the-art pedestrian detectors on Reasonable(R) subset, Heavy Occlusion(HO) subset and Heavy Occlusion(HO) † subset, including AP2M [Liu et al., 2021a], CounpleNet [Liu et al., 2021b], FAPD [Zhou et al., 2022b], DPFD [Zhou et al., 2022a], FC-Net [Zhang et al., 2022b], OAF-Net, MGAN+ [Xie et al., 2021] ,KGSNet [Zhang et al., 2020b] and DMS-FLN. The pedestrians are all over 50 pixels in height. R indicates the pedestrians are with occlusion less than 35%. HO † Table 2: Comparison with the state-of-the-arts on Citypersons.The superscript † indicates the pedestrians over 50 pixels in height with more than 35% occlusion, instead of pedestrians over 50 pixels in height with 35-80% occlusion. Thus, † suggests higher difficulty. indicates the pedestrians are with occlusion more than 35%. HO indicates the pedestrians are with occlusion between 35% to 80%. In Table 2, our method achieves state-of-the-art performance, 40.0% MR on HO † subset and 31.8% MR on HO subset, which outperforms the best Iter Deformable DETR by 0.8% MR on HO † subset and 0.4% MR on HO subset." }, { "figure_ref": [], "heading": "Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Ablation studies", "publication_ref": [], "table_ref": [], "text": "We conduct ablation studies on Crowdhuman and use Iter Deformable DETR as our baseline. As shown in Table 3, our implemented Iter Deformable DETR achieves 41.9% MR on Crowdhuman. To evaluate the effectiveness of SSCP, we first conducted the center constraints alone, which improves MR by 1.3%. We further add the position constraint and MR reaches 40.0% MR. Finally, We further introduce UAFL and achieve 39.7% MR." }, { "figure_ref": [ "fig_7" ], "heading": "Analysis of the Center Constraint", "publication_ref": [], "table_ref": [], "text": "To analyze the effectiveness of the center constraints, we set α from 0.1 to 0.5 for experiments. As shown in Figure 5(a), the model gets improved results when α is set as 0.2, 0.3 and 0.4. When α is greater than 0.4 or less than 0.2, MR tends to go worse. That α is greater than 0.4 results in insufficient constraints. That α is less than 0.2 results in insufficient positive samples." }, { "figure_ref": [ "fig_7" ], "heading": "Analysis of the Position Constraint", "publication_ref": [], "table_ref": [], "text": "To analyze the effectiveness of the position constraint, we set α to 0.3 and set β from 0 to 0.8 for experiments. As shown in Figure 5(b), β from 0 to 0.4 does not show a difference in the results obviously. We think it is related to the evaluation metrics. MR uses 0.5 as the threshold to discriminate between false positives and true positives. The best result is achieved when the β is set to 0.6. MR reaches 40.0%. When the β exceeded 0.6, performance shows a significant decline. We believe that it is due to insufficient positive samples for training." }, { "figure_ref": [], "heading": "Analysis of the combination of α and β", "publication_ref": [], "table_ref": [], "text": "We set 4 sets of α and β, α = 0.2, 0.3 and β = 0.5, 0.6, for experiments. When α = 0.3 and β = 0.6, the model achieves the best performance 40.0% MR on Crowdhuman dataset. When α = 0.3 and β = 0.5, the model achieves a relatively poor performance, 40.5% MR on Crowdhuman dataset. However, the performance is still obviously better than the baseline. Our approach is robust to the change of α and β, because these four sets of parameters do not cause significant performance changes." }, { "figure_ref": [], "heading": "Analysis of UAFL", "publication_ref": [], "table_ref": [], "text": "To analyze the effectiveness of UAFL, we separately apply soft label y and the adaptive γ for experiments, as shown in Table 3. Applying the soft label y to the baseline improves MR by 1.2%. Applying the adaptive γ to the baseline improves MR by 0.9%. β of the adaptive γ is set as the same as the best β in CGLA, β = 0.6. Without CGLA, UAFL improves MR by 1.3%. Finally, with SSCP, the baseline is improved to 39.7% MR." }, { "figure_ref": [ "fig_8" ], "heading": "Analysis of the improvement on MR", "publication_ref": [ "b0" ], "table_ref": [], "text": "Inspired by TIDE [Bolya et al., 2020], we design a similar experiment to illustrate our improvement on MR. First, we extracted the same number of predictions from SSCP and the baseline. The number depends on the smaller of the two models' valid predictions for MR. Secondly, the interval of the predicted boxes' IoU for the gound-truth is [0,1]. We divide the interval [0,1] into 5 intervals. Finally, we count false positives in each interval, as shown in Figure 6. Except for the interval [0.6,0.8), in the other intervals, our SSCP produces fewer false positives. In terms of the total number of false positives, SSCP is still superior to the baseline. This is the main reason why MR is improved. In the interval [0.6,0.8), the extra false positives attribute to the missing annotation of Crowdhuman dataset. In fact, in the interval [0.6,0.8), some false positives are true positives." }, { "figure_ref": [ "fig_9", "fig_9", "fig_9", "fig_9" ], "heading": "Visualization", "publication_ref": [], "table_ref": [], "text": "In Figure 7, we can see the visualization of our SSCP and our baseline Iter Deformable DETR. The results are consistent with our expectations. Our method effectively reduces false positives in two aspects. The box with the check mark or cross represents the correct and false results. In Figure 7(a), to show the results more clearly, we only show the true positives detected by the SSCP as common true posi-tives which are the green boxes. In Figure 7(a), both SSCP and Iter Deformable DETR detect all pedestrians. However, Iter Deformable DETR redundantly detects the background as pedestrians which are the red boxes. In Figure 7(b), both SSCP and Iter Deformable DETR detect all pedestrians. However, the predictions of Iter Deformable DETR which are the red boxes are judged as negative samples due to localization problems. In evaluation, the IoU of a true positive is more than 0.5 with the corresponding ground-truth. Otherwise, it is a false positive even though it has detected the body of the pedestrian. In summary, our SSCP effectively ameliorates the false positives problem of DETRs in crowded pedestrian detection." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we first analyze the pipeline of DETRs and point out that selecting positive training samples which are not learnable is the key factor to making DETRs produce more false positives in crowded pedestrian detection. Then, we propose a simple but effective sample selection method SSCP to improve DETRs which consists of CGLA and UAFL. Assembled with our method, Iter Deformable DETR achieves state-of-the-art results on two challenging pedestrian detection benchmarks Crowdhuman and Citypersons without introducing any additional overhead." } ]
DEtection TRansformer (DETR) and its variants (DETRs) achieved impressive performance in general object detection. However, in crowded pedestrian detection, the performance of DETRs is still unsatisfactory due to the inappropriate sample selection method which results in more false positives. To settle the issue, we propose a simple but effective sample selection method for DETRs, Sample Selection for Crowded Pedestrians (SSCP), which consists of the constraint-guided label assignment scheme (CGLA) and the utilizabilityaware focal loss (UAFL). Our core idea is to select learnable samples for DETRs and adaptively regulate the loss weights of samples based on their utilizability. Specifically, in CGLA, we proposed a new cost function to ensure that only learnable positive training samples are retained and the rest are negative training samples. Further, considering the utilizability of samples, we designed UAFL to adaptively assign different loss weights to learnable positive samples depending on their gradient ratio and IoU. Experimental results show that the proposed SSCP effectively improves the baselines without introducing any overhead in inference.
Selecting Learnable Training Samples is All DETRs Need in Crowded Pedestrian Detection
[ { "figure_caption": "Figure 1 :1Figure 1: The sample selection method of DETRs may select positive training samples which are not learnable. gi are ground-truth boxes. pj are predictions. The solid box means it is selected as a positive training sample. The dashed box means it is selected as a negative training sample. Our SSCP selects learnable positive training samples and filters out positive training samples which are not learnable and converts them into negative training samples.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 1(c)(d), DETRs select p 4 as the positive training sample for g 4 . Although p 4 obviously contains the effective features of g 4 , we still argue that p 4 is not learnable because the extremely large location error confuses the training. Using p 4 as a positive training sample leads DETRs to produce false positives due to the inaccurate location.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Label assignment visualization of DETRs and CGLA. The left is label assignment of DETRs. The whole is our proposed CGLA.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization of center points of predicted boxes in decoder layers.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: An example of the proposed center and position constraints.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "assignment based on Hungarian Algorithm: Pos,Neg=H(c giou ,c cls ,c cenx ,c ceny ,c pos ) 7: filter positive samples which are not learnable out from Pos: for m in Pos: if c m cenx ==-1 or c m ceny ==-1 or c m pos ==-1: delete m from Pos and add m into Neg 8: return Pos, Neg sion loss. Therefore, we design the position constraint. The formulation is as follows:", "figure_data": "", "figure_id": "fig_5", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "-DETR (CVPR'22) ResNet-50 10.4 32.2 Iter-D-DETR+CGLA ResNet-50 10.5 32.0 Iter-D-DETR+SSCP ResNet-50 10.4 31.8 Table 3: Contributions of each component on Crowdhuman. The CC indicates the center constraints cost. The PC indicates the position constraint cost. The SL indicates the soft label y. The AG indicates the adaptative γ.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Analysis of α and β.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Analysis of the improvement on MR.", "figure_data": "", "figure_id": "fig_8", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The visualization results of our SSCP and our baseline Iter Deformable DETR. The box with the check mark or cross represents the correct and false results. The green boxes are true positives produced by our SSCP. The red boxes are false positives produced by our baseline Iter Deformable DETR. In (a), Iter Deformable DETR detects the background as a pedestrian. In (b), IoU between the ground-truth and the prediction is less than 0.5, which causes the prediction to be a false positive.", "figure_data": "", "figure_id": "fig_9", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Comparison with the state-of-the-arts on Crowdhuman.", "figure_data": "ModelBackboneMR↓FPN (CVPR'17)ResNet-5042.9MIP (CVPR'20)ResNet-5041.4R2NMS (CVPR'20)ResNet-5043.4AEVB (CVPR'21)ResNet-5040.7AutoPedestrian (TIP'21)ResNet-5040.6DMSFLN(TITS'21)VGG-1643.6OAF-Net(TITS'22)HRNet-w32 45.0D-DETR (ICLR'21)ResNet-5044.6D-DETR+OursResNet-5042.0Iter-D-DETR (CVPR'22)ResNet-5041.5Iter-D-DETR (our implementation)ResNet-5041.9Iter-D-DETR+OursResNet-5039.7", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Feng Gao; Jiaxu Leng; Ji Gan; Xinbo Gao
[ { "authors": " Bolya", "journal": "", "ref_id": "b0", "title": "", "year": "2020" }, { "authors": "Daniel Bolya; Sean Foley; James Hays; Judy Hoffman", "journal": "Springer", "ref_id": "b1", "title": "Tide: A general toolbox for identifying object detection errors", "year": "2020" }, { "authors": "Liu Brazil; Garrick Brazil; Xiaoming Liu", "journal": "", "ref_id": "b2", "title": "Pedestrian detection with autoregressive network phases", "year": "2019" }, { "authors": " Brazil", "journal": "", "ref_id": "b3", "title": "", "year": "2017" }, { "authors": "Garrick Brazil; Xi Yin; Xiaoming Liu", "journal": "", "ref_id": "b4", "title": "Illuminating pedestrians via simultaneous detection & segmentation", "year": "2017" }, { "authors": " Carion", "journal": "", "ref_id": "b5", "title": "", "year": "2020" }, { "authors": "Nicolas Carion; Francisco Massa; Gabriel Synnaeve; Nicolas Usunier; Alexander Kirillov; Sergey Zagoruyko", "journal": "Springer", "ref_id": "b6", "title": "End-to-end object detection with transformers", "year": "2020" }, { "authors": "Chi ", "journal": "", "ref_id": "b7", "title": "", "year": "2020" }, { "authors": "Cheng Chi; Shifeng Zhang; Junliang Xing; Zhen Lei; Stan Z Li; Xudong Zou", "journal": "", "ref_id": "b8", "title": "Pedhunter: Occlusion robust pedestrian detector in crowded scenes", "year": "2020" }, { "authors": " Chu", "journal": "", "ref_id": "b9", "title": "", "year": "2020" }, { "authors": "Xuangeng Chu; Anlin Zheng; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b10", "title": "Detection in crowded scenes: One proposal, multiple predictions", "year": "2020" }, { "authors": "Marco Cuturi; Cuturi", "journal": "Advances in neural information processing systems", "ref_id": "b11", "title": "Sinkhorn distances: Lightspeed computation of optimal transport", "year": "2013" }, { "authors": " Dollar", "journal": "", "ref_id": "b12", "title": "", "year": "2011" }, { "authors": "Piotr Dollar; Christian Wojek; Bernt Schiele; Pietro Perona", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b13", "title": "Pedestrian detection: An evaluation of the state of the art", "year": "2011" }, { "authors": " Duan", "journal": "", "ref_id": "b14", "title": "", "year": "2019" }, { "authors": "Kaiwen Duan; Song Bai; Lingxi Xie; Honggang Qi; Qingming Huang; Qi Tian", "journal": "", "ref_id": "b15", "title": "Centernet: Keypoint triplets for object detection", "year": "2019" }, { "authors": " Ge", "journal": "", "ref_id": "b16", "title": "", "year": "2021" }, { "authors": "Zheng Ge; Songtao Liu; Zeming Li; Osamu Yoshie; Jian Sun", "journal": "", "ref_id": "b17", "title": "Ota: Optimal transport assignment for object detection", "year": "2021" }, { "authors": " Girshick", "journal": "", "ref_id": "b18", "title": "", "year": "2015" }, { "authors": "Ross Girshick", "journal": "", "ref_id": "b19", "title": "Fast r-cnn", "year": "2015" }, { "authors": " Hasan", "journal": "", "ref_id": "b20", "title": "", "year": "2021" }, { "authors": "Irtiza Hasan; Shengcai Liao; Jinpeng Li; Saad Ullah Akram; Ling Shao", "journal": "", "ref_id": "b21", "title": "Generalizable pedestrian detection: The elephant in the room", "year": "2021" }, { "authors": " He", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b22", "title": "Occluded pedestrian detection via distribution-based mutualsupervised feature learning", "year": "2022" }, { "authors": " Huang", "journal": "", "ref_id": "b23", "title": "", "year": "2020" }, { "authors": "Xin Huang; Zheng Ge; Zequn Jie; Osamu Yoshie", "journal": "", "ref_id": "b24", "title": "Nms by representative region: Towards crowded pedestrian detection by proposal pairing", "year": "2020" }, { "authors": " Li", "journal": "", "ref_id": "b25", "title": "", "year": "2022" }, { "authors": "Qiming Li; Yijing Su; Yin Gao; Feng Xie; Jun Li", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b26", "title": "Oaf-net: An occlusion-aware anchor-free network for pedestrian detection in a crowd", "year": "2022" }, { "authors": "Lin ", "journal": "", "ref_id": "b27", "title": "Feature pyramid networks for object detection", "year": "2017" }, { "authors": "Lin ", "journal": "", "ref_id": "b28", "title": "Focal loss for dense object detection", "year": "2017" }, { "authors": "Lin ", "journal": "", "ref_id": "b29", "title": "", "year": "2020" }, { "authors": "Matthieu Lin; Chuming Li; Xingyuan Bu; Ming Sun; Chen Lin; Junjie Yan; Wanli Ouyang; Zhidong Deng", "journal": "", "ref_id": "b30", "title": "Detr for crowd pedestrian detection", "year": "2020" }, { "authors": " Liu", "journal": "", "ref_id": "b31", "title": "Adaptive pattern-parameter matching for robust pedestrian detection", "year": "2021" }, { "authors": " Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b32", "title": "Coupled network for robust pedestrian detection with gated multilayer feature extraction and deformable occlusion handling", "year": "2021" }, { "authors": " Noh", "journal": "", "ref_id": "b33", "title": "", "year": "2018" }, { "authors": "Junhyug Noh; Soochan Lee; Beomsu Kim; Gunhee Kim", "journal": "", "ref_id": "b34", "title": "Improving occlusion and hard negative handling for single-stage pedestrian detectors", "year": "2018" }, { "authors": " Ren", "journal": "", "ref_id": "b35", "title": "", "year": "2015" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "Advances in neural information processing systems", "ref_id": "b36", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": " Rezatofighi", "journal": "", "ref_id": "b37", "title": "", "year": "2019" }, { "authors": "Hamid Rezatofighi; Nathan Tsoi; Junyoung Gwak; Amir Sadeghian; Ian Reid; Silvio Savarese", "journal": "", "ref_id": "b38", "title": "Generalized intersection over union: A metric and a loss for bounding box regression", "year": "2019" }, { "authors": " Shao", "journal": "", "ref_id": "b39", "title": "", "year": "2018" }, { "authors": "Shuai Shao; Zijian Zhao; Boxun Li; Tete Xiao; Gang Yu; Xiangyu Zhang; Jian Sun", "journal": "", "ref_id": "b40", "title": "Crowdhuman: A benchmark for detecting human in a crowd", "year": "2018" }, { "authors": " Stewart", "journal": "", "ref_id": "b41", "title": "", "year": "2016" }, { "authors": "Russell Stewart; Mykhaylo Andriluka; Andrew Y Ng", "journal": "", "ref_id": "b42", "title": "End-to-end people detection in crowded scenes", "year": "2016" }, { "authors": " Sun", "journal": "PMLR", "ref_id": "b43", "title": "What makes for end-to-end object detection?", "year": "2021" }, { "authors": " Sun", "journal": "", "ref_id": "b44", "title": "Sparse r-cnn: Endto-end object detection with learnable proposals", "year": "2021" }, { "authors": " Tang", "journal": "", "ref_id": "b45", "title": "", "year": "2021" }, { "authors": "Yi Tang; Baopu Li; Min Liu; Boyu Chen; Yaonan Wang; Wanli Ouyang", "journal": "IEEE transactions on image processing", "ref_id": "b46", "title": "Autopedestrian: an automatic data augmentation and loss function search scheme for pedestrian detection", "year": "2021" }, { "authors": " Tian", "journal": "", "ref_id": "b47", "title": "", "year": "2015" }, { "authors": "Yonglong Tian; Ping Luo; Xiaogang Wang; Xiaoou Tang", "journal": "", "ref_id": "b48", "title": "Pedestrian detection aided by deep learning semantic tasks", "year": "2015" }, { "authors": " Tian", "journal": "", "ref_id": "b49", "title": "", "year": "2019" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b50", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": " Wang", "journal": "", "ref_id": "b51", "title": "", "year": "2018" }, { "authors": "Xinlong Wang; Tete Xiao; Yuning Jiang; Shuai Shao; Jian Sun; Chunhua Shen", "journal": "", "ref_id": "b52", "title": "Repulsion loss: Detecting pedestrians in a crowd", "year": "2018" }, { "authors": " Wu", "journal": "", "ref_id": "b53", "title": "", "year": "2020" }, { "authors": "Jialian Wu; Chunluan Zhou; Ming Yang; Qian Zhang; Yuan Li; Junsong Yuan", "journal": "", "ref_id": "b54", "title": "Temporalcontext enhanced detection of heavily occluded pedestrians", "year": "2020" }, { "authors": " Xie", "journal": "", "ref_id": "b55", "title": "", "year": "2021" }, { "authors": "Jin Xie; Yanwei Pang; Muhammad Haris Khan; Rao Muhammad Anwer; Fahad Shahbaz Khan; Ling Shao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b56", "title": "Mask-guided attention network and occlusionsensitive hard example mining for occluded pedestrian detection", "year": "2021" }, { "authors": " Yu", "journal": "", "ref_id": "b57", "title": "", "year": "2016" }, { "authors": "Jiahui Yu; Yuning Jiang; Zhangyang Wang; Zhimin Cao; Thomas Huang", "journal": "", "ref_id": "b58", "title": "Unitbox: An advanced object detection network", "year": "2016" }, { "authors": " Zhang", "journal": "", "ref_id": "b59", "title": "", "year": "2017" }, { "authors": "Shanshan Zhang; Rodrigo Benenson; Bernt Schiele", "journal": "", "ref_id": "b60", "title": "Citypersons: A diverse dataset for pedestrian detection", "year": "2017" }, { "authors": " Zhang", "journal": "", "ref_id": "b61", "title": "Bridging the gap between anchor-based and anchor-free detection via adaptive training sample selection", "year": "2020" }, { "authors": " Zhang", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b62", "title": "Kgsnet: key-point-guided super-resolution network for pedestrian detection in the wild", "year": "2020" }, { "authors": " Zhang", "journal": "", "ref_id": "b63", "title": "", "year": "2021" }, { "authors": "Yuang Zhang; Huanyu He; Jianguo Li; Yuxi Li; John See; Weiyao Lin", "journal": "", "ref_id": "b64", "title": "Variational pedestrian detection", "year": "2021" }, { "authors": " Zhang", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b65", "title": "Feature calibration network for occluded pedestrian detection", "year": "2022" }, { "authors": " Zhang", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b66", "title": "Feature calibration network for occluded pedestrian detection", "year": "2022" }, { "authors": " Zheng", "journal": "", "ref_id": "b67", "title": "", "year": "2022" }, { "authors": "Anlin Zheng; Yuang Zhang; Xiangyu Zhang; Xiaojuan Qi; Jian Sun", "journal": "", "ref_id": "b68", "title": "Progressive end-toend object detection in crowded scenes", "year": "2022" }, { "authors": " Zhou", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b69", "title": "Enhanced multi-task learning architecture for detecting pedestrian at far distance", "year": "2022" }, { "authors": " Zhou", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b70", "title": "A unified multi-task learning architecture for fast and accurate pedestrian detection", "year": "2022" }, { "authors": " Zhu", "journal": "", "ref_id": "b71", "title": "", "year": "2020" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b72", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 65.84, 191.64, 207.7, 100.98 ], "formula_id": "formula_0", "formula_text": "x f pn ← Backbone(img) x enc ← Enc(x f pn ) ref 0 , e q ← Split(query) p t , c t , x dec t , ref t ← Dec t-1 (x ecn , ref t-1 , e q , x dec t-1 ) cost t ← Cost(p t , gt) index t ← H(cost t ) loss t ← Loss(p t , gt, index t )." }, { "formula_coordinates": [ 3, 289.26, 238.31, 7.74, 8.64 ], "formula_id": "formula_1", "formula_text": ")1" }, { "formula_coordinates": [ 3, 99.4, 480.93, 152.2, 9.65 ], "formula_id": "formula_2", "formula_text": "Cost = λ 1 (C cls + C L1 ) -λ 2 C GIoU ." }, { "formula_coordinates": [ 4, 58.98, 450.38, 238.02, 9.65 ], "formula_id": "formula_3", "formula_text": "Cost = λ 1 C cls -λ 2 (C GIoU + C pos + C cenx + C ceny ), (3)" }, { "formula_coordinates": [ 4, 104.35, 560.65, 192.65, 24.24 ], "formula_id": "formula_4", "formula_text": "C ij cenx = -1, L1(x i p , x j g ) > αw j g 0, L1(x i p , x j g ) <= αw j g ,(4)" }, { "formula_coordinates": [ 4, 105.97, 595.19, 191.03, 24.24 ], "formula_id": "formula_5", "formula_text": "C ij ceny = -1, L1(y i p , y j g ) > αh j g 0, L1(y i p , y j g ) <= αh j g ,(5)" }, { "formula_coordinates": [ 4, 369.79, 396.89, 188.21, 21.83 ], "formula_id": "formula_6", "formula_text": "C ij pos = -1, IoU (p i , g j ) <= β 0, IoU (p i , g j ) > β,(6)" }, { "formula_coordinates": [ 5, 71.42, 117.95, 225.58, 11.72 ], "formula_id": "formula_7", "formula_text": "L F L = -(1 -p) γ ylogp -p γ (1 -y)log(1 -p). (7)" }, { "formula_coordinates": [ 5, 121.16, 246.42, 175.84, 28.26 ], "formula_id": "formula_8", "formula_text": "γ =γ o + γ i g =γ o + (g i -t g )(β -y),(8)" }, { "formula_coordinates": [ 5, 67.51, 429.63, 229.5, 11.72 ], "formula_id": "formula_9", "formula_text": "L U AF L = -|y -p| γ (ylogp + (1 -y)log(1 -p)),(9)" } ]
2023-11-20
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b14", "b25", "b60", "b20", "b8", "b54", "b27", "b29", "b33", "b22", "b105", "b107", "b10", "b46", "b10", "b105", "b10", "b99", "b87", "b42", "b16", "b80", "b66", "b64", "b99", "b87", "b42", "b74", "b38", "b56", "b87", "b42", "b93" ], "table_ref": [], "text": "The problems of visual regression such as estimation of 6D pose of object instances (i.e. their orientation and translation with respect to the camera optical center) and configuration of human body parts given RGB images are widely encountered in numerous fields such as robotics [Collet et al., 2011;He et al., 2020], augmented reality [Marchand et al., 2015] and autonomous driving [Geiger et al., 2012;Chen et al., 2017], which can be typically addressed by learning a single or multi-output regression mapping on deep representations of visual observations. Recent regression algorithms have gained remarkable success to handle with inconsistent lighting conditions and heavy occlusions in-between foreground Middle: one bin with the highest confidence (i.e. highlighted with the largest red dish) with further local refinement (i.e. the gray lines) are adopted to generate pseudo pose labels in our MAST. Bottom: the t-SNE visualization of feature distribution w/ and w/o the proposed CTC, which can verify the effectiveness of introduction of target manifolds with a more smooth distribution; and comparison with our MAST and its backbone with an example from the Occluded LineMOD dataset is given. and contextual objects in uncontrolled and cluttered environment, owing to recent development of representation learning of visual regression, such as introduction of self-supervised regularization [Lin et al., 2022] and powerful network architectures [He et al., 2021].\nIn those regression tasks, visual observations, i.e. RGB images, can be easily acquired in practice or directly collected from the Internet, but it is laborious or even unfeasible for manual noise-free annotation with continuous targets. As a result, the size of real training data with precise labels is typically limited and less scalable, e.g. eggbox and holepuncher training samples in the LineMOD [Hinterstoisser et al., 2012] for 6D pose estimation, which increases the difficulty of learning good representations. The synthesis of images can be a powerful solution to cope with data sparsity, which can be gained via photorealistic rendering [Hodaň et al., 2019] with CAD models. However, domain discrepancy between synthetic and real data, e.g. appearance difference between CAD models and real objects, scene illumination, and systematic imaging noises, can lead to collapse of regression performance, which encourages the practical setting of unsupervised domain adaptation on visual regression (UDAVR), i.e. samples in the source and target domains cannot satisfy the i.i.d. condition.\nDifferent from the widely-investigated problem of unsupervised domain adaptation on visual classification (UDAVC) [Gopalan et al., 2011;Zou et al., 2018;Zou et al., 2021], only a few works [Chen et al., 2021;Lee et al., 2022] have explored the vital factors of representation learning of visual regression that different from classification in the context of UDA. [Chen et al., 2021] revealed and exploited the sensitivity of feature scaling on domain adaptation regression performance to regularize representation learning, which can achieve promising results to bridge domain gap. We argue that cumulative dependent nature and piece-wise manifolds in target space are two key factors of UDA regression yet missing in the existing algorithms. To this end, this paper proposes a Manifold-Aware Self-Training (MAST) scheme to decompose the problem of learning a domain-invariant regression mapping into a combination of a feature-scalingrobust globally coarse classification of discretized target anchors via self-training based feature alignment and a locally regression-based refinement less sensitive to inconsistent feature scale, as shown in Figure 1.\nFor exploiting the cumulative dependent nature of regression targets different from those in classification, the selftraining method (e.g. the self-paced self-training [Zou et al., 2018]) originally designed for the UDAVC problem is now adapted to the coarse classification on discretization of continuous target space, with incorporating a novel piece-wise manifold regularization on domain-invariant representation learning, namely a self-supervised cumulative target correlation regularization. Intuitively, appearance ambiguities across domains in representation learning can be mitigated via leveraging consistent target correlation under certain distance metrics in target space (e.g. the Euclidean distance in R(3) translation space). Furthermore, considering the risk of sensitivity to varying feature scaling in the UDAVR problem [Chen et al., 2021], learning unified local regression functions with those shared features of the classification of discretized target bins (typically having inconsistent feature scales) can achieve superior robustness against large scale variations of transferable representations. Extensive experiments on three popular benchmarks of the challenging UDA on 6D pose estimation can confirm the effectiveness of our MAST scheme, consistently outperforming the state-of-the-art.\nThe novelties of our paper are summarized as follows.\n• This paper proposes a novel and generic manifold-aware self-training scheme for unsupervised domain adaptation on visual regression, which exploits cumulative correlation and piece-wise manifolds in regression target space for domain-invariant representation learning. Zakharov et al., 2019] and regression based [Xiang et al., 2018;Labbé et al., 2020;Wang et al., 2021b]. The former relied on learning a 2D-to-3D correspondence mapping between object keypoints in 3D space and their 2D projection on images with the Perspective-n-Point (PnP) [Fischler and Bolles, 1981]. Such a correspondence can be achieved by either detecting a limited size of landmarks [Tekin et al., 2018;Peng et al., 2019] or pixel-wise voting from a heatmap [Park et al., 2019;Zakharov et al., 2019]. The latter concerned on deep representation learning for direct pose regression with the point-matching loss for optimizing output pose [Xiang et al., 2018;Labbé et al., 2020] or proposing a differentiable PnP paradigm in an end-to-end training style [Wang et al., 2021b;Chen et al., 2022a]. Alternatively, the problem can also be formulated into ordinal classification via discretization of SE(3) space into class bins [Su et al., 2015;Kehl et al., 2017]. To alleviate representation ambiguities, the estimated 6D pose of objects can be further refined via either an iterative refinement with residual learning [Li et al., 2018b;Manhardt et al., 2018] or simply the Iterative Closest Point [Xiang et al., 2018], while some work introduced crossview fusion based refinement [Labbé et al., 2020;Li et al., 2018a]. Existing refinement strategies are typically employed as a post-processing step following the main module of 6D pose estimation, some of which such as [Li et al., 2018b;Xu et al., 2022] can be designed in an end-to-end learning cascade to obtain significant performance gain, but they are not designed for bridging domain gap and therefore cannot ensure good performance under the UDAVR setting. Alternatively, [Li et al., 2018a] introduced a combined scheme of both coarse classification and local regression-based refinement simultaneously, which is similar to our MAST method. However, the main differences lie in the introduction of the cumulative target correlation regularization in our scheme to encourage domain-invariant pose representations revealing the dependent nature of regression targets." }, { "figure_ref": [], "heading": "Unsupervised Domain Adaptation on Visual Regression.", "publication_ref": [ "b91", "b0", "b38", "b56", "b107", "b97", "b95", "b83", "b0", "b101", "b10", "b44", "b44", "b72", "b68", "b72", "b89", "b4", "b105", "b78", "b107" ], "table_ref": [], "text": "Most of regression methods [Xu et al., 2019;Bao et al., 2022] employ annotated real data for model training, but manual annotations on real data are usually laboriously expensive or even unfeasible. Lack of sufficient annotated real data encourages the practical setting of Simulation-to-Reality (Sim2Real) UDAVR, i.e. learning a domain-agnostic representation given annotated synthetic data as source domain and unlabeled real data as target domain.\nA simple yet effective way to narrow Sim2Real domain gap can rely on domain randomization [Kehl et al., 2017;Manhardt et al., 2018], while recent success of self-supervised learning for UDAVC [Zou et al., 2021;Yue et al., 2021] inspired a number of self-supervised regressors [Wang et al., 2021a;Yang et al., 2021] in the context of Regression. Self6D [Wang et al., 2020] and its extension Self6D++ [Wang et al., 2021a] leveraged a differentiable renderer to conduct self-supervised visual and geometrical alignment on visible and amodal object mask predictions. Bao et al. [Bao et al., 2022] introduced a self-supervised representation learning of relative rotation estimation to adapt one gaze regressor to the target domain.\nZhang et al. [Zhang et al., 2021] utilized a Graph Convolutional Network to model domain-invariant geometry structure among key-points, which is applied to guide training of the object pose estimator on real images. These mentioned algorithms were designed for only one specific task and cannot be directly applied to other visual regression problems. [Chen et al., 2021] proposed the representation subspace distance (RSD) generic to multiple UDAVR problems, but cannot perform well on the challenging task having severe representation ambiguities, e.g. 6D pose estimation investigated in this paper (see Table 3). In contrast, the proposed MAST scheme is generic to UDAVR owing to exploiting explicit target correlation in the style of local manifolds to regularize deep representation learning agnostic to domains. Self-Training. Self-training methods utilize a trained model on labeled data to make predictions of unannotated data as pseudo labels [Lee and others, 2013] (i.e. supervision signals assigned to unlabeled data), which is widely used in semi-supervised learning [Lee and others, 2013;Sohn et al., 2020] and UDA [Roy et al., 2019]. [Sohn et al., 2020] generated pseudo labels from weakly augmented images, which are adopted as supervision of strongly augmented variants in semi-supervised learning; similar scripts are shared with the noisy student training [Xie et al., 2020]. [Chen et al., 2011] proposed the co-training for domain adaptation that slowly adding to the training set both the target features and instances in which the current algorithm is the most confident. [Zou et al., 2018] proposed the self-paced selftraining (SPST) for unsupervised domain adaptation classification that can perform a self-paced learning [Tang et al., 2012] with latent variable objective optimization. The representative SPST has inspired a number of follow-uppers such as [Zou et al., 2021] and [Chen et al., 2022b]. Nevertheless, all of existing self-training algorithms were designed for classification or segmentation, while self-training for the UDAVR remains a promising yet less explored direction." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "Given a source domain {I i S , y i S } N S i=1 with N S labeled samples and a target domain {I i T } N T i=1 with N T unlabeled sam-ples, tasks of UDAVR aim at learning a domain-invariant regression mapping to a shared continuous label space Y. In the context of our focused 6D object pose estimation, the source and target domains are often the synthetic and realworld data, respectively, while the shared label space between two domains is the whole learning space of SE(3).\nTo deal with the problems of UDAVR introduced in Sec. 1, e.g. , cumulative dependent nature and piece-wise manifolds in target space, we propose in this paper a manifoldaware self-training scheme, which decomposes the learning of SE(3) space into a global classification on discretized pose anchors and a local pose refinement for feature scaling robustness, and incorporates a self-supervised manifold regularization to the self-training." }, { "figure_ref": [ "fig_1" ], "heading": "The Design of Network Architecture", "publication_ref": [ "b42", "b103" ], "table_ref": [], "text": "Given a batch of B object-centric RGB images {I b } B b=1 as input, the proposed scheme is designed to predict 6D object poses {T b } B b=1 , with each pose T = [R|t] represented by a 3D rotation R ∈ SO(3) and a 3D translation t ∈ R 3 . The whole network architecture is shown in Fig. 2, which consists of three main modules, including a Feature Extractor, a Coarse Classifier of discretized pose anchors, and a Fine Regressor of Residual Poses to the nearest anchor.\nMore specifically, we employ the same feature extractor as [Labbé et al., 2020] to learn the pose-sensitive feature vectors {f b ∈ R C } B b=1 from each frame, which are then fed into the decoupled coarse classifier and fine regressor individually, whose output are combined together as final pose predictions. The former learns coarse poses via classification on the discretized pose anchors, while the latter learns residual poses to refine the coarse ones of pose anchors locally; both modules share the same input features, achieving superior robustness against inconsistent feature scaling. We will take a single image as an example to detail the two pose estimation modules shortly, and thus omit the superscript b of the notations for simplicity in the following subsections. Coarse Classification on Discretized Pose Anchors. Given the pose-sensitive feature f of I, the goal of this module is to globally make coarse predictions of R and t via classification on their pre-defined anchors, respectively. For the rotation R, we generate N R anchors that are uniformly distributed on the whole SO(3) space as [Li et al., 2018a], which are denoted as\n{R 1 a , • • • , R N R a }.\nFor the translation t, we factorize it into three individual classification targets, including the two translation components v x and v y on the image coordinate system along X-axis and Y-axis, with the remaining component z along Z-axis; for each classification target t ∈ {v x , v y , z}, we discretize the range of [d min t , d max t ] into N t bins uniformly, and use the bin centers {t 1 a , • • • , t Nt a } as the anchors of t. We implement the classifier as four Multilayer Perceptrons (MLPs) with N R , N vx , N vy , N z output neurons, which are collectively denoted as the probabilities S R ∈ R N R , S vx ∈ R Nv x , S vy ∈ R Nv y , and S z ∈ R Nz of R, v x , v y and z, respectively. Denoting their indexes of maximal probabilities as i max R , i max vx , i max vy and i max z , the classifier finally gives out their coarse pose predictions as Fine Regressor of Residual Poses. This module shares the same input feature f as the coarse classifier to make the learning more robust to feature scale variations, and is implemented as four MLPs with N R × 6, N vx , N vy , N z output neurons to regress the residuals of the pose anchors. We collectively denote the outputs as\nR cls = R i max R a , v x,cls = v i max vx x,a , v y,cls = v\n{R i reg,6D } N R i=1 , {v i x,reg } Nv x i=1 , {v i y,reg } Nv y\ni=1 , and {z i reg } Nz i=1 ; here we use the continuous 6D representations of rotation [Zhou et al., 2019] as the regression target, which can be transformed into rotation matrices\n{R i reg } N R i=1 .\nAccording to probabilities of the classifier, the fine regressor refines the coarse predictions via the residu- \nals R reg = R i max R reg , v x,reg = v i max vx x,reg , v y,reg = v\n       R = R reg • R cls x = (v x,cls + v x,reg ) • z/f x y = (v y,cls + v y,reg ) • z/f y z = z cls + z reg ,(1)\nwhere f x and f y are the focal lengths along X-axis and Yaxis, respectively." }, { "figure_ref": [], "heading": "Manifold-Aware Objective", "publication_ref": [ "b42", "b6" ], "table_ref": [], "text": "To train our network, we formulate the following manifoldaware objective L via combining a coarse-to-fine pose decomposition loss L pose with a cumulative target correlation regularization L ctc :\nL = L pose + L ctc ,(2)\nwhere L pose favors for domain-invariant representations in 6D pose estimation across domains, while L ctc enforces target manifolds into representation learning.\nCoarse-to-fine Pose Decomposition Loss. L pose consists of two loss terms L cls and L reg for the coarse classifier and the fine regressor, respectively, as follows:\nL pose = 1 B B b=1 L b cls + L b reg .(3)\nFor simplicity, we introduce L pose on single input, and thus omit the batch index b accordingly.\nFor the coarse classifier, given the ground truth pose T = [ R| t], with t = [x, ỹ, z] (and ṽx , ṽy ), we first adopt a sparse scoring strategy to assign the labels for S R , S vx , S vy and S z , resulting in SR , Svx , Svy and Sz , respectively, with each element si t (t ∈ {R, v x , v y , z}) assigned as follows:\nsi t =    θ t,1 , i ∈ NN 1 ( t) θ t,2 , i ∈ NN kt ( t)\\NN 1 ( t) 0, Otherwise ,(4)\nwhere θ t,1 ≫ θ t,2 , and\nθ t,1 + (k t -1)θ t,2 = 1. NN kt ( t)\ndenotes the set of indexes of the k t nearest anchors of t.1 With the assigned labels, we use the cross-entropy loss H on top of the classifier as follows:\nL cls = t∈{R,vx,vy,z} H(S t , St ).(5)\nFor the fine regressor, we make individual predictions on each anchor of t ∈ {R, (v x , v y ), z} by combining the paired classification and regression results, and supervise the predic-tions of their top K nearest anchors of t as follows:\nL reg = i∈NN k R ( R) D(T R i , T ) + i∈NN kz (z) D(T z i , T ) + i∈NN kv xvy (ṽx ṽy) D(T v i x v i y , T ),(6)\nwhere t i denotes the prediction of the anchor i of t, and T t i denotes the object pose computed by t i and other ground truths { R, (ṽ x , ṽy ), z}\\ t. D(•, •) is the L 1 distance between the point sets transformed by two object poses from the same object point cloud O, as follows:\nD(T , T ) = 1 |O| x∈O ∥T x -T x∥ 1 .(7)\nFollowing [Labbé et al., 2020], we combine the supervision of v x and v y for convenience in (6), and also employ the same strategy to handle object symmetries by finding the closest ground truth rotation to the predicted one.\nCumulative Target Correlation Regularization. For regression tasks, continuous targets preserve latent cumulative dependency [Chen et al., 2013]. When we discretize the continuously changing targets into discretized labels as classification, the assumption of independence across targets is adopted, which is invalid in regressing continuous targets. As a result, each class cannot seek support from samples of correlated class, which can significantly reduce performance especially for sparse and imbalanced data distributions. To better cope with this problem, we propose to regularize the features by an explicit relation in the regression target space.\nGiven the pose-sensitive feature vectors {f b ∈ R C } B b=1 of a mini-batch inputs {I b } B b=1 , we first build the feature correlation graph G ∈ R B×B across the data batch via feature cosine similarities, with the element g ij indexed by (i, j) computed as follows:\ng ij = < f i , f j > ||f i || 2 • ||f j || 2 ,(8)\nwhere < •, • > denotes inner product. We then build the ground truth G based on a pre-computed correlation graph G0 ∈ R N ×N with N pose classes; assuming the classes of I i and I j are n i and n j , respectively, we assign the value of gij ∈ G as that of gninj 0 . Finally, the proposed target correlation regularizer can be simply written as the squared L 2 distance between G and G:\nL ctc = ∥G -G∥ 2 2 . (9\n)\nThere are multiple choices for building the pose-related correlation graph G0 ; here we introduce a simple but effective one, which utilizes the similarity of depth components of translations along Z-axis to initialize G0 , with N = N z . Specifically, for the anchors {z 1 a , • • • , z N a } of z, we map them linearly to the angles {ϕ 1 , • • • , ϕ N } as follows:\nϕ n = z n a z max -z min • π 2 , (10\n)\nand the element gninj 0 of G0 indexed by (n i , n j ) can be defined as the cosine of difference between the angles:\ngninj 0 = cos(|ϕ ni -ϕ nj |).(11)\nWhen z ni a and z nj a are close, the difference of their corresponding angles is small, and thus the correlation value of gninj 0 will be large. The reason for choosing z is that the learning of this component is very challenging in 6D pose estimation without depth information. Experimental results in Sec. 4.2 also verify the effectiveness of our regularization." }, { "figure_ref": [], "heading": "Manifold-Aware Self-training", "publication_ref": [ "b105", "b105" ], "table_ref": [], "text": "To reduce the Sim2Real domain gap, we design a manifoldaware self-training scheme for unsupervisedly adapting the pose estimator, which adaptively incorporates our proposed manifold-aware training objective in (2) with Self-Paced Self-Training [Zou et al., 2018] to select target samples in an easyto-hard manner. More specifically, we first train a teacher model M T on the labeled synthetic data (source domain) as a pseudo-label annotator for the unlabeled real-world data (target domain), and select the training samples from the real data with pseudo labels for the learning of a student model M S . Both teacher and student models share the same networks introduced in Sec. 3.1, and are trained by solving the problems of min M T L and min M S L, respectively.\nThe core of sample selection on the target domain lies on the qualities of pseudo labels. For the tasks of visual classification, the categorical probabilities are usually used as the measurement of qualities, while for those of visual regression tasks, e.g. , object pose estimation in this paper, direct usage of the typical mean square error (MSE) can be less effective due to lack of directional constraints for adaptation. In geometric viewpoint, the surface of a super ball can have the same MSE distance to its origin, but the optimal regions of object surface for domain adaptation exist, which can be omitted by the MSE metric. Owing to the decomposition of object pose estimation into coarse classification and fine regression in our MAST scheme, we can flexibly exploit the classification scores to indicate the qualities of pseudo labels, since the coarse classification points out the overall direction of pose estimation. In practice, we use the probabilities S z as confidence scores because UDA on classification can perform more stably and robustly, and set a threshold τ to select the samples with scores larger than τ for training M S . Larger score indicates higher quality of the pseudo label. Following [Zou et al., 2018], the threshold τ is gradually decreased during training, realizing the learning in an easy-to-hard manner and making M S generalized to harder target samples." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b29", "b2", "b2", "b76", "b95", "b95", "b70", "b37", "b83", "b35", "b33", "b83", "b29", "b31", "b24", "b40", "b42" ], "table_ref": [], "text": "Datasets and Settings. The LineMOD dataset [Hinterstoisser et al., 2012] provides individual videos of 13 textureless objects, which are recorded in cluttered scenes with challenging lighting variations. For each object, we follow [Brachmann et al., 2014] to use randomly sampled 15% of the sequence as the real-world training data of the target domain, and the remaining images are set aside for testing. The Occluded LineMOD dataset [Brachmann et al., 2014] [Sundermeyer et al., 2020] 4.0 20.9 30.5 35.9 17.9 24.0 4.9 81.0 45.5 17.6 32.0 60.5 33.8 31.4 DSC-PoseNet [Yang et al., 2021] [Yang et al., 2021] 13.9 15.1 19.4 40.5 6.9 38.9 24.0 16.3 21.9 72.9 40.6 18.5 44.0 Sock et al. [Sock et al., 2020] 12.0 27.5 12.0 20.5 23.0 of the LineMOD with 8 different objects, which is formed by the images with severe object occlusions and self-occlusions.\nWe follow [Wang et al., 2021a] to split the training and test sets. The HomebrewedDB dataset [Kaskman et al., 2019] provides newly captured test images of three objects in the LineMOD, including bvise, driller and phone. Following the Self-6D [Wang et al., 2020], the second sequence of Home-brewedDB is used to test our models which are trained on the LineMOD, to evaluate the robustness of our method on different variations, e.g. , scene layouts and camera intrinsics. In the experiments, the above three real-world datasets are considered as the target domains, all of which share the same synthetic source domain. We employ the publicly available synthetic data provided by BOP challenge [Hodaň et al., 2020] as the source data, which contains 50k images generated by physically-based rendering (PBR) [Hodaň et al., 2019].\nEvaluation Metrics. Following [Wang et al., 2020], we employ the Average Distance of model points (ADD) [Hinterstoisser et al., 2012] as the evaluation metric of the 6D poses for asymmetric objects, which measures the average devia-tion of the model point set O transformed by the estimated pose T = [R|t] and that transformed by the ground-truth pose T = [ R| t]:\nD ADD (T , T ) = 1 |O| x∈O ∥(Rx + t) -( Rx + t)∥ 2 . (12\n)\nFor symmetric objects, we employ the metric of Average Distance of the closest points (ADD-S) [Hodaň et al., 2016]:\nD ADD-S (T , T ) = 1 |O| x1∈O min x2∈O ∥(Rx 1 +t)-( Rx 2 + t)∥ 2 .\n(13) Combining ( 12) and ( 13), we report the Average Recall (%) of ADD(-S) less than 10% of the object's diameter on all the three datasets. Implementation Details. For object detection, we use Mask R-CNN [He et al., 2017] [Wang et al., 2021a], we train individual networks for all the objects with the Adam optimizer [Kingma and Ba, 2014]. The teacher model M T is firstly pre-trained on the synthetic images of all objects, and then fine-tuned on the single object, while the parameters of the student model M S is initialized as those of M T ; their initial learning rates are 3×10 -4 and 3×10 -5 , respectively. The training batch size is set as B = 32. We also include the same data augmentation as [Labbé et al., 2020] during training." }, { "figure_ref": [], "heading": "Comparative Evaluation", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We compare our method with the existing ones on three benchmarks for 6D object pose estimation with RGB images.\nOn the LineMOD, we conduct experiments under three settings of training data, including 1) labeled synthetic data, 2) labeled synthetic and real data, and 3) labeled synthetic data and unlabeled real data. Results of the first two settings are the lower and upper bounds of that of the last setting. We report qualitative results of comparative methods in Table 1, where our method outperforms its competitors by large margins under all the settings, e.g. , with the respective improvements of 4.7%, 3.5% and 1.6% over the stateof-the-art Self6D++ [Wang et al., 2021a]. On the Occluded LineMOD and the HomebrewedDB, results are shown in Table 2, where our method consistently performs better than the existing ones on both datasets, demonstrating superior robustness of our method against occlusion and the generalization to new scenes and cameras." }, { "figure_ref": [], "heading": "Ablation Studies and Analyses", "publication_ref": [], "table_ref": [], "text": "Effects of Decomposing into Coarse Classification and Fine Regression. We decompose the problem of UDA on estimating object poses into a coarse classification on discretized anchors and a residual regression. As shown in Table 3, for the models trained purely on synthetic data, the design of pose decomposition realizes 4.0% and 5.7% improvements on the LineMOD and the Occluded LineMOD, respectively, compared to direct regression of object poses, since the global classification eases the difficulty in learning along with feature-scaling robustness, and the local regression achieves pose refinement." }, { "figure_ref": [ "fig_0", "fig_4" ], "heading": "Effects of Cumulative Target Correlation Regularization.", "publication_ref": [ "b82", "b10", "b62", "b24", "b42", "b42" ], "table_ref": [], "text": "As shown in Table 3, L ctc consistently improves the results under different settings across different datasets, e.g. , 5.6% improvement on the Occluded LineMOD for the model trained on synthetic data, which demonstrates the effectiveness of L ctc on mining latent correlation across regression targets. We also visualize the feature distribution of an example via the t-SNE [ Van der Maaten and Hinton, 2008] in Fig. 1, where, with L ctc , features assigned to different pose anchors preserve smooth and continuously changing nature of regression targets in the feature space. Compared to the RSD [Chen et al., 2021] designed for the problem of UDA on regression, our MAST scheme can significantly beat the competing RSD (see results in Table 3), where the only difference lies in replacing self-training on coarse classification with RSD on whole regression. Such an observation can again confirm the superiority of the proposed MAST scheme, consistently outperforming the state-of-theart UDA on regression.\nOn More Visual Regression Tasks. We conduct more experiments on the dSprites dataset [Matthey et al., 2017] for assessing UDAVR performance. For simplicity, the problem aims to regress the \"scale\" variable of a shape from images.\nUsing the same backbone as RSD, under the UDA setting from the scream (S) domain to the noisy (N) domain, our MAST can achieve 0.024 in terms of mean absolute error, while the RSD only obtains 0.043.\nRun-time analysis. On a server with NVIDIA GeForce RTX 3090 GPU, given a 640 × 480 image, the run-time of our network is up to 5.8 ms/object including object detection and pose estimation when using Mask R-CNN [He et al., 2017] as detector. Pose estimation takes around 5 ms/object.\nDetails of output pose. We employ a render-andcompare style pose refinement process as [Labbé et al., 2020] to get final object pose. An initial guess pose [R init , x init , y init , z init ] is calculated from bounding box and object CAD model using the same strategy as [Labbé et al., 2020]. Given the network output [R, x, y, z], the estimated object pose [R obj , x obj , y obj , z obj ] can be calculated by: On selecting samples with pseudo pose labels. We choose the probability S z as confidence scores in practice, Fig. 3 shows the average recall of selected samples with pseudo pose labels via S R , S vx , S vy , S z , which tells that as the confidence threshold becomes larger, only red line (S z ) grows in terms of the average recall while others remain unchanged or decreased.\n       R obj = R • R init x obj = x + x init y obj = y + y init z obj = z • z init ,(14)" }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes a novel and generic manifold-aware selftraining scheme for UDA on regression, which is applied to the challenging 6D pose estimation of object instances. We address the UDAVR problem via decomposing it into coarse classification and fine regression, together with a cumulative target correlation regularization. Experiment results on three popular benchmarks can verify the effectiveness of our MAST scheme, outperforming the state-of-the-art methods with significant margins. It is worth pointing out that our MAST scheme can readily be applied to any UDA regression tasks, as the UDA on coarse classification making our method robust against feature scaling while maintaining latent cumulative correlation underlying in regression target space." }, { "figure_ref": [], "heading": "Acknowledgments", "publication_ref": [], "table_ref": [], "text": "This work is supported in part by the National Natural Science Foundation of China (Grant No.: 61902131), the Guangdong Youth Talent Program (Grant No.: 2019QN01X246), the Guangdong Basic and Applied Basic Research Foundation (Grant No.: 2022A1515011549), the Program for Guangdong Introducing Innovative and Enterpreneurial Teams (Grant No.: 2017ZT07X183), and the Guangdong Provincial Key Laboratory of Human Digital Twin (Grant No.: 2022B1212010004)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "//github.com/Gorilla" } ]
Domain gap between synthetic and real data in visual regression (e.g. 6D pose estimation) is bridged in this paper via global feature alignment and local refinement on the coarse classification of discretized anchor classes in target space, which imposes a piece-wise target manifold regularization into domain-invariant representation learning. Specifically, our method incorporates an explicit self-supervised manifold regularization, revealing consistent cumulative target dependency across domains, to a self-training scheme (e.g. the popular Self-Paced Self-Training) to encourage more discriminative transferable representations of regression tasks. Moreover, learning unified implicit neural functions to estimate relative direction and distance of targets to their nearest class bins aims to refine target classification predictions, which can gain robust performance against inconsistent feature scaling sensitive to UDA regressors. Experiment results on three public benchmarks of the challenging 6D pose estimation task can verify the effectiveness of our method, consistently achieving superior performance to the state-of-the-art for UDA on 6D pose estimation.
Manifold-Aware Self-Training for Unsupervised Domain Adaptation on Regressing 6D Object Pose
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of the proposed Manifold-Aware Self Training (MAST) for UDA on 6D pose estimation. Top: a novel cumulative target correlation (CTC) regularization on representation learning.Middle: one bin with the highest confidence (i.e. highlighted with the largest red dish) with further local refinement (i.e. the gray lines) are adopted to generate pseudo pose labels in our MAST. Bottom: the t-SNE visualization of feature distribution w/ and w/o the proposed CTC, which can verify the effectiveness of introduction of target manifolds with a more smooth distribution; and comparison with our MAST and its backbone with an example from the Occluded LineMOD dataset is given.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The pipeline of our manifold-aware self-training scheme.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "predictions and their residuals, our proposed network can generate the final object pose T = [R|t], with t = [x, y, z], as follows:", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "trained purely on synthetic PBR images to generate the object bounding boxes for the target real data. For pose estimation, we set the numbers of anchors as N R = 60, N vx = N vy = 20, N z = 40, and set the ranges of v x , v y and z as [d min vx , d max vx ] = [d min vy , d max vy ] = [-200, 200], and [d min z , d max z] = [0.0, 2.0], respectively. To train our network, we choose θ R 1 = 0.7, θ R 2 = 0.1 and k R = 4 for rotation in (4), and also set θ vx 1 075, and k vx = k vy = k z = 7 for translation. Following the popular setting", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The effect of selecting samples via S R , Sv x , Sv y , Sz confidence w.r.t. the recall of the ADD(-S) among selected samples on driller object of LineMOD dataset training set.", "figure_data": "", "figure_id": "fig_4", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "is a subset", "figure_data": "MethodApe Bvise Cam Can CatDrill Duck Eggbox Glue Holep Iron Lamp Phone MeanData: syn (w/ GT)AAE", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparative evaluation on the LineMOD dataset w.r.t. the Average Recall (%) of ADD(-S). Symmetric object classes are in italic. 'MAR' (manifold-aware regression) denotes our method without self-training.", "figure_data": "23.4 75.611.7 40.1 26.7 53.814.073.626.719.556.2 39.420.037.0MHP [Manhardt et al., 2019]11.9 66.222.4 59.8 26.9 44.68.355.754.615.560.8-34.438.8DPOD [Zakharov et al., 2019]35.1 59.415.5 48.8 28.1 59.325.651.234.617.784.7 45.020.940.5SD-Pose [Li et al., 2021]54.0 76.450.2 81.2 71.0 64.254.093.992.624.077.0 82.653.767.3Self6D++ [Wang et al., 2021a]50.9 99.489.2 97.2 79.9 98.724.681.181.241.998.8 98.964.377.4MAR (ours)68.6 97.479.4 98.3 87.1 94.261.382.087.156.794.3 92.368.882.1Data: syn (w/ GT) + real (w/ GT)DPOD [Zakharov et al., 2019]53.3 95.290.0 94.1 60.4 97.466.099.693.864.999.8 88.171.482.6DSC-PoseNet [Yang et al., 2021] 59.2 98.188.0 92.1 79.4 94.551.798.593.978.496.2 96.390.085.9Self6D++ [Wang et al., 2021a]85.0 99.896.5 99.3 93.0 100.0 65.399.998.173.486.9 99.686.391.0MAR (ours)81.4 99.990.7 99.6 94.6 98.185.597.698.589.297.1 99.796.094.5Data: syn (w/ GT) + real (w/o GT)Self6D-RGB [Wang et al., 2020]0.010.13.10.00.07.50.133.00.20.05.920.72.46.4DSC-PoseNet [Yang et al., 2021] 35.9 83.151.5 61.0 45.0 68.027.689.252.526.456.3 68.746.354.7Zhang et al. [Zhang et al., 2021]-------------60.4Sock et al. [Sock et al., 2020]37.6 78.665.5 65.6 52.5 48.835.189.264.541.580.9 70.760.560.6Self6D++ [Wang et al., 2021a]76.0 91.697.1 99.8 85.6 98.856.591.092.235.499.5 97.491.885.6MAST (ours)73.5 97.280.8 98.6 89.1 93.966.995.395.469.895.5 98.679.187.2MethodOccluded LineMOD Ape Can Cat Drill Duck Eggbox Glue Holep Mean Bvise Drill Phone Mean HomebrewedDBData: syn (w/ GT)DPOD [Zakharov et al., 2019]2.34.01.27.210.54.412.97.56.352.937.87.332.7CDPN [Li et al., 2019]20.0 15.1 16.4 22.25.036.127.924.020.8----SD-Pose [Li et al., 2021]21.5 56.7 17.0 44.4 27.642.845.221.634.6----SSD6D+Ref. [Manhardt et al., 2018]---------82.022.924.943.3Self6D++ [Wang et al., 2021a]44.0 83.9 49.1 88.5 15.033.975.034.052.97.12.20.13.1MAR (ours)44.9 78.4 40.3 73.5 47.926.972.158.055.392.691.580.088.0Data: syn (w/ GT) + real (w/o GT)DSC-PoseNet", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparative evaluation on the Occluded LineMOD and HomebrewedDB datasets w.r.t. the Average Recall (%) of the ADD(-S). Symmetric object classes are in italic. 'MAR' (manifold-aware regression) denotes our method without self-training.", "figure_data": ".127.035.022.857.346.641.552.0", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Effects of Manifold-Aware Self-Training on Coarse Classification. The self-training schemes have been verified their effectiveness on reducing the Sim2Real domain gap by incorporating the unlabeled real data into training via pseudo label generation and training sample selection. Taking our network with L ctc as example, the results are improved from 55.3% to 61.4% on the Occluded LineMOD via self-training.", "figure_data": "Pose Estimator L ctcMethod of UDADataset LM LMOData: syn (w/ GT)Reg.×-75.3 44.0Cls. + Reg.×-79.3 49.7Cls. + Reg.✓-82.1 55.3Data: syn (w/ GT) + real (w/o GT)Cls. + Reg.✓RSD [Chen et al., 2021] 83.9 55.0Cls. + Reg.×Self-Training85.6 60.1Cls. + Reg.✓Self-Training87.2 61.4Table 3: Ablation studies on LineMOD (LM) and OccludedLineMOD (LMO) datasets w.r.t. the Average Recall (%) of ADD(-S). 'Reg.' denotes direct regression of object poses, while 'Cls. +Reg.' denotes the combined use of coarse classification and fine re-gressor for pose estimation.", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Yichen Zhang; Jiehong Lin; Ke Chen; Zelin Xu; Yaowei Wang; Kui Jia
[ { "authors": "Bao ", "journal": "", "ref_id": "b0", "title": "", "year": "2022" }, { "authors": "Yiwei Bao; Yunfei Liu; Haofei Wang; Feng Lu", "journal": "", "ref_id": "b1", "title": "Generalizing gaze estimation with rotation consistency", "year": "2022" }, { "authors": " Brachmann", "journal": "", "ref_id": "b2", "title": "", "year": "2014" }, { "authors": "Eric Brachmann; Alexander Krull; Frank Michel; Stefan Gumhold; Jamie Shotton; Carsten Rother", "journal": "", "ref_id": "b3", "title": "Learning 6d object pose estimation using 3d object coordinates", "year": "2014" }, { "authors": "Chen ", "journal": "", "ref_id": "b4", "title": "", "year": "2011" }, { "authors": "Minmin Chen; Kilian Q Weinberger; John Blitzer", "journal": "NeurIPS", "ref_id": "b5", "title": "Co-training for domain adaptation", "year": "2011" }, { "authors": "Chen ", "journal": "", "ref_id": "b6", "title": "", "year": "2013" }, { "authors": "Ke Chen; Shaogang Gong; Tao Xiang; Chen Change Loy", "journal": "", "ref_id": "b7", "title": "Cumulative attribute space for age and crowd density estimation", "year": "2013" }, { "authors": "Chen ", "journal": "", "ref_id": "b8", "title": "", "year": "2017" }, { "authors": "Xiaozhi Chen; Huimin Ma; Ji Wan; Bo Li; Tian Xia", "journal": "", "ref_id": "b9", "title": "Multi-view 3d object detection network for autonomous driving", "year": "2017" }, { "authors": "Chen ", "journal": "", "ref_id": "b10", "title": "", "year": "2021" }, { "authors": "Xinyang Chen; Sinan Wang; Jianmin Wang; Mingsheng Long", "journal": "", "ref_id": "b11", "title": "Representation subspace distance for domain adaptation regression", "year": "2021" }, { "authors": "Chen ", "journal": "", "ref_id": "b12", "title": "Epro-pnp: Generalized end-to-end probabilistic perspective-n-points for monocular object pose estimation", "year": "2022" }, { "authors": "Chen ", "journal": "", "ref_id": "b13", "title": "Quasi-balanced self-training on noise-aware synthesis of object point clouds for closing domain gap", "year": "2022" }, { "authors": " Collet", "journal": "", "ref_id": "b14", "title": "", "year": "2011" }, { "authors": "Alvaro Collet; Manuel Martinez; Siddhartha S Srinivasa", "journal": "", "ref_id": "b15", "title": "The moped framework: Object recognition and pose estimation for manipulation", "year": "2011" }, { "authors": "Bolles Fischler", "journal": "", "ref_id": "b16", "title": "", "year": "1981" }, { "authors": "A Martin; Robert C Fischler; Bolles", "journal": "Communications of the ACM", "ref_id": "b17", "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": " Gao", "journal": "", "ref_id": "b18", "title": "", "year": "2018" }, { "authors": "Ge Gao; Mikko Lauri; Jianwei Zhang; Simone Frintrop", "journal": "", "ref_id": "b19", "title": "Occlusion resistant object rotation regression from point cloud segments", "year": "2018" }, { "authors": " Geiger", "journal": "", "ref_id": "b20", "title": "", "year": "2012" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "", "ref_id": "b21", "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "year": "2012" }, { "authors": " Gopalan", "journal": "", "ref_id": "b22", "title": "", "year": "2011" }, { "authors": "Raghuraman Gopalan; Ruonan Li; Rama Chellappa", "journal": "", "ref_id": "b23", "title": "Domain adaptation for object recognition: An unsupervised approach", "year": "2011" }, { "authors": " He", "journal": "", "ref_id": "b24", "title": "Mask r-cnn", "year": "2017" }, { "authors": " He", "journal": "", "ref_id": "b25", "title": "", "year": "2020" }, { "authors": "Yisheng He; Wei Sun; Haibin Huang; Jianran Liu; Haoqiang Fan; Jian Sun", "journal": "", "ref_id": "b26", "title": "Pvn3d: A deep point-wise 3d keypoints voting network for 6dof pose estimation", "year": "2020" }, { "authors": " He", "journal": "", "ref_id": "b27", "title": "", "year": "2021" }, { "authors": "Yisheng He; Haibin Huang; Haoqiang Fan; Qifeng Chen; Jian Sun", "journal": "", "ref_id": "b28", "title": "Ffb6d: A full flow bidirectional fusion network for 6d pose estimation", "year": "2021" }, { "authors": " Hinterstoisser", "journal": "", "ref_id": "b29", "title": "", "year": "2012" }, { "authors": "Stefan Hinterstoisser; Vincent Lepetit; Slobodan Ilic; Stefan Holzer; Gary Bradski; Kurt Konolige; Nassir Navab", "journal": "", "ref_id": "b30", "title": "Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes", "year": "2012" }, { "authors": " Hodaň", "journal": "", "ref_id": "b31", "title": "", "year": "2016" }, { "authors": "Tomáš Hodaň; Jiří Matas; Štěpán Obdržálek", "journal": "", "ref_id": "b32", "title": "On evaluation of 6d object pose estimation", "year": "2016" }, { "authors": " Hodaň", "journal": "", "ref_id": "b33", "title": "", "year": "2019" }, { "authors": "Tomáš Hodaň; Vibhav Vineet; Ran Gal; Emanuel Shalev; Jon Hanzelka; Treb Connell; Pedro Urbina; Sudipta N Sinha; Brian Guenter", "journal": "", "ref_id": "b34", "title": "Photorealistic image synthesis for object instance detection", "year": "2019" }, { "authors": " Hodaň", "journal": "", "ref_id": "b35", "title": "", "year": "2020" }, { "authors": "Tomáš Hodaň; Martin Sundermeyer; Bertram Drost; Yann Labbé; Eric Brachmann; Frank Michel; Carsten Rother; Jiří Matas", "journal": "", "ref_id": "b36", "title": "BOP challenge 2020 on 6D object localization", "year": "2020" }, { "authors": " Kaskman", "journal": "", "ref_id": "b37", "title": "Homebreweddb: Rgbd dataset for 6d pose estimation of 3d objects", "year": "2019" }, { "authors": " Kehl", "journal": "", "ref_id": "b38", "title": "", "year": "2017" }, { "authors": "Wadim Kehl; Fabian Manhardt; Federico Tombari; Slobodan Ilic; Nassir Navab", "journal": "", "ref_id": "b39", "title": "Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again", "year": "2017" }, { "authors": "Ba Kingma", "journal": "", "ref_id": "b40", "title": "", "year": "2014" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b41", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": " Labbé", "journal": "", "ref_id": "b42", "title": "", "year": "2020" }, { "authors": "Yann Labbé; Justin Carpentier; Aubry Mathieu; Josef Sivic", "journal": "", "ref_id": "b43", "title": "Cosypose: Consistent multi-view multi-object 6d pose estimation", "year": "2020" }, { "authors": "Others Lee", "journal": "", "ref_id": "b44", "title": "", "year": "2013" }, { "authors": "Dong-Hyun Lee", "journal": "ICML", "ref_id": "b45", "title": "Pseudo-label: The simple and efficient semi-supervised learning method for deep neural networks", "year": "2013" }, { "authors": " Lee", "journal": "", "ref_id": "b46", "title": "", "year": "2022" }, { "authors": "Taeyeop Lee; Byeong-Uk Lee; Inkyu Shin; Jaesung Choe; Ukcheol Shin; In So Kweon; Kuk-Jin Yoon", "journal": "", "ref_id": "b47", "title": "Uda-cope: Unsupervised domain adaptation for category-level object pose estimation", "year": "2022" }, { "authors": " Li", "journal": "", "ref_id": "b48", "title": "A unified framework for multi-view multi-class object pose estimation", "year": "2018" }, { "authors": " Li", "journal": "", "ref_id": "b49", "title": "Deepim: Deep iterative matching for 6d pose estimation", "year": "2018" }, { "authors": " Li", "journal": "", "ref_id": "b50", "title": "", "year": "2019" }, { "authors": "Zhigang Li; Gu Wang; Xiangyang Ji", "journal": "", "ref_id": "b51", "title": "Cdpn: Coordinates-based disentangled pose network for real-time rgb-based 6-dof object pose estimation", "year": "2019" }, { "authors": " Li", "journal": "", "ref_id": "b52", "title": "", "year": "2021" }, { "authors": "Zhigang Li; Yinlin Hu; Mathieu Salzmann; Xiangyang Ji", "journal": "", "ref_id": "b53", "title": "Sd-pose: Semantic decomposition for cross-domain 6d object pose estimation", "year": "2021" }, { "authors": "Lin ", "journal": "", "ref_id": "b54", "title": "", "year": "2022" }, { "authors": "Jiehong Lin; Zewei Wei; Changxing Ding; Kui Jia", "journal": "", "ref_id": "b55", "title": "Category-level 6d object pose and size estimation using self-supervised deep prior deformation networks", "year": "2022" }, { "authors": " Manhardt", "journal": "", "ref_id": "b56", "title": "", "year": "2018" }, { "authors": "Fabian Manhardt; Wadim Kehl; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b57", "title": "Deep model-based 6d pose refinement in rgb", "year": "2018" }, { "authors": " Manhardt", "journal": "", "ref_id": "b58", "title": "", "year": "2019" }, { "authors": "Fabian Manhardt; Diego Martin Arroyo; Christian Rupprecht; Benjamin Busam; Tolga Birdal; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b59", "title": "Explaining the ambiguity of object detection and 6d pose from visual data", "year": "2019" }, { "authors": " Marchand", "journal": "", "ref_id": "b60", "title": "", "year": "2015" }, { "authors": "Eric Marchand; Hideaki Uchiyama; Fabien Spindler", "journal": "IEEE transact. on visualization and computer graphics", "ref_id": "b61", "title": "Pose estimation for augmented reality: a hands-on survey", "year": "2015" }, { "authors": " Matthey", "journal": "", "ref_id": "b62", "title": "", "year": "2017" }, { "authors": "Loic Matthey; Irina Higgins; Demis Hassabis; Alexander Lerchner", "journal": "", "ref_id": "b63", "title": "dsprites: Disentanglement testing sprites dataset", "year": "2017" }, { "authors": " Park", "journal": "", "ref_id": "b64", "title": "", "year": "2019" }, { "authors": "Kiru Park; Timothy Patten; Markus Vincze", "journal": "", "ref_id": "b65", "title": "Pix2pose: Pixel-wise coordinate regression of objects for 6d pose estimation", "year": "2019" }, { "authors": " Peng", "journal": "", "ref_id": "b66", "title": "", "year": "2019" }, { "authors": "Sida Peng; Yuan Liu; Qixing Huang; Xiaowei Zhou; Hujun Bao", "journal": "", "ref_id": "b67", "title": "Pvnet: Pixel-wise voting network for 6dof pose estimation", "year": "2019" }, { "authors": " Roy", "journal": "", "ref_id": "b68", "title": "", "year": "2019" }, { "authors": "Subhankar Roy; Aliaksandr Siarohin; Enver Sangineto; Samuel Rota Bulo; Nicu Sebe; Elisa Ricci", "journal": "", "ref_id": "b69", "title": "Unsupervised domain adaptation using featurewhitening and consensus loss", "year": "2019" }, { "authors": " Sock", "journal": "", "ref_id": "b70", "title": "", "year": "2020" }, { "authors": "Juil Sock; Guillermo Garcia-Hernando; Anil Armagan; Tae-Kyun Kim", "journal": "", "ref_id": "b71", "title": "Introducing pose consistency and warp-alignment for self-supervised 6d object pose estimation in color images", "year": "2020" }, { "authors": " Sohn", "journal": "", "ref_id": "b72", "title": "", "year": "2020" }, { "authors": "Kihyuk Sohn; David Berthelot; Nicholas Carlini; Zizhao Zhang; Han Zhang; Colin A Raffel; Ekin Dogus Cubuk; Alexey Kurakin; Chun-Liang Li", "journal": "NeurIPS", "ref_id": "b73", "title": "Fixmatch: Simplifying semi-supervised learning with consistency and confidence", "year": "2020" }, { "authors": " Su", "journal": "", "ref_id": "b74", "title": "", "year": "2015" }, { "authors": "Hao Su; Yangyan Charles R Qi; Leonidas J Li; Guibas", "journal": "", "ref_id": "b75", "title": "Render for cnn: Viewpoint estimation in images using cnns trained with rendered 3d model views", "year": "2015" }, { "authors": " Sundermeyer", "journal": "", "ref_id": "b76", "title": "", "year": "2020" }, { "authors": "Martin Sundermeyer; Zoltan-Csaba Marton; Maximilian Durner; Rudolph Triebel", "journal": "IJCV", "ref_id": "b77", "title": "Augmented autoencoders: Implicit 3d orientation learning for 6d object detection", "year": "2020" }, { "authors": " Tang", "journal": "", "ref_id": "b78", "title": "", "year": "2012" }, { "authors": "Kevin Tang; Vignesh Ramanathan; Li Fei-Fei; Daphne Koller", "journal": "NeurIPS", "ref_id": "b79", "title": "Shifting weights: Adapting object detectors from image to video", "year": "2012" }, { "authors": " Tekin", "journal": "", "ref_id": "b80", "title": "", "year": "2018" }, { "authors": " Bugra Tekin; N Sudipta; Pascal Sinha; Fua", "journal": "", "ref_id": "b81", "title": "Real-time seamless single shot 6d object pose prediction", "year": "2018" }, { "authors": " Van Der Maaten; ; Hinton; Geoffrey Maaten; Hinton", "journal": "Journal of machine learning research", "ref_id": "b82", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": " Wang", "journal": "", "ref_id": "b83", "title": "", "year": "2020" }, { "authors": "Gu Wang; Fabian Manhardt; Jianzhun Shao; Xiangyang Ji; Nassir Navab; Federico Tombari", "journal": "", "ref_id": "b84", "title": "Self6d: Self-supervised monocular 6d object pose estimation", "year": "2020" }, { "authors": " Wang", "journal": "IEEE TPAMI", "ref_id": "b85", "title": "Occlusionaware self-supervised monocular 6d object pose estimation", "year": "2021" }, { "authors": " Wang", "journal": "", "ref_id": "b86", "title": "Gdr-net: Geometry-guided direct regression network for monocular 6d object pose estimation", "year": "2021" }, { "authors": " Xiang", "journal": "", "ref_id": "b87", "title": "", "year": "2018" }, { "authors": "Yu Xiang; Tanner Schmidt; Venkatraman Narayanan; Dieter Fox", "journal": "RSS", "ref_id": "b88", "title": "Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes", "year": "2018" }, { "authors": " Xie", "journal": "", "ref_id": "b89", "title": "", "year": "2020" }, { "authors": "Qizhe Xie; Minh-Thang Luong; Eduard Hovy; Quoc V Le", "journal": "", "ref_id": "b90", "title": "Self-training with noisy student improves imagenet classification", "year": "2020" }, { "authors": " Xu", "journal": "", "ref_id": "b91", "title": "", "year": "2019" }, { "authors": "Zelin Xu; Ke Chen; Kui Jia", "journal": "", "ref_id": "b92", "title": "Wposenet: Dense correspondence regularized pixel pair pose regression", "year": "2019" }, { "authors": " Xu", "journal": "", "ref_id": "b93", "title": "", "year": "2022" }, { "authors": "Yan Xu; Kwan-Yee Lin; Guofeng Zhang; Xiaogang Wang; Hongsheng Li", "journal": "", "ref_id": "b94", "title": "Rnnpose: Recurrent 6-dof object pose refinement with robust correspondence field estimation and pose optimization", "year": "2022" }, { "authors": "Yang ", "journal": "", "ref_id": "b95", "title": "", "year": "2021" }, { "authors": "Zongxin Yang; Xin Yu; Yi Yang", "journal": "", "ref_id": "b96", "title": "Dsc-posenet: Learning 6dof object pose estimation via dual-scale consistency", "year": "2021" }, { "authors": " Yue", "journal": "", "ref_id": "b97", "title": "", "year": "2021" }, { "authors": "Xiangyu Yue; Zangwei Zheng; Shanghang Zhang; Yang Gao; Trevor Darrell; Kurt Keutzer; Alberto Sangiovanni Vincentelli", "journal": "", "ref_id": "b98", "title": "Prototypical cross-domain self-supervised learning for few-shot unsupervised domain adaptation", "year": "2021" }, { "authors": " Zakharov", "journal": "", "ref_id": "b99", "title": "", "year": "2019" }, { "authors": "Sergey Zakharov; Ivan Shugurov; Slobodan Ilic", "journal": "", "ref_id": "b100", "title": "Dpod: 6d pose object detector and refiner", "year": "2019" }, { "authors": " Zhang", "journal": "", "ref_id": "b101", "title": "", "year": "2021" }, { "authors": "Shaobo Zhang; Wanqing Zhao; Ziyu Guan; Xianlin Peng; Jinye Peng", "journal": "", "ref_id": "b102", "title": "Keypoint-graphdriven learning framework for object pose estimation", "year": "2021" }, { "authors": " Zhou", "journal": "", "ref_id": "b103", "title": "", "year": "2019" }, { "authors": "Yi Zhou; Connelly Barnes; Jingwan Lu; Jimei Yang; Hao Li", "journal": "", "ref_id": "b104", "title": "On the continuity of rotation representations in neural networks", "year": "2019" }, { "authors": " Zou", "journal": "", "ref_id": "b105", "title": "", "year": "2018" }, { "authors": "Yang Zou; Zhiding Yu; Jinsong Kumar; Wang", "journal": "", "ref_id": "b106", "title": "Unsupervised domain adaptation for semantic segmentation via class-balanced self-training", "year": "2018" }, { "authors": " Zou", "journal": "", "ref_id": "b107", "title": "", "year": "2021" }, { "authors": "Longkun Zou; Hui Tang; Ke Chen; Kui Jia", "journal": "", "ref_id": "b108", "title": "Geometry-aware self-training for unsupervised domain adaptation on object point clouds", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 378.24, 532.89, 71.45, 12.19 ], "formula_id": "formula_0", "formula_text": "{R 1 a , • • • , R N R a }." }, { "formula_coordinates": [ 3, 315, 679.47, 243, 25.37 ], "formula_id": "formula_1", "formula_text": "R cls = R i max R a , v x,cls = v i max vx x,a , v y,cls = v" }, { "formula_coordinates": [ 4, 54, 353.5, 243, 30.26 ], "formula_id": "formula_2", "formula_text": "{R i reg,6D } N R i=1 , {v i x,reg } Nv x i=1 , {v i y,reg } Nv y" }, { "formula_coordinates": [ 4, 54, 406.42, 46.88, 13.34 ], "formula_id": "formula_3", "formula_text": "{R i reg } N R i=1 ." }, { "formula_coordinates": [ 4, 54, 429.46, 203.3, 15.68 ], "formula_id": "formula_4", "formula_text": "als R reg = R i max R reg , v x,reg = v i max vx x,reg , v y,reg = v" }, { "formula_coordinates": [ 4, 110.65, 500.62, 186.35, 51.52 ], "formula_id": "formula_5", "formula_text": "       R = R reg • R cls x = (v x,cls + v x,reg ) • z/f x y = (v y,cls + v y,reg ) • z/f y z = z cls + z reg ,(1)" }, { "formula_coordinates": [ 4, 137.73, 655.61, 159.27, 9.65 ], "formula_id": "formula_6", "formula_text": "L = L pose + L ctc ,(2)" }, { "formula_coordinates": [ 4, 378.61, 342.75, 179.39, 30.55 ], "formula_id": "formula_7", "formula_text": "L pose = 1 B B b=1 L b cls + L b reg .(3)" }, { "formula_coordinates": [ 4, 353.06, 479.07, 204.94, 34.24 ], "formula_id": "formula_8", "formula_text": "si t =    θ t,1 , i ∈ NN 1 ( t) θ t,2 , i ∈ NN kt ( t)\\NN 1 ( t) 0, Otherwise ,(4)" }, { "formula_coordinates": [ 4, 415.48, 526.35, 142.52, 11.49 ], "formula_id": "formula_9", "formula_text": "θ t,1 + (k t -1)θ t,2 = 1. NN kt ( t)" }, { "formula_coordinates": [ 4, 372.65, 588.21, 185.35, 23.07 ], "formula_id": "formula_10", "formula_text": "L cls = t∈{R,vx,vy,z} H(S t , St ).(5)" }, { "formula_coordinates": [ 5, 64.33, 75.1, 232.67, 57.52 ], "formula_id": "formula_11", "formula_text": "L reg = i∈NN k R ( R) D(T R i , T ) + i∈NN kz (z) D(T z i , T ) + i∈NN kv xvy (ṽx ṽy) D(T v i x v i y , T ),(6)" }, { "formula_coordinates": [ 5, 104.55, 204.54, 192.45, 26.8 ], "formula_id": "formula_12", "formula_text": "D(T , T ) = 1 |O| x∈O ∥T x -T x∥ 1 .(7)" }, { "formula_coordinates": [ 5, 130.56, 471.5, 166.44, 24.8 ], "formula_id": "formula_13", "formula_text": "g ij = < f i , f j > ||f i || 2 • ||f j || 2 ,(8)" }, { "formula_coordinates": [ 5, 138.9, 590.97, 154.23, 13.14 ], "formula_id": "formula_14", "formula_text": "L ctc = ∥G -G∥ 2 2 . (9" }, { "formula_coordinates": [ 5, 293.13, 593.81, 3.87, 8.64 ], "formula_id": "formula_15", "formula_text": ")" }, { "formula_coordinates": [ 5, 125.77, 684.2, 167.08, 23.89 ], "formula_id": "formula_16", "formula_text": "ϕ n = z n a z max -z min • π 2 , (10" }, { "formula_coordinates": [ 5, 292.85, 692.83, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 5, 383.45, 84.24, 174.55, 12.89 ], "formula_id": "formula_18", "formula_text": "gninj 0 = cos(|ϕ ni -ϕ nj |).(11)" }, { "formula_coordinates": [ 6, 323.57, 516.59, 230.28, 26.8 ], "formula_id": "formula_20", "formula_text": "D ADD (T , T ) = 1 |O| x∈O ∥(Rx + t) -( Rx + t)∥ 2 . (12" }, { "formula_coordinates": [ 6, 553.85, 523.65, 4.15, 8.64 ], "formula_id": "formula_21", "formula_text": ")" }, { "formula_coordinates": [ 6, 315, 575.47, 243, 26.8 ], "formula_id": "formula_22", "formula_text": "D ADD-S (T , T ) = 1 |O| x1∈O min x2∈O ∥(Rx 1 +t)-( Rx 2 + t)∥ 2 ." }, { "formula_coordinates": [ 7, 393.11, 655.02, 164.89, 51.52 ], "formula_id": "formula_23", "formula_text": "       R obj = R • R init x obj = x + x init y obj = y + y init z obj = z • z init ,(14)" } ]
10.18653/v1/P17-1074
2023-10-17
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b23", "b34", "b22", "b21", "b18", "b36", "b7", "b9", "b1", "b27", "b10", "b14" ], "table_ref": [], "text": "Grammatical Error Correction (GEC) is a task that involves making local substitutions to correct grammatical errors in a given ungrammatical text (Bryant et al., 2022;Ma et al., 2022;Ye et al., 2022;Ma et al., 2023). The practical value of GEC in daily life has led to increasing attention being paid to this task (Li et al., 2021(Li et al., , 2022a,b,b et al., 2022;Li et al., 2023;Zhang et al., 2023). However, it is intractable to evaluate GEC systems due to the highly subjective nature of the task and the low inter-annotator agreement (IAA) (Choshen and Abend, 2018). Therefore, most datasets improve compatibility by incorporating multiple references to guarantee a more realistic evaluation of the model performance.\nThere are two broad categories of GEC metrics: reference-based and reference-less. Referencebased metrics evaluate GEC systems by comparing their hypotheses and human-annotated references in terms of edits (Dahlmeier and Ng, 2012;Bryant et al., 2017) or n-grams (Napoles et al., 2015). Reference-less metrics are proposed to evaluate GEC systems without references. However, Deutsch et al. (2022) demonstrate that reference-less metrics are inherently biased and limited in their ability to evaluate generated text. Therefore, we focus on reference-based metrics, which can evaluate in an interpretable manner, thus providing useful insights for model analysis.\nFigure 1 illustrates how existing reference-based metrics, such as ERRANT, extract the edit and then compute the F 0.5 score by comparing hypotheses and references. However, these metrics often fail to consider multiple references, which can result in bias during multi-reference evaluation. We argue that this bias arises because the current approach rewards equally good corrections unfairly. For in-stance, the ungrammatical phrase the technologies were is equally well-corrected by both Ref. 1 and Ref. 2. However, if a hypothesis aligns with Ref. 1's corrections (i.e.,[the → ϵ] and [were → have], TP=2), it will be rewarded less than the corrections of Ref. 2 (i.e.,[the → ϵ], [technologies → technology] and [were → has], TP=3).\nIn this paper, we propose Chunk-LEvel Multireference Evaluation (CLEME), which enables unbiased F 0.5 scores for GEC multi-reference evaluation. Inspired by (Gotou et al., 2020), CLEME transforms the source, the hypothesis and all the references into chunk sequences with consistent boundaries, thereby eliminating the bias in GEC multi-reference evaluation.\nExisting metrics assume that corrections of grammatical errors are dependent. That is, whenever there is more than one reference for a source, the metrics try each reference in turn, and then the highest score is taken as the final score. However, we observe that grammatical errors corrections in terms of chunks can be considered approximately independent. For example, the ungrammatical phrases the technologies were and for shown in Figure 1 can be corrected independently, i.e., the correction of the technologies were has no bearing on the correction of for. Based on this observation, we compute F 0.5 scores following the assumption that corrections of grammatical errors are independent. Specifically, we iterate through the chunks of a hypothesis and consider a chunk correct if it matches any of the corresponding chunks in the references. In this case, the hypothesis in Figure 1 would be rewarded 2TP, rather than 1TP and 1FP, which is the traditional case. To demonstrate the effectiveness and robustness of CLEME, we conduct experiments on six English reference sets with varying numbers of references and annotation styles, either calculating the F 0.5 score at the corpus-or sentence-level.\nIn summary, our contributions are three folds:\n(1) We propose CLEME, a reference-based metric that evaluates GEC systems at the chunk-level, aiming to provide unbiased F 0.5 scores for GEC multi-reference evaluation.\n(2) We observe that the corrections of grammatical errors in terms of chunks are approximately independent. Therefore, we propose to compute F 0.5 scores based on the correction independence assumption.\n(3) Extensive experiments and human evaluation are conducted to confirm the effectiveness and robustness of our approach.\n2 Preliminary Study" }, { "figure_ref": [ "fig_0" ], "heading": "Consistent Boundaries", "publication_ref": [], "table_ref": [], "text": "We determine consistent chunk-level boundaries by chunk partition process to debias the multireference evaluation, as depicted in Figure 2. We first extract the edit sets of the hypothesis and references, and then merge the overlapping edits into a chunk. It's worth noting that the source, hypothesis and references are all segmented into chunk sequences with the same number of chunks, regardless of the number of their tokens. This process is straightforward since we can locate and examine all possible corrections of an erroneous chunk.\nFor example, the chunk by the can be corrected in two ways, i.e., with in Ref. 1 and through in Ref.\n2. The resulting chunks fall into three categories: 1) unchanged chunks, which contain the same text segments as the source sentence, 2) corrected chunks, which consist of non-empty text segments different from the source sentence, and 3) dummy chunks are empty chunks." }, { "figure_ref": [ "fig_0" ], "heading": "Boundaries of Grammatical Errors", "publication_ref": [ "b2", "b38" ], "table_ref": [ "tab_1" ], "text": "Figure 2 illustrates the merging of overlapping edits into either corrected or dummy chunks, which are then separated by unchanged chunks. This raises the question, are chunk boundaries the boundaries of grammatical errors?\nDataset. To answer the question, we conduct experiments on BN-10GEC (Bryant and Ng, 2015). The dataset comprises 1,312 source sentences that are identical to the CoNLL-2014 test data (Ng et al., 2014). Each source sentence is associated with 10 references annotated by 10 native English speakers, including two official annotators of CoNLL-2014, the first author of the paper, and seven freelancers recruited via an online recruitment website.\nExperiment Setup. For each source sentence, we sample 9 references and run the chunk partition process described in Section 2.1. The resulting chunk sequences are determined collectively by all 9 references. The edits of the remaining reference {e 1 , • • • , e M } are then used to calculate the following three statistics: 1) The In-Corrected-Chunk (ICC) ratio indicates the proportion of edits included by corrected/dummy chunks of the other references. An edit is included by a chunk if the interval of the edit falls within that of the chunk.\n2) The In-Unchanged-Chunk (IUC) ratio gives the proportion of edits included by unchanged chunks of the other references.\n3) The Cross-Chunk (CC) ratio computes the proportion of edits that extend beyond the original boundaries. These statistics are calculated as follows:\nICC = 1 M M i=1 f1(ei),(1)\nIUC = 1 M M i=1 f2(ei),(2)\nCC = 1 -ICC -IUC, (3\n)\nwhere M is the number of edits from the remaining reference. If the edit e i is included in a corrected/dummy chunk, the function f 1 (e i ) returns 1, otherwise 0. Likewise, if the edit e i is included in an unchanged chunk, the function f 2 (e i ) returns 1, otherwise 0. We sample 9 different references for chunk partition in each run and repeatedly calculate the statistics using the remaining reference.\nResults. As shown in Table 1, the number of corrected and dummy chunks are less than that of edits since overlapping edits are merged into a chunk. A total of 90.66% edits are included by the corrected/dummy chunks, which suggests the grammatical errors to be corrected have been considered by the other references. However, only 7.74% edits are included by corrected chunks, indicating that these edits may be over-corrected since the other references believe no grammatical errors needed correction. Interestingly, 1.61% edits cross the chunk boundaries, suggesting that the chunk boundaries are stable enough to serve as the boundaries of grammatical errors to some extent. Additionally, human evaluation in Section 4.2 could be used as another argument to support this conclusion. Therefore, we have the following assumption.\nCorrection independence assumption: grammatical error corrections are independent.\nThat is, the correction of a grammatical error does not impact the correction of other grammatical errors. With this assumption, F 0.5 scores can be calculated using an alternate method, which will be introduced in Section 3. " }, { "figure_ref": [ "fig_1" ], "heading": "Length Weighting", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "The average length of chunks is much longer than that of edits shown in Table 1, resulting in the unfairness of chunk evaluation if a longer chunk is rewarded equally with a shorter one. Therefore, we add length weighting to the chunk evaluation. The intuition of length weighting is to compensate for long chunk matching. The weights of True Positives (TPs), False Positives (FPs), and False Negatives (FNs) are computed as follows: 2\nw TP = clip α1 1 + (α1 -1) exp(ℓ -x)\n, cmin, cmax , (4)\nw FP = clip α2 1 + (α2 -1) exp(x -ℓ)\n, cmin, cmax , (5) \nw FN = clip α3 1 + (α3 -1) exp(ℓ -x) , cmin, cmax ,(6)\nwhere α 1 , α 2 and α 3 are scale factors for TPs, FPs and FNs respectively, x is the length of the chunk, ℓ is the average length of chunks, and the function clip(v, a, b) clips the value v between a and b. The curves of length weighting are depicted in Figure 3. Formally, given a system corrected/dummy chunk set C H and a gold corrected/dummy chunk set C R , we apply length weighting on each chunk to compute precision, recall and F 0.5 as follows:\nP = c∈C H ∩C R w TP c c∈C H ∩C R w TP c + c∈C H \\C R w FP c ,(7)\nR = c∈C H ∩C R w TP c c∈C H ∩C R w TP c + c∈C R \\C H w FN c ,(8)\nF β = (1 + β 2 ) • P • R (β 2 • P ) + R ,(9)\nwhere β = 0.5 is usually used, which weighs precision twice as much as recall. A curve with a larger scale factor has a greater slope.\n0 1 2 3 4 5 6 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Length of Chunk x Length Weight w α 1 = 2 α 1 = 3 α 1 = 5 α 2 = 2 α 2 = 3 α 2 = 5" }, { "figure_ref": [], "heading": "Corpus-level v.s. Sentence-level", "publication_ref": [ "b13" ], "table_ref": [], "text": "We compute F 0.5 scores of GEC systems at both corpus-level and sentence-level following (Gong et al., 2022). Corpus-level metrics compute an F 0.5 score over the entire dataset. Sentence-level metrics compute an F 0.5 score over each sentence of the dataset and evaluate GEC systems by us-ing the average F 0.5 score. CLEME-dependent and CLEME-independent are corpus-level metrics, and their sentence-level variants are respectively SentCLEME-dependent and SentCLEMEindependent. Both levels of the GEC metric are developed to provide more user-friendly options. Sentence-level metrics should be used if consistent evaluation weight for each sample is desired. This ensures that the evaluation result of each sample has the same influence on the final score. On the other hand, if harder samples containing more edits should have larger weight, then corpus-level metrics should be used instead." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Correlations with Human Judgments", "publication_ref": [ "b15", "b2", "b32", "b38", "b15", "b24", "b33", "b13", "b6", "b27", "b1", "b13", "b32" ], "table_ref": [ "tab_3" ], "text": "Dataset. To verify the effectiveness of CLEME, we measure correlations between reference-based metrics and human judgments on multiple English reference sets, including CoNLL-2014(Grundkiewicz et al., 2015), BN-10GEC (Bryant and Ng, 2015) and SN-8GEC (Sakaguchi et al., 2016). All the reference sets are based on CoNLL-2014 (Ng et al., 2014), consisting of 1,312 source sentences. SN-8GEC collected 8 references sets of annotations from both experts and non-experts, including 4 sets of minimal edits and 4 sets of fluency edits (2 by experts and 2 by non-experts). Reference sets statistics are reported in Appendix A.\nThe human judgments for the outputs of 13 GEC systems (including the unchanged source text) are presented by (Grundkiewicz et al., 2015), where eight native speaker were asked to rank the output of all the systems from best to worst. Two system ranking lists are generated using Expected Wins (EW) (Macháček and Bojar, 2013) and TrueSkill (TS) (Sakaguchi et al., 2014) respectively.\nExperiment Settings. Following (Gong et al., 2022;Chollampatt and Ng, 2018), we compute the Pearson γ and Spearman correlation coefficient ρ between reference-based metrics and human judgments based on corpus-level ranking. We tune the hyperparameters on CoNLL-2014 and keep the hyperparameters on the other reference sets, in order to demonstrate the adaptability of our approach. The detailed hyperparameters of CLEME are reported in Appendix B.\nEvaluation Metrics. We compare our approach with the following reference-based metrics, includ-ing corpus-and sentence-level variants3 :\n• GLEU and SentGLEU (Napoles et al., 2015) are n-gram based metrics, which reward hypothesis n-grams that overlap with the reference but not the source and penalize hypothesis n-grams that overlap with the source but not the reference.\n• M 2 and SentM 2 (Dahlmeier and Ng, 2012) dynamically extract the hypothesis edits with the maximum overlap of gold annotations.\n• ERRANAT and SentERRANT (Bryant et al., 2017) extract edits by utilizing a linguisticallyenhanced alignment algorithm.\n• PT-M 2 and SentPT-M 2 (Gong et al., 2022) are recently proposed reference and PLMbased GEC metric, which score edits using the knowledge of pre-trained language model.\nAdditionally, CLEME can evaluate GEC systems by accuracy scores, which is usually not implemented by conventional reference-based metrics. Please refer to Appendix C for the introduction and analyses of evaluating GEC systems by accuracy.\nResults. Table 2 reports the correlations between reference-based metrics and human judgments. For the corpus-level metrics, GLEU achieves the highest correlations on BN-10GEC and NE-fluency reference sets. However, GLEU suffers from negative correlations on NE-Minimal, which is caused by low-quality annotations4 of NE-Minimal, indicating that GLEU may not be a robust metric, consistent with the findings of (Sakaguchi et al., 2016). ERRANT performs slightly better than M 2 on most reference sets, while PT-M 2 is a strong corpus-level metric, which achieves the highest or comparable correlations on all reference sets at the cost of more than 10× running time than other reference-based metrics. Our proposed CLEME-dependent and CLEME-independent make better use of consistent chunk boundaries, thus performing slightly better than M 2 and ERRANT on most reference sets. Notably, CLEME-independent achieves comparable performance to CLEME-dependent, showing the effectiveness of computing F 0.5 scores based on the correction independence assumption.\nThe majority of the sentence-level metrics outperform their corpus-level counterparts because they weigh samples equally, which is in line with the bias of human annotation. Despite the strong performance of PT-M 2 , SentPT-M 2 achieves lower correlations on BN-10GEC, E-Fluency and NE-Fluency compared to other sentence-level metrics. It suggests that scoring edits using pre-trained language models may not generalize well to unseen reference sets for sentence-level metrics. Our approach aligns better with human judgments than existing reference-based metrics for most reference sets. Specifically, SentCLEME-dependent performs best on BN-10GEC and NE-Fluency, and performs on a par with the best metric on E-Fluency, indicating it is more suitable for fluent reference sets. This phenomenon aligns with our intuition since fluent editing is more likely to follow the correction dependence assumption. In contrast, SentCLEME-independent achieves higher correlations on E-Minimal and NE-Minimal, as we would expect from minimal editing that is more likely to follow the correction independence assumption. These results suggests that reference sets may have a preference towards one of the correction assump-tions. Additionally, our approach achieves higher correlations on (N)E-Fluency rather than (N)E-Minimal, while SentM 2 and SentERRANT perform worse on E-Fluency than E-Minimal. This is because CLEME evaluates GEC systems using longer chunks rather than scrappy edits, which could better reflect whether a grammatical error is fluently corrected. Overall, our approach achieves higher or comparable correlations on sentencelevel than existing reference-based methods." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [ "b2", "b8", "b2" ], "table_ref": [], "text": "Experiments have shown the effectiveness of evaluating GEC systems based on the correction independence assumption. In this section, we aim to demonstrate whether the correction independence assumption makes sense for humans. We define the correction independence of a pair of chunks as the irrelevance of the correction of one chunk to the correction of the other. A simple case is presented in Appendix E. To evaluate this assumption, we conduct human evaluation experiments on 1,000 sentences randomly sampled from BN-10GEC (Bryant and Ng, 2015). Three annotators were asked to judge whether a pair of chunks is correction-independent. dence and Cohen's-κ (Cohen, 1960) inter-annotator agreement (IAA) across the three annotators. Results show that more than 90% pairs of chunks are correction-independent for all the annotators, indicating that it is reasonable to evaluate GEC systems based on the correction independence assumption. Moreover, considering the subjectivity of GEC task, the IAA statistics show that it is relatively easy to judge whether a pair of chunks is correction-independent, compared with the previous study (Bryant and Ng, 2015) 5 ." }, { "figure_ref": [], "heading": "Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "False Negative", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "We observe that the number of false negatives (FNs) identified by CLEME is significantly lower than that of ERRANT. This difference can be attributed to the distinct definitions used by each system. While ERRANT considers FNs as edits in the reference that do not match those made in the hypothesis, CLEME identifies FNs as corrected/dummy chunks in the reference that do not match the chunks in the hypothesis. We argue the definition of ERRANT is problematic, as it tends to 5 Bryant and Ng (2015) attempted to compute IAA at the sentence level. Three raters were asked simply to decide whether 200 sentences were correct or not. The authors reported IAA of just 0.16, 0.4 and 0.23. overestimate FN counts in grammatical error correction (GEC) systems, which is evident from the examples presented in Table 4. On the other hand, CLEME's definition also includes true negatives (TNs), making it possible to calculate accuracy." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "We present ablation analyses of our approaches on BN-10GEC -we have similar findings on other reference sets. We report Pearson correlations γ using Expected Wins ranking. The trend is similar for Spearson correlations and TrueSkill ranking." }, { "figure_ref": [], "heading": "Number of References.", "publication_ref": [], "table_ref": [], "text": "Since CLEME is designed for multi-reference evaluation, it degrades to conventional reference-based metrics such as M 2 and ERRANT when only one reference is available. Here we demonstrate how correlations change against an increasing number of available references. The results reported in Figure 4 indicate that the correlations of corpus-level metrics do not change significantly with the increasing number of available references. However, except for SentGLEU, correlations of sentence-level metric are consistently higher than corpus-level metrics, and steadily increase with more references. Therefore, we recommend evaluating GEC systems using sentence-level metrics rather than corpus-level metrics for the multi-reference evaluation setting.\nParameter Sensitivity Analysis. The scale factors introduced in Section 3.2 dictate how much the weights of chunks change with their length. We report the corrections for various scale factors, as shown in Figure 5. The results demonstrate that CLEME is resilient to hyperparameter selection." }, { "figure_ref": [], "heading": "Case Study", "publication_ref": [], "table_ref": [ "tab_8" ], "text": "Table 5 presents additional examples of CLEME. In the top group, chunk 2 and chunk 4 of the hypothesis respectively match those of Ref. 1 and Ref. 2. In this case, CLEME-dependent gives TP=1 and FP=1, while CLEME-independent gives TP=2. In the second group, the hypothesis exactly corrects the ungrammatical word firghtenning in chunk 4. However, it cannot be rewarded since the entire chunk is not corrected. In the bottom group, two given references have made extensive modifications, with an unchanged chunk young people. Evaluating hypotheses in terms of chunks is generally more challenging than fragmented edits, but it provides a more comprehensive diagnosis.\nEven though there are larger grammatical errors spanning a significant portion of a sentence, CLEME would not necessarily collapse, i.e., producing one single correction chunk spanning the entire sentence. If collapse happens, the quality of the reference set should be checked first. This is because that collapse happens only if the input sentences of chunk partition are completely different, resulting in a trivial chunk partition result, which is an extreme case that has not been observed in our experiments.\n6 Related Work" }, { "figure_ref": [], "heading": "Reference-based Metrics", "publication_ref": [ "b9", "b1", "b1", "b31" ], "table_ref": [ "tab_9" ], "text": "Reference-based metrics score GEC systems under the guidance of manually written references. M 2 scorer (Dahlmeier and Ng, 2012) determines an optimal edit sequence between a source sentence and a system hypothesis that achieves the highest overlap with the gold-standard annotation. The performance of each system is then represented using the F 0.5 score. However, optimality in terms of overlap does not guarantee optimality in GEC evaluation. Bryant et al. (2017) showed that M 2 scorer exploits its dynamic edit boundary prediction to artificially maximize true positives and minimize false positives, thus producing slightly inflated scores. Therefore, (Bryant et al., 2017) proposed ERRANT, which improves edit extraction using a linguistically-enhanced alignment algorithm and merging rules, improving the alignment of tokens with similar linguistic properties. Despite its effectiveness, ERRANT is language-dependent and bias still exists in multi-reference evaluation. Inspired by BLEU (Papineni et al., 2002) in NMT, Napoles et al. (2015) proposed GLEU, an n-gram based metric for GEC evaluation. To remedy the shortcoming that F 0.5 is unable to differentiate a do-nothing system and a bad system unless TP > 0, I-measure (Felice and Briscoe, 2015) generates an exact (global optimal) alignment using a three-way alignment algorithm and computes weighted accuracy to score GEC systems in terms of relative textual improvement. The comparison of reference-based GEC metrics is shown in Table 6." }, { "figure_ref": [], "heading": "Reference-less Metrics", "publication_ref": [ "b28", "b0", "b28", "b35", "b25", "b11", "b28" ], "table_ref": [], "text": "To overcome the limitation of references for GEC evaluation, recent works focus on scoring GEC systems without the help of references. Inspired by quality estimation in the NMT community, Napoles et al. (2016) proposed three Grammaticality-Based Metrics (GBMs) given by an existing GEC system or a pretrained ridge regression model. Asano et al. (2017) extended GBMs (Napoles et al., 2016) with other assessment criteria, including grammaticality, fluency and meaning preservation. SOME (Yoshimura et al., 2020) is a reference-less metric consisting of sub-metrics that are optimized for manual evaluation, which combines three regression models trained on the constructed dataset. Scribendi Score (Islam and Magnani, 2021) evaluates a GEC system using a combination of language model perplexity and sorted token/Levenshtein distance ratios. IMPARA (Maeda et al., 2022) comprises a quality estimator (QE) and similarity estimator (SE) based on BERT (Devlin et al., 2019), which evaluates the quality of the GEC output and semantic similarity of two sentences respectively. Although reference-less metrics Napoles et al. (2016) can achieve high agreement with human judgments, they lack interpretability as metrics for GEC evaluation. Essentially, reference-less metrics are equivalent to evaluating GEC systems using other trained GEC systems, which could pose latent risk. Additionally, the efficiency of reference-less metrics is also critical if used for GEC benchmark." }, { "figure_ref": [], "heading": "Meta Evaluation Methods", "publication_ref": [ "b4", "b38" ], "table_ref": [], "text": "It is intractable to determine the best GEC metric.\nA reasonable GEC metric should take into account multiple factors, including correlation with human judgments, interpretability and efficiency. Inspired by WMT human evaluation campaigns (Callison-Burch et al., 2008), 13 system outputs (including the unchanged source) from the CoNLL-2014 shared task (Ng et al., 2014) " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This paper proposes CLEME, a reference-based GEC metric that aim to provide unbiased F 0.5 scores for multi-reference evaluation. We explore evaluating GEC systems based on either the correction dependence assumption or the correction independence assumption. Several possible approaches can be suggested to further improve CLEME. For example, developing (1) a GEC metric that adaptively combines dependent and independent assumptions, and (2) a weighting strategy by utilizing the knowledge of pre-trained model. In the future, we would like to develop CLEME for all languages and admonstrate the effectiveness of CLEME on other languages. It is also worthwhile to explore accuracy-based metrics." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "Although CLEME can be extended to other languages, we have not tested its effectiveness in any language other than English. Furthermore, all the reference sets used in our experiments are based on the CoNLL-2014 shared task, a secondlanguage dataset. To demonstrate the robustness of our approaches, further experiments on evaluation datasets with multiple text domains are required. We believe that introducing the correction independence assumption perspective into GEC datasets of other languages and domains could lead to more in-depth analysis and exploration. While recent PLM-based metrics have shown superior correlations compared to reference-based metrics, including ours on some reference sets, our approach enables the evaluation of GEC systems in an interpretable manner, which is a significant advantage over reference-less metrics. We leave the exploration of incorporating the PLM's knowledge into CLEME for future work." }, { "figure_ref": [], "heading": "D Detailed Analysis", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "Table 10 reports the detailed evaluation results of 13 systems on CoNLL-2014. The reason behind the lower TP and FP counts of CLEME as compared to ERRANT is attributed to the chunk partition process, where overlapping edits are merged into chunks. It is worth noting that the FN counts of CLEME are significantly lower than those of ER-RANT because of their distinct definition. While ERRANT considers FNs as the edits of references that are not identical to hypotheses, CLEME defines them as the corrected/dummy chunks of references that do not exactly match the chunks of hypotheses. We believe that the definition of ER-RANT could be problematic, as it has a tendency to overestimate the FN counts of GEC systems. This may result in an underestimated Recall rate in turn.\n-dependent v.s. -independent. Comparing the Precision and Recall of (Sent)CLEME-independent to those of (Sent)CLEME-dependent, it is observed that the former has a slightly higher value. This is because (Sent)CLEME-independent has the potential to overestimate the performance of GEC systems, whereas (Sent)CLEME-dependent could result in underestimating the same. It is noteworthy that both metrics provide an upper bound and lower bound for GEC performance, respectively.\nCorpus-level v.s. Sentence-level. The precision, recall, and F 0.5 scores of sentence-level metrics are considerably higher than those of corpus-level variants. There might be several factors contributing to this difference, but one possible explanation is that precision and recall values get affected by a limited number of challenging samples that contain numerous corrected/dummy chunks." }, { "figure_ref": [], "heading": "E Correction Independence", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "We introduce the term correction independence to describe a pair of chunks where the correction of each chunk is not related to the correction of the other, as illustrated in Table 11. Specifically, chunk 2 and chunk 4 are considered correctiondependent because the correction of chunk 2 family do from Ref.9 must be matched with the correction of chunk 4 help then from Ref.9. However, chunk 6 is correction-independent with chunk 2 (or 4) since the correction of chunk 6 has no impact on the correction of chunk 2 (or 4). " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This research is supported by National Natural Science Foundation of China (Grant No.62276154), Research Center for Computer Network(Shenzhen)Ministry of Education, the Natural Science Foundation of Guangdong Province (Grant No. 2023A1515012914), Basic Research Fund of Shenzhen City (Grant No. JCYJ20210324120012033 and JSGG20210802154402007), the Major Key Project of PCL for Experiments and Applications (PCL2021A06), and Overseas Cooperation Research Fund of Tsinghua Shenzhen International Graduate School (HW2021008)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "In this paper, we verify the effectiveness of our proposed approach using CoNLL-2014, BN-10GEC, and SN-8GEC reference sets, all of which are from publicly available datasets or resources on legitimate websites without sensitive data involved. All the baselines used in our experiments are also publicly available GEC metrics and we have cited the corresponding authors. We confirm that all datasets and baselines used in our experiments are consistent with their intended use.\nAdditionally, we conduct human evaluation experiments to show the rationality of correction independence assumption. To do so, three postgraduate students specializing in foreign linguistics and applied linguistics were employed as part-time annotators. Each annotator could complete the entire annotation process within approximately 6 working hours. All annotators were paid for their work, with an average salary of approximately $5 per hour." }, { "figure_ref": [], "heading": "A Statistics of Reference Sets", "publication_ref": [], "table_ref": [], "text": "Table 7 presents the statistics of all reference sets involved in our experiments, including In-Corrected-Chunk (ICC) ratio, Unchanged-Chunk (IUC) ratio and Cross-Chunk (CC) ratio. It is worth noting that all reference sets exhibit a low CC ratio with varying ICC and IUC ratios, indicating the rationality and feasibility of evaluating GEC systems following the correction independence assumption." }, { "figure_ref": [], "heading": "B Hyperparameters", "publication_ref": [], "table_ref": [], "text": "The hyperparameters of our proposed CLEME consist of scale factors α and thresholds. We tune the hyperparameters on CoNLL-2014 and keep them on the other reference sets to demonstrate the adaptability of our method. The hyperparameters of CLEME are listed in Table 8." }, { "figure_ref": [], "heading": "C Evaluate by Accuracy", "publication_ref": [ "b28", "b26" ], "table_ref": [], "text": "Conventional reference-based metrics such as Max-Match (M 2 ) and ERRANT are unable to calculate accuracy because they do not define True Negatives (TNs) 6 . In order to implement the computation of accuracy, CLEME defines TNs as hypothesis unchanged chunks that match the chunks of references. Similar to F 0.5 , accuracy can be computed based on correction dependence or independence assumptions in both corpus-and sentence-level, resulting in four new variants: 1) CLEME-dependent-acc, 2) CLEMEindependent-acc, 3) SentCLEME-dependentacc, and 4) SentCLEME-independent-acc.\nThe results of human correlations are reported in Table 9. Accuracy-based metrics perform very differently at the corpus-and sentence-level, which is similar to the findings (Napoles et al., 2016(Napoles et al., , 2019)). Surprisingly, two accuracy-based corpus-level metrics, i.e., CLEME-dependent-acc and CLEMEindependent-acc, result in negative correlations on all reference sets. However, their sentencelevel variants, i.e., SentCLEME-dependent-acc and SentCLEME-independent-acc, perform well and achieve the highest correlations on some reference sets. Regarding the disparity between the performance of accuracy-based metrics and F 0.5 -based metrics at the sentence level, one notable difference is their stability or robustness on reference sets with varying numbers of references and annotation styles. F 0.5 -based metrics are more robust to different reference sets, where SentCLEME-(in)dependent achieves comparable correlations with the best metric on all reference sets. However, the performance of accuracy-based metrics lags far behind other metrics on some reference sets (BN-10GEC, E-Minimal and NE-Fluency). A deeper investigation into this phenomenon is needed to understand the instability of accuracy-based metrics.\nWe leave the exploration and further analysis of accuracy-based metric for future work." } ]
Evaluating the performance of Grammatical Error Correction (GEC) systems is a challenging task due to its subjectivity. Designing an evaluation metric that is as objective as possible is crucial to the development of GEC task. However, mainstream evaluation metrics, i.e., referencebased metrics, introduce bias into the multireference evaluation by extracting edits without considering the presence of multiple references. To overcome this issue, we propose Chunk-LEvel Multi-reference Evaluation (CLEME), designed to evaluate GEC systems in the multireference evaluation setting. CLEME builds chunk sequences with consistent boundaries for the source, the hypothesis and references, thus eliminating the bias caused by inconsistent edit boundaries. Furthermore, we observe the consistent boundary could also act as the boundary of grammatical errors, based on which the F 0.5 score is then computed following the correction independence assumption. We conduct experiments on six English reference sets based on the CoNLL-2014 shared task. Extensive experiments and detailed analyses demonstrate the correctness of our discovery and the effectiveness of CLEME. Further analysis reveals that CLEME is robust to evaluate GEC systems across reference sets with varying numbers of references and annotation styles 1 .
CLEME: Debiasing Multi-reference Evaluation for Grammatical Error Correction
[ { "figure_caption": "Figure 2 :2Figure2: Overview of our approach CLEME. CLEME first 1) extracts edits of the hypothesis and the references, 2) merges the overlapping edits into chunks, and then 3) computes the F 0.5 scores based on two different assumptions.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Curves of length weighting with different α for ℓ = 2. All the curves pass through the point (ℓ, 1.0).A curve with a larger scale factor has a greater slope.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "Ref. 2Nowadaystechnology hasimproved a lot comparedwith the last century.Hyp.Nowadaystechnologieshaveimproved a lot comparedwiththe last century.", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of the BN-10GEC dataset.", "figure_data": "ItemNumber (perc.) LengthSentences1,31223.0References13,12022.9Edits36,6771.0Unchanged Chunks93,469 (77.63%)2.5Corrected/Dummy Chunks 26,948 (22.37%)2.4ICC33,251 (90.66%)-IUC2,837 (7.74%)-CC589 (1.61%)-3 Method3.1 Chunk EvaluationAs shown in Figure 2, each chunk consists of editoperation(s), start index, end index, and correct to-kens. Conventional reference-based metrics suchas MaxMatch (M 2 ) and ERRANT compute F 0.5scores based on the correction dependence assump-tion. They evaluate the performance for each ref-erence separately and select the one that yieldsthe best result for the source sentence. CLEME-dependent also computes F 0.5 scores in this wayby treating corrected/dummy chunks as edits. Onthe other hand, CLEME-independent is proposedto compute F 0.5 scores based on the correctionindependence assumption. A corrected/dummychunk from the hypothesis is considered correctif it matches one of the corresponding chunks fromthe references. It is worth noting that CLEME isable to fully inherit pre-classified errors from ER-RANT, where each corrected/dummy chunk mayconsist of multiple error with different types.", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "011235710Peoplebuilda relationshipwith otherswithsocial media .Peoplebuildrelationshipswith othersthroughsocial media .Peoplebuildrelationshipwith othersby thesocial media .", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": ".938 0.602 ♣ 0.682 ♣ 0.831 ♣ 0.855 ♣ 0.689 0.763 0.770 ♣ 0.822 ♣ 0.648 0.725 ρ 0.907 0.874 0.626 ♣ 0.670 ♣ 0.808 ♣ 0.819 ♣ 0.797 0.841 0.813 ♣ 0.857 ♣ 0.742 0.786 ♣ 0.819 ♣ 0.824 0.863 0.797 ♣ 0.846 ♣ 0.791 0.846 SentCLEME-independent (Ours) γ 0.868 0.857 0.855 ♣ 0.876 ♣ 0.821 ♣ 0.856 ♣ 0.841 0.877 0.782 ♣ 0.831 ♣ 0.852 0.896 ρ 0.725 0.758 0.659 ♣ 0.714 ♣ 0.775 ♣ 0.819 ♣ 0.808 0.846 0.819 ♣ 0.874 ♣ 0.762 0.825 Overview of correlations between mainstream GEC metrics and human judgments. We highlight the highest score in bold and the second-highest score with underlines. SN-8GEC consists of four reference sets, i.e.,", "figure_data": "MetricCoNLL-2014BN-10GECE-MinimalE-FluencyNE-MinimalNE-FluencyEWTSEWTSEWTSEWTSEWTSEWTSM 2γ 0.623 0.672 0.547 ρ 0.687 0.720 0.6480.610 0.6920.597 0.6540.650 0.7030.590 0.659 0.575 0.654 0.709 0.5770.634 0.6480.582 0.649 0.648 0.703GLEUγ 0.701 0.750 0.678 ρ 0.467 0.555 0.7540.761 0.8060.533 0.5770.513 0.5110.693 0.771 -0.044 -0.113 0.674 0.767 0.710 0.757 -0.005 -0.055 0.725 0.819ERRANTγ 0.642 0.688 0.586 ρ 0.659 0.698 0.6370.644 0.6980.578 0.7420.631 0.7860.594 0.663 0.585 0.720 0.775 0.7470.637 0.7970.597 0.659 0.753 0.797PT-M 2γ 0.693 0.737 0.650 ρ 0.758 0.769 0.6900.706 0.8240.626 0.7090.667 0.7360.621 0.681 0.630 0.758 0.802 0.7360.675 0.7580.620 0.682 0.758 0.802CLEME-dependent (Ours)γ 0.648 0.691 0.602 ρ 0.709 0.742 0.6920.656 0.7470.594 0.7970.644 0.8130.589 0.654 0.595 0.714 0.775 0.7860.643 0.8350.612 0.673 0.720 0.791CLEME-independent (Ours)γ 0.649 0.691 0.609 ρ 0.709 0.731 0.6920.659 0.7470.593 0.7910.643 0.8020.587 0.653 0.601 0.731 0.791 0.7970.647 0.8410.611 0.672 0.714 0.786SentM 2γ 0.871 0.864 0.567 ρ 0.731 0.758 0.5930.646 0.6480.805 ♣ 0.836 ♣ 0.655 0.732 0.729 ♣ 0.785 ♣ 0.621 0.699 0.806 ♣ 0.845 ♣ 0.731 0.764 0.797 ♣ 0.846 ♣ 0.632 0.687SentGLEUγ 0.784 0.828 0.756 ρ 0.720 0.775 0.7690.826 0.8240.742 ♣ 0.773 ♣ 0.785 0.846 0.723 ♣ 0.762 ♣ 0.778 0.848 0.764 ♣ 0.797 ♣ 0.791 0.846 0.764 ♣ 0.830 ♣ 0.768 0.846SentERRANTγ 0.870 0.846 0.885 ρ 0.742 0.747 0.7860.896 0.8300.768 ♣ 0.803 ♣ 0.806 0.732 0.710 ♣ 0.765 ♣ 0.793 0.847 0.775 ♣ 0.819 ♣ 0.813 0.764 0.780 ♣ 0.841 ♣ 0.830 0.857SentPT-M 2 γ 0.949 0SentCLEME-dependent (Ours) γ 0.876 0.844 0.915 ρ 0.824 0.808 0.8350.913 0.8740.806 ♣ 0.838 ♣ 0.849 0.886 0.742 ♣ 0.795 ♣ 0.876 0.921 0.775", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "", "figure_data": "reports the ratio of correction indepen-", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "A comparison of correction independence annotations across three annotators.", "figure_data": "", "figure_id": "tab_5", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Cases of ERRANT and CLEME. ERRANT gives FP=1 and FN=1 since the hypothesis does not match one of the edits of references. CLEME gives only FP=1 since the hypothesis tries to correct the error.", "figure_data": "1.00.8CLEME-dependentCLEME-independentSentCLEME-dependentSentCLEME-independent0.6246810Scale Factor αFigure 5: Effect of scale factors on BN-10GEC.TextFP FNSource It has improved compared for the last century.Hyp.It has improved compared between the last century.ERRANTRef. 1 It has improved compared to the century.11Ref. 2 It has improved compared with the century.11CLEMERef. 1 It has improved compared to the century.10Ref. 2 It has improved compared with the century.10", "figure_id": "tab_6", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "This is especially the case for young people who are unmarried , if he/she is known to have some genetic risk .", "figure_data": "Chunk 1Chunk 2Chunk 3Chunk 4Chunk 5Chunk 6Source On the other hand , if there areways canhelp us to controlorcure the disease , we cangoing .Hyp.On the other hand , if there are ways that can help us to controlandcure the disease , we cango .Ref. 1 On the other hand , if there are ways that can help us to controlorcure the disease , we cango .Ref. 2 On the other hand , if there are things that can help us to controlandcure the disease , we cango .Chunk 1 Chunk 2Chunk 3Chunk 4Chunk 5SourceOnone hand , we do not want this potential danger causing firghtenning affects in our lives .Hyp.Onone hand , we do not want this potential danger causing frightening affects in our lives .Ref. 1Onone hand , we do not want this potential danger having frightening effects in our lives .Ref. 2Ontheone hand , we do not want this potential danger to have frightening effects on our lives .Chunk 1Chunk 2Chunk 3Chunk 4SourceEspecially for theyoung peoplewithout marrige , if he/she isknown to have some genetic risk .Hyp.Especially for theyoung people without marriage , if the latter is known to have some genetic risk .Ref. 1Especially for unmarriedyoung peoplemarrige who areknown to have some genetic risk .Ref. 2", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Cases of chunk partition. These tables are automatically generated by CLEME. More cases from multiple datasets and language are provided in Appendix F.", "figure_data": "", "figure_id": "tab_8", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Reference-based MetricsGranularityScoreDeterministicM 2 (Dahlmeier and Ng, 2012) Phrase-level EditFβ•GLEU (Napoles et al., 2015)N-gramWeighted Precision•ERRANT (Bryant et al., 2017) Phrase-level EditFβ•CLEME (Ours)Chunk-level EditFβ•", "figure_id": "tab_9", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "♣ 0.920 ♣ 0.604 ♣ 0.555 ♣ 0.693 ♣ 0.637 ♣ 0.840 ♣ 0.891 ♣ 0.791 ♣ 0.763 ♣ 0.756 ♣ 0.822 ♣ ρ 0.830 ♣ 0.849 ♣ 0.363 ♣ 0.303 ♣ 0.588 ♣ 0.544 ♣ 0.857 ♣ 0.890 ♣ 0.654 ♣ 0.626 ♣ 0.747 ♣ 0.819 ♣ Overview of correlations between reference-based metrics and human judgments. We highlight the highest score in bold and the second-highest score with underlines. ♣ We remove unchanged reference sentences for higher correlations due to low-quality annotations. Otherwise, negative correlations are possible.", "figure_data": "MetricCoNLL-2014BN-10GECE-MinimalE-FluencyNE-MinimalNE-FluencyEWTSEWTSEWTSEWTSEWTSEWTSM 2γ 0.623 ρ 0.6870.672 0.7200.547 0.6480.610 0.6920.597 0.6540.650 0.7030.590 0.6540.659 0.7090.575 0.5770.634 0.6480.582 0.6480.649 0.703GLEUγ 0.701 ρ 0.4670.750 0.5550.678 0.7540.761 0.8060.533 0.5770.513 0.5110.693 0.7100.771 0.757-0.044 -0.113 0.674 -0.005 -0.055 0.7250.767 0.819ERRANTγ 0.642 ρ 0.6590.688 0.6980.586 0.6370.644 0.6980.578 0.7420.631 0.7860.594 0.7200.663 0.7750.585 0.7470.637 0.7970.597 0.7530.659 0.797CLEME-dependent (Ours)γ 0.648 ρ 0.7090.691 0.7420.602 0.6920.656 0.7470.594 0.7970.644 0.8130.589 0.7140.654 0.7750.595 0.7860.643 0.8350.612 0.7200.673 0.791CLEME-independent (Ours)γ 0.649 ρ 0.7090.691 0.7310.609 0.6920.659 0.7470.593 0.7910.643 0.8020.587 0.7310.653 0.7910.601 0.7970.647 0.8410.611 0.7140.672 0.786CLEME-dependent-acc (Ours)γ -0.261 -0.342 -0.288 -0.371 -0.222 -0.313 -0.216 -0.302 -0.370 -0.453 -0.430 -0.513 ρ -0.407 -0.478 -0.445 -0.516 -0.335 -0.423 -0.347 -0.437 -0.429 -0.516 -0.473 -0.555CLEME-independent-acc (Ours)γ -0.175 -0.262 -0.206 -0.284 -0.195 -0.283 -0.105 -0.189 -0.335 -0.420 -0.328 -0.412 ρ -0.176 -0.264 -0.341 -0.418 -0.291 -0.379 -0.132 -0.231 -0.429 -0.516 -0.451 -0.522SentM 2γ 0.871 ρ 0.7310.864 0.7580.567 0.5930.646 0.6480.805 ♣ 0.836 ♣ 0.655 0.806 ♣ 0.845 ♣ 0.7310.732 0.7640.729 ♣ 0.785 ♣ 0.621 0.797 ♣ 0.846 ♣ 0.6320.699 0.687SentGLEUγ 0.784 ρ 0.7200.828 0.7750.756 0.7690.826 0.8240.742 ♣ 0.773 ♣ 0.785 0.764 ♣ 0.797 ♣ 0.7910.846 0.8460.723 ♣ 0.762 ♣ 0.778 0.764 ♣ 0.830 ♣ 0.7680.848 0.846SentERRANTγ 0.870 ρ 0.7420.846 0.7470.885 0.7860.896 0.8300.768 ♣ 0.803 ♣ 0.806 0.775 ♣ 0.819 ♣ 0.8130.732 0.7640.710 ♣ 0.765 ♣ 0.793 0.780 ♣ 0.841 ♣ 0.8300.847 0.857SentCLEME-dependent (Ours)γ 0.876 ρ 0.8240.844 0.8080.915 0.8350.913 0.8740.806 ♣ 0.838 ♣ 0.849 0.775 ♣ 0.819 ♣ 0.8240.886 0.8630.742 ♣ 0.795 ♣ 0.876 0.797 ♣ 0.846 ♣ 0.7910.921 0.846SentCLEME-independent (Ours)γ 0.868 ρ 0.7250.857 0.7580.855 ♣ 0.876 ♣ 0.821 ♣ 0.856 ♣ 0.841 0.659 ♣ 0.714 ♣ 0.775 ♣ 0.819 ♣ 0.8080.877 0.8460.782 ♣ 0.831 ♣ 0.852 0.819 ♣ 0.874 ♣ 0.7620.896 0.825SentCLEME-dependent-acc (Ours)γ 0.828 ρ 0.8130.857 0.8410.650 0.6820.719 0.7400.808 ♣ 0.838 ♣ 0.679 0.830 ♣ 0.852 ♣ 0.7310.740 0.7860.757 ♣ 0.811 ♣ 0.557 0.853 ♣ 0.894 ♣ 0.6550.641 0.702SentCLEME-independent-acc (Ours) γ 0.900 Metric AMU CAMB CUUI IITB INPUT IPN NTHU PKU POST RAC SJTU UFC UMCTP4837256072805240929450831910436311FP7951329985650488991697115279426114774ERRANTFN P1934 37.791886 35.301946 2064 38.13 30.112070 100.02078 9.631976 29.21 29.67 30.60 28.66 28.49 72.00 28.66 2007 1985 2044 2036 2069 2020R19.9827.7723.78 1.340.002.4417.15 12.78 20.38 13.50 4.861.71 13.34F 0.532.0833.4834.02 5.680.006.0625.61 23.46 27.81 23.40 14.44 7.81 23.31TP314482379170332661953332036924213w/o LW382588471220393302464122548532216FP87213921034720529975776124678229219844w/o LW8151303964670488905709114478227218788FN11829871169 1564159214451191125911581278 1471 1583 1266CLEME-dependentw/o LW 134511321333 1751178216341366142613331453 1657 1772 1439TN631263476245 6313630864126295631464496310 6324 6280 6377P26.4525.7426.81 19.29100.05.8521.42 20.06 21.07 20.60 19.02 56.40 20.14R20.9732.8424.48 1.090.002.2218.23 13.39 22.31 13.71 4.451.52 14.40F 0.525.1426.9026.31 4.450.004.4120.69 18.24 21.31 18.72 11.50 6.85 18.65P63.0541.0757.60 95.27100.067.45 55.17 63.62 53.26 65.05 82.90 98.70 61.78SentCLEME-dependentR F 0.548.87 36.2459.94 32.9451.21 32.37 37.39 31.5131.33 31.3336.07 49.71 44.84 50.61 44.57 36.06 32.15 44.43 23.25 32.24 32.56 33.34 32.37 31.93 32.30 31.46TP318488392170332721963392046924214w/o LW388596487220393382484202558532262FP86413821016720529965773123678129219843w/o LW8091295948670488897707113678127218787FN92870188413621393124693710228831025 1266 1372 1026CLEME-independentw/o LW 1029778984149715301382104511299901135 1398 1506 1136TN662967016597 6567656066646617661167936628 6583 6546 6680P26.9026.1127.85 19.29100.05.8522.00 20.23 21.50 20.69 19.02 56.40 20.22R25.5341.0630.71 1.250.002.5722.52 16.10 27.72 16.59 5.141.75 17.23F 0.526.6128.1628.38 4.970.004.6622.10 19.24 22.51 19.71 12.35 7.77 19.54P65.3645.3960.92 95.27100.067.51 57.44 65.03 56.26 66.65 83.17 98.70 63.36SentCLEME-independentR F 0.557.20 42.0070.15 39.6360.76 36.87 43.76 35.4935.29 35.2940.56 58.08 52.20 59.83 52.14 41.31 36.59 51.94 25.90 38.13 37.43 39.38 37.04 36.25 36.48 36.59", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Detailed evaluation results across 13 GEC systems on CoNLL-2014.", "figure_data": "Chunk 1 Chunk 2 Chunk 3Chunk 4Chunk 5 Chunk 6Chunk 7Source Ifnottheir family thenwho else that arewilling to do that ?Ref. 1Ifnottheir family thenwho else will bewilling to do that ?Ref. 2Ifnottheir family thenwho else would be willing to do that ?Ref. 3Ifnotfrom your family then who else iswilling to do that ?Ref. 4Ifnottheir family , thenwho else will bewilling to do that ?Ref. 5Ifnottheir family thenwho else will bewilling to do that ?Ref. 6Ifnottheir familywho else would be willing to do that ?Ref. 7Ifnottheir family thenwho else will bewilling to do that ?Ref. 8Ifnottheir family ,who else iswilling to do that ?Ref. 9Iffamily do nothelp thenwho else would be willing to do that ?Ref. 10 Ifnottheir family , thenwho else iswilling to do that ?", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "A case of correction independence. We apply chunk partition to the source and all the references. If one person does not have good health , that means they could lose so many things . Ref. 3 If one person does n't have good health , that means they could lose so many things . Ref. 4 If one person does n't have good health , that means so many things they could lost .", "figure_data": "Chunk 1 Chunk 2 Chunk 3 Chunk 4SourceFor notusecar.Ref. 1Not forusewith a car.Ref. 2Do notusein the car.Ref. 3 Car not foruse.Ref. 4Can notusethe car.Chunk 1Chunk 2Chunk 3Chunk 4Chunk 5Chunk 6SourceOne person if do n'thave good healththat meansso many things they could lost.Ref. 1If a person does n'thave good health,so many things could be lost.Ref. 2 Chunk 1Chunk 2Chunk 3 Chunk 4Chunk 5Chunk 6Source今天听天气预报说今天还有天气冷。Ref. 1听天气预报说今天天气冷。Ref. 2今天听天气预报说天气还会变冷。Ref. 3听天气预报说今天天气还会变冷。Chunk 1Chunk 2Chunk 3Chunk 4Chunk 5Chunk 6Chunk 7Chunk 8Source所以我从小到现在在这些快餐吃饭的机会很少。对我来说每次饭都很重要。Ref. 1所以我从小到现在在这些快餐店吃饭的机会很少。对我来说每顿饭都很重要。Ref. 2所以我从小到现在在这些快餐店吃饭的机会很少。对我来说每次吃饭都很重要。Ref. 3我从小到现在在这些快餐店吃饭的机会很少 ,所以 对我来说每次吃饭都很重要。Ref. 4我从小到现在在这些快餐店吃饭的机会很少 ,所以 对我来说每顿饭都很重要。", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "More cases of chunk partition. These tables are automatically generated by CLEME. The first two cases are from JELEG-dev, and the next two cases are from MuCGEC-dev.", "figure_data": "", "figure_id": "tab_14", "figure_label": "12", "figure_type": "table" } ]
Jingheng Ye; Yinghui Li; Qingyu Zhou; Yangning Li; Shirong Ma; Hai-Tao Zheng; Ying Shen
[ { "authors": "Hiroki Asano; Tomoya Mizumoto; Kentaro Inui", "journal": "Asian Federation of Natural Language Processing", "ref_id": "b0", "title": "Reference-based metrics can be replaced with reference-less metrics in evaluating grammatical error correction systems", "year": "2017" }, { "authors": "Christopher Bryant; Mariano Felice; Ted Briscoe", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Automatic annotation and evaluation of error types for grammatical error correction", "year": "2017" }, { "authors": "Christopher Bryant; Hwee Tou Ng", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "How far are we from fully automatic high quality grammatical error correction", "year": "2015" }, { "authors": "Christopher Bryant; Zheng Yuan; Muhammad ; Reza Qorib; Hannan Cao; Hwee Tou Ng; Ted Briscoe", "journal": "", "ref_id": "b3", "title": "Grammatical error correction: A survey of the state of the art", "year": "2022" }, { "authors": "Chris Callison-Burch; Cameron Fordyce; Philipp Koehn; Christof Monz; Josh Schroeder", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "Further meta-evaluation of machine translation", "year": "2008" }, { "authors": "Martin Chodorow; Markus Dickinson; Ross Israel; Joel Tetreault", "journal": "", "ref_id": "b5", "title": "Problems in evaluating grammatical error detection systems", "year": "2012" }, { "authors": "Shamil Chollampatt; Hwee Tou Ng", "journal": "", "ref_id": "b6", "title": "A reassessment of reference-based grammatical error correction metrics", "year": "2018" }, { "authors": "Leshem Choshen; Omri Abend", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Automatic metric validation for grammatical error correction", "year": "2018" }, { "authors": "Jacob Cohen", "journal": "Educational and psychological measurement", "ref_id": "b8", "title": "A coefficient of agreement for nominal scales", "year": "1960" }, { "authors": "Daniel Dahlmeier; Hwee Tou Ng", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Better evaluation for grammatical error correction", "year": "2012" }, { "authors": "Daniel Deutsch; Rotem Dror; Dan Roth", "journal": "", "ref_id": "b10", "title": "On the limitations of reference-free evaluations of generated text", "year": "2022" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Mariano Felice; Ted Briscoe", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "Towards a standard evaluation method for grammatical error detection and correction", "year": "2015" }, { "authors": "Peiyuan Gong; Xuebo Liu; Heyan Huang; Min Zhang", "journal": "", "ref_id": "b13", "title": "Revisiting grammatical error correction evaluation and beyond", "year": "2022" }, { "authors": "Takumi Gotou; Ryo Nagata; Masato Mita; Kazuaki Hanawa", "journal": "International Committee on Computational Linguistics", "ref_id": "b14", "title": "Taking the correction difficulty into account in grammatical error correction evaluation", "year": "2020" }, { "authors": "Roman Grundkiewicz; Marcin Junczys-Dowmunt; Edward Gillian", "journal": "", "ref_id": "b15", "title": "Human evaluation of grammatical error correction systems", "year": "2015" }, { "authors": "Asadul Md; Enrico Islam; Magnani", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Is this the end of the gold standard? a straightforward referenceless grammatical error correction metric", "year": "2021" }, { "authors": "Masahiro Kaneko; Sho Takase; Ayana Niwa; Naoaki Okazaki", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Interpretability for language learners using example-based grammatical error correction", "year": "2022" }, { "authors": "Yinghui Li; Haojing Huang; Shirong Ma; Yong Jiang; Yangning Li; Feng Zhou; Hai-Tao Zheng; Qingyu Zhou", "journal": "", "ref_id": "b18", "title": "On the (in)effectiveness of large language models for chinese text correction", "year": "2023" }, { "authors": "Yinghui Li; Shirong Ma; Qingyu Zhou; Zhongli Li; Li Yangning; Shulin Huang; Ruiyang Liu; Chao Li; Yunbo Cao; Haitao Zheng; ; ", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Learning from the dictionary: Heterogeneous knowledge guided fine-tuning for Chinese spell checking", "year": "2022" }, { "authors": "Yinghui Li; Qingyu Zhou; Yangning Li; Zhongli Li; Ruiyang Liu; Rongyi Sun; Zizhen Wang; Chao Li; Yunbo Cao; Hai-Tao Zheng", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "The past mistake is the future wisdom: Error-driven contrastive probability optimization for Chinese spell checking", "year": "2022" }, { "authors": "Zuchao Li; Kevin Parnow; Masao Utiyama; Eiichiro Sumita; Hai Zhao", "journal": "Association for Computational Linguistics", "ref_id": "b21", "title": "MiSS: An assistant for multi-style simultaneous translation", "year": "2021" }, { "authors": "Shirong Ma; Yinghui Li; Haojing Huang; Shulin Huang; Yangning Li; Hai-Tao Zheng; Ying Shen", "journal": "", "ref_id": "b22", "title": "Progressive multi-task learning framework for chinese text error correction", "year": "2023" }, { "authors": "Shirong Ma; Yinghui Li; Rongyi Sun; Qingyu Zhou; Shulin Huang; Ding Zhang; Li Yangning; Ruiyang Liu; Zhongli Li; Yunbo Cao; Haitao Zheng; Ying Shen", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Linguistic rules-based corpus generation for native Chinese grammatical error correction", "year": "2022" }, { "authors": "Matouš Macháček; Ondřej Bojar", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Results of the WMT13 metrics shared task", "year": "2013" }, { "authors": "Koki Maeda; Masahiro Kaneko; Naoaki Okazaki", "journal": "", "ref_id": "b25", "title": "IMPARA: Impact-based metric for GEC using parallel data", "year": "2022" }, { "authors": "Courtney Napoles; Maria Nȃdejde; Joel Tetreault", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b26", "title": "Enabling robust grammatical error correction in new domains: Data sets, metrics, and analyses", "year": "2019" }, { "authors": "Courtney Napoles; Keisuke Sakaguchi; Matt Post; Joel Tetreault", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Ground truth for grammatical error correction metrics", "year": "2015" }, { "authors": "Courtney Napoles; Keisuke Sakaguchi; Joel Tetreault", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "There's no comparison: Referenceless evaluation metrics in grammatical error correction", "year": "2016" }, { "authors": "Courtney Napoles; Keisuke Sakaguchi; Joel Tetreault", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "JFLEG: A fluency corpus and benchmark for grammatical error correction", "year": "2017" }, { "authors": "Tou Hwee; Ng; Mei Siew; Ted Wu; Christian Briscoe; Raymond Hendy Hadiwinoto; Christopher Susanto; Bryant", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "The CoNLL-2014 shared task on grammatical error correction", "year": "2014" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b31", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Keisuke Sakaguchi; Courtney Napoles; Matt Post; Joel Tetreault", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b32", "title": "Reassessing the goals of grammatical error correction: Fluency instead of grammaticality", "year": "2016" }, { "authors": "Keisuke Sakaguchi; Matt Post; Benjamin Van Durme", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Efficient elicitation of annotations for human evaluation of machine translation", "year": "2014" }, { "authors": "Jingheng Ye; Yinghui Li; Shirong Ma; Rui Xie; Wei Wu; Hai-Tao Zheng", "journal": "", "ref_id": "b34", "title": "Focus is what you need for chinese grammatical error correction", "year": "2022" }, { "authors": "Ryoma Yoshimura; Masahiro Kaneko; Tomoyuki Kajiwara; Mamoru Komachi", "journal": "International Committee on Computational Linguistics", "ref_id": "b35", "title": "SOME: Reference-less sub-metrics optimized for manual evaluations of grammatical error correction", "year": "2020" }, { "authors": "Ding Zhang; Yinghui Li; Qingyu Zhou; Shirong Ma; Yangning Li; Yunbo Cao; Hai-Tao Zheng", "journal": "", "ref_id": "b36", "title": "Contextual similarity is more valuable than character similarity: An empirical study for chinese spell checking", "year": "2023" }, { "authors": "Yue Zhang; Zhenghua Li; Zuyi Bao; Jiacheng Li; Bo Zhang; Chen Li; Fei Huang; Min Zhang", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "MuCGEC: a multi-reference multi-source evaluation dataset for Chinese grammatical error correction", "year": "2009" }, { "authors": " Ng", "journal": "", "ref_id": "b38", "title": "BN-10GEC", "year": "2014" } ]
[ { "formula_coordinates": [ 3, 139.16, 195.23, 150.57, 26.84 ], "formula_id": "formula_0", "formula_text": "ICC = 1 M M i=1 f1(ei),(1)" }, { "formula_coordinates": [ 3, 138.92, 237.98, 150.82, 26.84 ], "formula_id": "formula_1", "formula_text": "IUC = 1 M M i=1 f2(ei),(2)" }, { "formula_coordinates": [ 3, 137.84, 282.33, 148.41, 8.31 ], "formula_id": "formula_2", "formula_text": "CC = 1 -ICC -IUC, (3" }, { "formula_coordinates": [ 3, 286.25, 282.87, 3.48, 7.77 ], "formula_id": "formula_3", "formula_text": ")" }, { "formula_coordinates": [ 3, 315.27, 687.31, 141.62, 19.74 ], "formula_id": "formula_4", "formula_text": "w TP = clip α1 1 + (α1 -1) exp(ℓ -x)" }, { "formula_coordinates": [ 3, 315.44, 723.17, 141.29, 19.74 ], "formula_id": "formula_5", "formula_text": "w FP = clip α2 1 + (α2 -1) exp(x -ℓ)" }, { "formula_coordinates": [ 4, 79.66, 474.43, 210.07, 19.74 ], "formula_id": "formula_6", "formula_text": "w FN = clip α3 1 + (α3 -1) exp(ℓ -x) , cmin, cmax ,(6)" }, { "formula_coordinates": [ 4, 107.84, 631.27, 181.89, 37.77 ], "formula_id": "formula_7", "formula_text": "P = c∈C H ∩C R w TP c c∈C H ∩C R w TP c + c∈C H \\C R w FP c ,(7)" }, { "formula_coordinates": [ 4, 107.41, 677.43, 182.32, 37.77 ], "formula_id": "formula_8", "formula_text": "R = c∈C H ∩C R w TP c c∈C H ∩C R w TP c + c∈C R \\C H w FN c ,(8)" }, { "formula_coordinates": [ 4, 121.65, 723.59, 168.09, 19.74 ], "formula_id": "formula_9", "formula_text": "F β = (1 + β 2 ) • P • R (β 2 • P ) + R ,(9)" }, { "formula_coordinates": [ 4, 314.33, 464.14, 202.69, 135.06 ], "formula_id": "formula_10", "formula_text": "0 1 2 3 4 5 6 0.0 0.5 1.0 1.5 2.0 2.5 3.0 Length of Chunk x Length Weight w α 1 = 2 α 1 = 3 α 1 = 5 α 2 = 2 α 2 = 3 α 2 = 5" } ]
2023-10-18
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b1", "b3", "b4", "b5", "b7", "b8", "b9", "b11", "b12" ], "table_ref": [], "text": "Due to the significant progress of social media platforms and artificial intelligence Xu et al. [2022a], Gu et al. [2023], Zhang et al. [2022a], image editing technology has become a common demand. Specifically, AI-based technology Niu et al. [2023], Chen et al. [2023] has significantly lowered the threshold for fancy image editing, which traditionally required professional software and laborintensive manual operations. Deep neural networks can now achieve remarkable results in various image editing tasks, such as image inpainting Feng et al. [2022], image colorization Zhang et al. [2022b], and object replacement Kwon and Ye [2022], by learning from rich paired data. Futhermore, recent advances in diffusion models Brack et al. [2023], Brooks et al. [2023], Saharia et al. [2022a] enable precise control over generation quality and diversity during the diffusion process. By incorporating a text encoder, diffusion models can be adapted to generate natural images following text instructions, making them well-suited for image editing.\nDespite the impressive results, existing image editing methods still encounter numerous challenges. As a typical task, scene text editing is widely used in practical applications such as text-image synthesis, advertising photo editing, text-image correction and augmented reality translation. It aims to replace text instances (i.e., the foreground) in an image without compromising the background. However, the fine-grained and complex structures of text instances raise two major challenges: (i) How to transfer text style and retain background texture. Specifically, text style includes factors such as font, color, orientation, stroke size, and spatial perspective. It is difficult to precisely capture the complete text style in the source image due to the complexity of the background. (ii) How to maintain the consistency of the edited background especially for complex scenes, e.g., menus and street store signs. Numerous studies formulate scene text editing as a style transfer task and approach it by generative models like GANs Wu et al. [2019], Qu et al. [2023]. Typically, a cropped text region with the target style is needed as the reference image. Such methods then transfer a rendered text in the desired spelling to match the reference image's style and the source image's background. However, the two major challenges for scene text editing remains. (i) These methods are currently constrained to editing English and fail to accurately generate complex text style (e.g., Chinese). (ii) The process of cropping, transferring style and blending results in less natural-looking outcomes. End-to-end pipelines are needed for the consistency and harmony.\nTo address the above issues, we present DiffUTE, a general diffusion model designed to tackle high-quality multilingual text editing tasks. DiffUTE utilizes character glyphs and text locations in source images as auxiliary information to provide better control during character generation. As shown in Figure 1, our model can generate very realistic text. The generated text is intelligently matched to the most contextually appropriate text style and seamlessly integrated with the background while maintaining high quality.\nThe major contribution of this paper is the universal text edit diffusion model proposed to edit scene text images. DiffUTE possesses obvious advantages over existing methods in several folds:\n1. We present DiffUTE, a novel universal text editing diffusion model that can edit any text in any image. DiffUTE generates high-quality text through fine-grained control of glyph and position information. DiffUTE is capable of seamlessly integrating various styles of text characters into the image context, resulting in realistic and visually pleasing outputs.\n2. We design a self-supervised learning framework that enables the model to be trained with large amounts of scene text images. The framework allows the model to learn from the data without annotation, making it a highly efficient and scalable solution for scene text editing.\n3. We conduct extensive experiments to evaluate the performance of DiffUTE. Our method performs favorably over prior arts for text image editing, as measured by quantitative metrics and visualization." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b13", "b14" ], "table_ref": [], "text": "In this paper, we adopt Stable Diffusion (SD) Rombach et al. [2022] as our baseline method to design our network architecture. SD utilizes a variational auto-encoder (VAE) to enhance computation efficiency. Through VAE, SD performs the diffusion process in low-dimensional latent space. Specifically, given an input image 𝑥 ∈ R 𝐻 ×𝑊 ×3 , the encoder E 𝑣 of VAE transforms it into a latent representation 𝑧 ∈ R ℎ×𝑤×𝑐 , where 𝛼 = 𝐻 ℎ = 𝑊 𝑤 is the downsampling factor and 𝑐 is the latent feature dimension. The diffusion process is then executed in the latent space, where a conditional UNet denoiser Ronneberger et al. [2015] 𝜖 𝜃 (𝑧 𝑡 , 𝑡, 𝑦) is employed to predict the noise with noisy latent 𝑧 𝑡 , generation condition input 𝑦 and current time step 𝑡. The condition information 𝑦 may encompass various modalities, e.g., natural language, semantic segmentation maps and canny edge maps. To pre-processing 𝑦 from various modalities, SD employs a domain-specific encoder 𝜏 𝜃 to project 𝑦 into an intermediate representation 𝜏 𝜃 (𝑦) ∈ R 𝑀 ×𝑑 𝜏 which is then mapped to the intermediate layers of the 𝑄 , 𝑊 (𝑖) 𝐾 , 𝑊 (𝑖) 𝑉 are learnable projection matrices, 𝑑 denotes the output dimension of key (𝐾) and query (𝑄) features, and 𝜙 𝑖 (𝑧 𝑡 ) ∈ R 𝑁 ×𝑑 𝑖 𝜖 denotes a flattened intermediate representation of the UNet implementing 𝜖 𝜃 . In the scenario of text-to-image generation, the condition 𝐶 = 𝜏 𝜃 (𝑦) is produced by encoding the text prompts 𝑦 with a pre-trained CLIP text encoder 𝜏 𝜃 . The overall training objective of SD is defined as\nL 𝑠𝑑 = E E ( 𝑥 ),𝑦, 𝜖 ∼N (0,1),𝑡 ∥𝜖 -𝜖 𝜃 (𝑧 𝑡 , 𝑡, 𝜏 𝜃 (𝑦)) ∥ 2 2 ,(1)\nTherefore, 𝜏 𝜃 and 𝜖 𝜃 can be jointly optimized via Equation (1)." }, { "figure_ref": [], "heading": "Universal Text Editing Diffusion Model", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2" ], "heading": "Model Overview", "publication_ref": [ "b13" ], "table_ref": [], "text": "The overall training process of our proposed DiffUTE method is illustrated in Figure 2 (a). Based on the cross attention mechanism in SD Rombach et al. [2022], the original input latent vector 𝑧 𝑡 is replaced by the concatenation of latent image vector 𝑧 𝑡 , masked image latent vector 𝑥 𝑚 , and text mask 𝑚. The condition 𝐶 is also equipped with a glyph encoder for encoding glyph image 𝑥 𝑔 . Introducing text masks and glyph information enables fine-grained diffusion control throughout the training process, resulting in the improved generative performance of the model." }, { "figure_ref": [], "heading": "Perceptual Image Compression", "publication_ref": [ "b13" ], "table_ref": [], "text": "Following Rombach et al. [2022], we utilize a VAE to reduce the computational complexity of diffusion models. The model learns a perceptually equivalent space to the image space but with significantly reduced computational complexity. Since the VAE in SD is trained on natural images, its ability to restore text regions is limited. Moreover, compressing the original image directly through the VAE encoder causes the loss of dense text texture information, leading to blurry decoded images by the VAE decoder. To improve the reconstruction performance of text images, we further fine-tune the VAE on text image datasets. As shown in our experiments (Section 4.4), training VAE directly on the original image size lead to bad reconstruction results, i.e., unwanted patterns and incomplete strokes. We propose a progressive training strategy (PTT) in which the size of the images used for training increases as the training proceeds. Specifically, in the first three stages of training, we randomly crop images of sizes 𝑆/8, 𝑆/4 and 𝑆/2 and resize them to 𝑆 for training, where 𝑆 is the resolution of the model input image and 𝑆 = 𝐻 = 𝑊. Thus, the tuned VAE can learn different sizes of stroke details and text recovery. In the fourth stage, we train with images of the same size as the VAE input to ensure that the VAE can predict accurately when inferring." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Fine-grained Conditional Guidance", "publication_ref": [ "b15", "b16", "b17" ], "table_ref": [], "text": "The pixel-level representation of text images differs greatly from the representation of natural objects.\nAlthough textual information consists of just multiple strokes of a two-dimensional structure, it has fine-grained features, and even slight movement or distortion lead to unrealistic image generation. In contrast, natural images have a much higher tolerance level as long as the semantic representation of the object is accurate. To ensure the generation of perfect text representations, we introduce two types of fine-grained guidance: positional and glyph.\nPositional guidance. Unlike the small differences between natural images, the latent feature distributions of character pixels differ dramatically. Text generation requires attention to specific local regions instead of the existing global control conditions for natural images Zhang and Agrawala [2023], Mou et al. [2023], Cheng et al. [2023] (e.g., segmentation maps, depth maps, sketch and grayscale images). To prevent model collapse, we introduce position control to decouple the distribution between different regions and make the model focus on the region for text generation. As shown in Figure 2 (a), a binary mask is concatenated to the original image latent features.\nGlyph guidance. Another important issue is to precisely control the generation of character strokes. Language characters are diverse and complex. For example, a Chinese character may consist of more than 20 strokes, while there are more than 10,000 common Chinese characters. Learning directly from large-scale image-text datasets without explicit knowledge guidance is complicated. Liu et al. [2022a] proposes that the character-blinded can induce robust spelling knowledge for English words only when the model parameters are larger than 100B and cannot generalize well beyond Latin scripts such as Chinese and Korean. Therefore, we heuristically incorporate explicit character images as additional conditional information to generate text accurately into the model diffusion process. As shown in Figure 2 (a), we extract the latent feature of the character image as a control condition." }, { "figure_ref": [ "fig_2" ], "heading": "Self-supervised Training Framework for Text Editing", "publication_ref": [ "b3" ], "table_ref": [], "text": "It is impossible to collect and annotate large-scale paired data for text image editing, i.e., (𝑥 𝑠 , 𝑥 𝑔 , 𝑚), 𝑦 . It may take great expense and huge labor to manually paint reasonable editing results. Thus, we perform self-supervised training. Specifically, given an image and the OCR bounding box of a sentence in the image, our training data is composed of (𝑥 𝑚 , 𝑥 𝑔 , 𝑚), 𝑥 𝑠 .\nFor diffusion-based inpainting models, the condition 𝐶 is usually text, which is usually processed by a pre-trained CLIP text encoder. Similarly, a naive solution is directly replacing it with an image encoder. To better represent glyph images, we utilize the pre-trained OCR encoder Li et al. [2023] as the glyph encoder. Such naive solution converges well on the training set. However, the generated quality is far from satisfactory for test images. We argue that the main reason is that the model learns a mundane mapping function under the naive training scheme: 𝑥 𝑔 + 𝑥 𝑠 • (1 -𝑚) = 𝑥 𝑠 . It impedes the network from understanding text style and layout information in the image, resulting in poor generalization. To alleviate such issue, we use a uniform font style (i.e., \"arialuni\") and regenerate the corresponding text image, as shown in Figure 2 (a) with the example of \"RM 30.00\". Thus, we prevent the model from learning such a trivial mapping function and facilitate model understanding in a self-supervised training manner. \nL DiffUTE = E E 𝑣 ( 𝑥 𝑠 ), 𝑥 𝑔 ,𝑥 𝑚 ,𝑚, 𝜖 ∼N (0,1),𝑡 ||𝜖 -𝜖 𝜃 (𝑧 𝑡 , 𝑡, 𝑥 𝑔 , 𝑥 𝑚 , 𝑚)|| 2 2 .\n(2)" }, { "figure_ref": [ "fig_2" ], "heading": "Interactive Scene Text Editing with LLM", "publication_ref": [ "b20" ], "table_ref": [], "text": "To enhance the interaction capability of the model, we introduced the large language model (LLM), i.e., ChatGLM Zeng et al. [2023]. Moreover, we fine-tuned ChatGLM using the extracted OCR data to facilitate a better understanding of structured information by ChatGLM, The inference process of DiffUTE is show in Figure 2 (b). We first provide the OCR information extracted by the OCR detector and the target that the user wants to edit with to LLM, which will return the target text and its corresponding bounding box. Then, we use bounding boxes to generate mask and masked images, and generate images through a complete diffusion process (𝑡 = {𝑇, 𝑇 -1, ..., 0}) by DDIM Song et al.\n[2020] sampling strategy. By using ChatGLM to understand natural language instruction, we avoid requiring users to provide masks for the areas they want to edit, making our model more convenient. checkpoint of stable-diffusion-2-inpainting2 . The VAE is trained for three epochs with a batch size of 48 and a learning rate of 5e-6. We use a pre-trained OCR encoder as our glyph encoder, i.e., TROCR Li et al. [2023]. During the training of DiffUTE, we set the batch size to 256, the learning rate to 1e-5, and the batch size to 5. Note that the weights of the glyph encoder and VAE were frozen during the training of DiffUTE." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b30", "b33" ], "table_ref": [], "text": "Evaluation and metrics. In our evaluation, we evaluate the accuracy of the generated text. We report OCR accuracy, calculated separately using pre-trained recognition model Fang et al. [2021] and human evaluation of the correctness between the generated text and the target text, denoted as OCR and Cor, respectively. Training of SRNet requires different texts to appear in the same position and background, which does not exist in real-world datasets. Therefore, we use synthtiger Yim et al. [2021] to synthesize images for fine-tuning. For MOSTEL, we fine-tuned it on our dataset. For SD, we selected two baseline methods, i.e., stable-diffusion-inpainting3 (SD1) and stable-diffusion-2-inpainting (SD2).\nFor fair comparison, we fine-tuned SD1 and SD2 by instruction tuning. The resulting models are termed as SD1-FT and SD2-FT. In the NLP field, instruction tuning techniques are used to train models to perform tasks based on task instructions. We aim to accurately map text instructions to corresponding text edits using the SD model. To achieve this, we constructed a dataset for fine-tuning. Each sample in the dataset consists of a language instruction describing the target text, a mask, and the ground truth. ControlNet is an image synthesis method that achieves excellent controllability by incorporating additional conditions to guide the diffusion process. To adapt this method to our text editing problem, we take the glyph image as the input to the ControlNet network. DiffSTE introduces an instruction tuning framework to train the model to learn the mapping from textual instruction to the corresponding image, and improves the pre-trained diffusion model with a dual encoder design.\nWe followed the original setup to train DiffSTE." }, { "figure_ref": [ "fig_3" ], "heading": "Comparison results", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "The quantitative results for text generation are shown in Table 1. We can find that our DiffUTE achieves state-of-the-art results on all datasets. For example, DiffUTE improves average OCR accuracy and human-evaluated text correctness by 14.95% and 14.0% compared with the second best method DiffSTE. Moreover, our method achieves better results than the diffusion model and the finetuned diffusion model because our fine-grained control can provide the model with prior knowledge of glyph and position. Furthermore, the poor performance of the diffusion model for instruction fine-tuning also demonstrates the superiority of our inference approach combining ChatGLM, which can achieve better editing effects.\nWe further conducted a visualization experiment. As shown in Figure 3, our method successfully achieved the transfer of foreground text and background texture, resulting in a regular textual structure and consistent font with the original text. Moreover, the background texture was clearer, and the overall similarity with real images was improved. In contrast, the results edited using the diffusion model often deviated from the target text, further validating the effectiveness of the glyph condition we introduced. Furthermore, other methods perform poorly when faced with more challenging Chinese text generation tasks, whereas DiffUTE still achieves good generation results." }, { "figure_ref": [ "fig_4", "fig_5", "fig_6" ], "heading": "Ablation results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "The Ablation studies examine three main aspects, namely 1) the effectiveness of the progressive training strategy of VAE, and 2) the impact of position control and glyph control on the image generation performance of DiffUTE. The experimental results are shown in Table 2, Figure 4 and Figure 5.\nProgressive training strategy. Without using the progressive training strategy, the editing results become distorted and the accuracy of text generation significantly decreases. The reason for such poor results is the complexity of the local structure of the text, whereby the VAE may need to learn the reconstruction ability of local details efficiently by focusing when there are too many characters in the image. And using our proposed progressive training strategy, the reconstruction ability of the model is significantly improved and more realistic results are obtained. The experimental results validate the effectiveness of this strategy and highlight the pivotal role of VAE in the diffusion model.\nFine-grained control. When position control is not used, the mask and masked images at the input of the UNet are removed. When glyph control is not used, the latent code obtained from the text through the CLIP text encoder is used as the condition. When position control and glyph control are not used, there is a significant drop in performance. For example, when position control is not used, the OCR accuracy of the model drops by 36.7%, and the Cor drops by 38.6%. When glyph control is not used, the model cannot generate accurate text and the OCR accuracy of the model drops by 39.8%, and the Cor drops by 41.5%. These results show that position control can help the model focus on the area where text is to be generated, while glyph control can provide prior knowledge of the shape of the characters to help the model generate text more accurately.\nVisualisation. In Figure 6 5 Related Works " }, { "figure_ref": [], "heading": "Large Language Model", "publication_ref": [ "b48", "b49", "b20" ], "table_ref": [], "text": "Large language models (LLMs) refer to language models that contain billions (or more) of parameters, which are trained on massive amounts of text data, such as models like GPT-3 Brown et al. [2020], Galactica Taylor et al. [2022], LLaMA Touvron et al. [2023] and ChatGLM Zeng et al. [2023]. Among them, ChatGLM is a billion-scale language model with rudimentary question-answering and conversational capabilities. It differs from BERT Devlin et al. [2018], GPT-3 and T5 Xue et al. [2021] architectures and is a self-regressive pre-training model that includes multiple objective functions. In this paper, we use ChatGLM to enhance the interaction capability of our model." }, { "figure_ref": [], "heading": "Conclusion and Limitations", "publication_ref": [], "table_ref": [], "text": "In this paper, we argue that the current diffusion model can not generate realistic text in images. To tackle this problem, we present a novel method DiffUTE, a diffusion-based universal text editing model. DiffUTE generates high-quality text through fine-grained control of glyph and position information, and benefits from massive amounts of text images through a self-supervised training approach. Moreover, by integrating a large language model (i.e., ChatGLM), we can use natural language to edit the text in images, enhancing the editing usability and convenience of model.\nExtensive experiments have shown that DiffUTE excels in textual correctness and image naturalness.\nThe main limitation of our method is that the accuracy of generated text will decrease as the number of characters to be edited in the image increases. This is due to the fact that as the number of characters increase, the spatial complexity of the characters will also increase, making the generation process more challenging. Therefore, our future work will focus on improving the generation quality and solving the problem of rendering long texts." } ]
Diffusion model based language-guided image editing has achieved great success recently. However, existing state-of-the-art diffusion models struggle with rendering correct text and text style during generation. To tackle this problem, we propose a universal self-supervised text editing diffusion model (DiffUTE), which aims to replace or modify words in the source image with another one while maintaining its realistic appearance. Specifically, we build our model on a diffusion model and carefully modify the network structure to enable the model for drawing multilingual characters with the help of glyph and position information. Moreover, we design a self-supervised learning framework to leverage large amounts of web data to improve the representation ability of the model. Experimental results show that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity. Our code will be avaliable in https://github.com/chenhaoxing/DiffUTE.
DiffUTE: Universal Text Editing Diffusion Model
[ { "figure_caption": "Figure 1 :1Figure 1: Examples of text editing. DiffUTE achieves the best result among existing models.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Training and inference process of our proposed universal text editing diffusion model. (a)Given an image, we first extracted all the text and corresponding bounding boxes by the OCR detector. Then, a random area is selected and the corresponding mask and glyph image are generated. We use the embedding of the glyph image extracted by the glyph encoder as the condition, and concatenate the masked image latent vector 𝑥 𝑚 , mask 𝑚, and noisy image latent vector 𝑧 𝑡 as the input of the model. (b) Users can directly input the content they want to edit, and the large language model will understand their needs and provide the areas to be edited and the target text to DiffUTE, which then completes the text editing.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Visualization comparison. Our DiffUTE beats other methods with a significant improvement.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Sample results of ablation study.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Examples of image reconstruction with our method DiffUTE.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: More visualization results of text editing.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Quantitative comparison across four datasets. ↑ means the higher the better, underline indicates the second best method.Our self-supervised training process is summarized as follows: (1) An ocr region is randomly selected from the image and the corresponding text image is regenerated with a uniform font style. (2) The regenerated character image 𝑥 𝑔 is fed into glyph encoder to get condition glyph embedding 𝑒 𝑔 . (3) The masked image latent vector 𝑥 𝑚 , mask 𝑚 and noisy image latent vector 𝑧 𝑡 is concatenated to form a new latent vector 𝑧 ′ 𝑡 = Concat(𝑥 𝑚 , 𝑚, 𝑧 𝑡 ). After dimension adjustment through a convolution layer, the feature vector ẑ𝑡 = Conv(𝑧 ′ 𝑡 ) is fed into the UNet as the query component. Consequently, the training objective of DiffUTE is:", "figure_data": "ModelWeb OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ ArT TextOCR ICDAR13 Average Cor↑Pix2Pix17.241613.521115.741415.481515.5014SRNet30.874231.224432.094130.854431.2642.8MOSTEL 48.936160.736845.975353.765952.3560.3SD14.3255.9877.4373.6465.346.3SD25.8876.9499.29115.3286.868.8SD1-FT33.534533.254749.724628.763236.3242.5SD2-FT46.345149.694462.895946.874651.4550DiffSTE48.555082.728484.858581.488174.3075DiffUTE84.83 +35.90 +24 +3.26 85 85.9887 +387.32 +2.4788 +383.49 +2.0182 +185.41 +11.11 +10.5 85.5", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "ModelWeb OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ Cor↑ OCR↑ Cor↑ ArT TextOCR ICDAR13 Averagew/o PTT44.734745.294160.835241.223948.0244.8w/o Pos.49.845350.894765.726349.724754.0452.5w/o Gly.46.345149.694462.895946.874651.4550.0DiffUTE84.83 +34.99 +32 +35.09 +40 +21.60 +25 +33.77 +35 +31.37 +33 85 85.98 87 87.32 88 83.49 82 85.41 85.5", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ", we provide some examples edited by DiffUTE. DiffUTE consistently generates correct visual text, and the texts naturally follow the same text style, i.e. font, and color, with other surrounding texts. We can see from the experiment that DiffUTE has a strong generative power. (i) In sample N1, DiffUTE can automatically generate slanted text based on the surrounding text. (ii) As shown in sample N2, the input is 234, and DiffUTE can automatically add the decimal point according to the context, which shows that DiffUTE has some document context understanding ability. (iii) In the sample CN4, DiffUTE can generate even artistic characters very well.", "figure_data": "", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "that generates the target text. MostelQu et al. [2023] improves upon these methods by incorporating stroke-level information to enhance the editing performance. However, despite their reasonable performance, these methods are often constrained in their ability to generate text in arbitrary styles and locations and can result in less natural-looking images.Text-guided image editing has attracted increasing attention in recent years among various semantic image editing methods. Early works utilized pretrained GAN generators and text encoders to progressively optimize images based on textual promptsBau et al. [2021],Gal et al. [2021],Pérez et al. [2003]. However, these GAN-based manipulation methods encounter difficulties in editing images with complex scenes or diverse objects, owing to the limited modeling capability of GANs. The rapid rise and development of diffusion modelsRombach et al. [2022],Saharia et al. [2022b],Ruiz et al. [2023] have demonstrated powerful abilities in synthesizing high-quality and diverse images. Many studiesBrack et al.[2023],Brooks et al. [2023] have employed diffusion models for text-driven image editing. Among various diffusion models, Stable DiffusionRombach et al. [2022] is one of the state-of-the-art models, which compresses images into low-dimensional space using an auto-encoder and implements effective text-based image generation through cross-attention layers. This model can easily adapt to various tasks, such as text-based image inpainting and image editing.However, it has been observed that diffusion models exhibit poor visual text generation performance and are often prone to incorrect text generation. Only a few studies have focused on improving the text generation capability of diffusion models. Recently, one study trained a model to generate images containing specific text based on a large number of image-text pairsLiu et al. [2022b]. However, this work differs from ours in terms of application, as they focus on text-to-image generation, while ours concentrates on editing text in images. Another ongoing work, ControlNetZhang and Agrawala [2023], has demonstrated remarkable performance in image editing by providing reference images such as Canny edge images and segmentation maps. While ControlNet achieves remarkably impressive results, it performs poorly on text editing tasks. To obtain better editing results, we incorporate auxiliary glyph information into the conditional generation process and emphasize local control in all diffusion steps.", "figure_data": "5.2 Image Editing5.1 Scene Text EditingStyle transfer techniques based on Generative Adversarial Networks (GANs) have gained widespreadpopularity for scene text editing tasks Roy et al. [2020], Huang et al. [2022], Kong et al. [2022],", "figure_id": "tab_4", "figure_label": "", "figure_type": "table" } ]
Haoxing Chen; Zhuoer Xu; Zhangxuan Gu; Jun Lan; Xing Zheng; Yaohui Li; Changhua Meng; Huijia Zhu; Weiqiang Wang; Ant Group
[ { "authors": "Zhuoer Xu; Guanghui Zhu; Changhua Meng; Zhenzhe Ying; Weiqiang Wang; Ming Gu; Yihua Huang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b0", "title": "A2: Efficient automated attacker for boosting adversarial training", "year": "2022" }, { "authors": "Zhangxuan Gu; Zhuoer Xu; Haoxing Chen; Jun Lan; Changhua Meng; Weiqiang Wang", "journal": "", "ref_id": "b1", "title": "Mobile user interface element detection via adaptively prompt tuning", "year": "2023" }, { "authors": "Chao Zhang; Huaxiong Li; Yang Gao; Chunlin Chen", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b2", "title": "Weakly-supervised enhanced semanticaware hashing for cross-modal retrieval", "year": "2022" }, { "authors": "Li Niu; Junyan Cao; Wenyan Cong; Liqing Zhang", "journal": "", "ref_id": "b3", "title": "Deep image harmonization with learnable augmentation", "year": "2023" }, { "authors": "Haoxing Chen; Zhangxuan Gu; Yaohui Li; Jun Lan; Changhua Meng; Weiqiang Wang; Huaxiong Li", "journal": "", "ref_id": "b4", "title": "Hierarchical dynamic image harmonization", "year": "2023" }, { "authors": "Tingliang Feng; Wei Feng; Weiqi Li; Di Lin", "journal": "", "ref_id": "b5", "title": "Cross-image context for single image inpainting", "year": "2022" }, { "authors": "Jiangning Zhang; Chao Xu; Jian Li; Yue Han; Yabiao Wang; Ying Tai; Yong Liu", "journal": "", "ref_id": "b6", "title": "Scsnet: An efficient paradigm for learning simultaneously image colorization and super-resolution", "year": "2022" }, { "authors": "Gihyun Kwon; Jong Chul; Ye ", "journal": "", "ref_id": "b7", "title": "Clipstyler: Image style transfer with a single text condition", "year": "2022" }, { "authors": "Manuel Brack; Felix Friedrich; Dominik Hintersdorf; Lukas Struppek; Patrick Schramowski; Kristian Kersting", "journal": "", "ref_id": "b8", "title": "Sega: Instructing diffusion using semantic dimensions", "year": "2023" }, { "authors": "Tim Brooks; Aleksander Holynski; Alexei A Efros", "journal": "", "ref_id": "b9", "title": "Instructpix2pix: Learning to follow image editing instructions", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "NeurIPS", "ref_id": "b10", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Liang Wu; Chengquan Zhang; Jiaming Liu; Junyu Han; Jingtuo Liu; Errui Ding; Xiang Bai", "journal": "", "ref_id": "b11", "title": "Editing text in the wild", "year": "2019" }, { "authors": "Yadong Qu; Qingfeng Tan; Hongtao Xie; Jianjun Xu; Yuxin Wang; Yongdong Zhang", "journal": "", "ref_id": "b12", "title": "Exploring stroke-level modifications for scene text editing", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b13", "title": "Highresolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b14", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b15", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Chong Mou; Xintao Wang; Liangbin Xie; Jian Zhang; Zhongang Qi; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b16", "title": "T2i-adapter: Learning adapters to dig out more controllable ability for text-to-image diffusion models", "year": "2023" }, { "authors": "Shin-I Cheng; Yu-Jie Chen; Wei-Chen Chiu; Hung-Yu Tseng; Hsin-Ying Lee", "journal": "", "ref_id": "b17", "title": "Adaptively-realistic image generation from stroke and sketch with diffusion model", "year": "2023" }, { "authors": "Rosanne Liu; Dan Garrette; Chitwan Saharia; William Chan; Adam Roberts; Sharan Narang; Irina Blok; Mohammad Mical; Noah Norouzi; Constant", "journal": "", "ref_id": "b18", "title": "Character-aware models improve visual text rendering", "year": "2022" }, { "authors": "Minghao Li; Tengchao Lv; Jingye Chen; Lei Cui; Yijuan Lu; Dinei Florencio; Cha Zhang; Zhoujun Li; Furu Wei", "journal": "", "ref_id": "b19", "title": "Trocr: Transformer-based optical character recognition with pre-trained models", "year": "2023" }, { "authors": "Aohan Zeng; Xiao Liu; Zhengxiao Du; Zihan Wang; Hanyu Lai; Ming Ding; Zhuoyi Yang; Yifan Xu; Wendi Zheng; Xiao Xia; Weng Lam Tam; Zixuan Ma; Yufei Xue; Jidong Zhai; Wenguang Chen; Zhiyuan Liu; Peng Zhang; Yuxiao Dong; Jie Tang", "journal": "", "ref_id": "b20", "title": "GLM-130b: An open bilingual pre-trained model", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b21", "title": "Denoising diffusion implicit models", "year": "2020" }, { "authors": "Hang Li", "journal": "", "ref_id": "b22", "title": "Cdla: A chinese document layout analysis (cdla) dataset", "year": "2021" }, { "authors": "Yiheng Xu; Tengchao Lv; Lei Cui; Guoxin Wang; Yijuan Lu; Dinei Florencio; Cha Zhang; Furu Wei", "journal": "", "ref_id": "b23", "title": "Xfund: A benchmark dataset for multilingual visually rich form understanding", "year": "2022" }, { "authors": "Xu Zhong; Jianbin Tang; Antonio Jimeno; Yepes ", "journal": "", "ref_id": "b24", "title": "Publaynet: largest dataset ever for document layout analysis", "year": "2019" }, { "authors": "Rui Zhang; Yongsheng Zhou; Qianyi Jiang; Qi Song; Nan Li; Kai Zhou; Lei Wang; Dong Wang; Minghui Liao; Mingkun Yang", "journal": "", "ref_id": "b25", "title": "Icdar 2019 robust reading challenge on reading chinese text on signboard", "year": "2019" }, { "authors": "Nibal Nayef; Yash Patel; Michal Busta; Pinaki Nath Chowdhury; Dimosthenis Karatzas; Wafa Khlif; Jiri Matas; Umapada Pal; Jean-Christophe Burie; Cheng-Lin Liu", "journal": "", "ref_id": "b26", "title": "Icdar2019 robust reading challenge on multi-lingual scene text detection and recognition-rrc-mlt-2019", "year": "2019" }, { "authors": "Dimosthenis Karatzas; Lluis Gomez-Bigorda; Anguelos Nicolaou; Suman K Ghosh; Andrew D Bagdanov; Masakazu Iwamura; Jiri Matas; Lukas Neumann; Vijay Ramaseshan Chandrasekhar; Shijian Lu; Faisal Shafait; Seiichi Uchida; Ernest Valveny", "journal": "", "ref_id": "b27", "title": "ICDAR 2015 competition on robust reading", "year": "2015" }, { "authors": "Chee Kheng; Chng ; Yuliang Liu; Yipeng Sun; Chun ; Chet Ng; Canjie Luo; Zihan Ni; Chuanming Fang; Shuaitao Zhang; Junyu Han; Errui Ding", "journal": "", "ref_id": "b28", "title": "Icdar2019 robust reading challenge on arbitrary-shaped text-rrc-art", "year": "2019" }, { "authors": "Amanpreet Singh; Guan Pang; Mandy Toh; Jing Huang; Wojciech Galuba; Tal Hassner", "journal": "", "ref_id": "b29", "title": "Textocr: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text", "year": "2021" }, { "authors": "Shancheng Fang; Hongtao Xie; Yuxin Wang; Zhendong Mao; Yongdong Zhang", "journal": "", "ref_id": "b30", "title": "Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition", "year": "2021" }, { "authors": "Phillip Isola; Jun-Yan Zhu; Tinghui Zhou; Alexei A Efros", "journal": "", "ref_id": "b31", "title": "Image-to-image translation with conditional adversarial networks", "year": "2017" }, { "authors": "Jiabao Ji; Guanhua Zhang; Zhaowen Wang; Bairu Hou; Zhifei Zhang; Brian Price; Shiyu Chang", "journal": "", "ref_id": "b32", "title": "Improving diffusion models for scene text editing with dual encoders", "year": "2023" }, { "authors": "Moonbin Yim; Yoonsik Kim; Han-Cheol Cho; Sungrae Park", "journal": "", "ref_id": "b33", "title": "Synthtiger: Synthetic text image generator towards better text recognition models", "year": "2021" }, { "authors": "Prasun Roy; Saumik Bhattacharya; Subhankar Ghosh; Umapada Pal", "journal": "", "ref_id": "b34", "title": "Stefann: scene text editor using font adaptive neural network", "year": "2020" }, { "authors": "Qirui Huang; Bin Fu; Yu Qiao", "journal": "", "ref_id": "b35", "title": "Gentext: Unsupervised artistic text generation via decoupled font and texture manipulation", "year": "2022" }, { "authors": "Yuxin Kong; Canjie Luo; Weihong Ma; Qiyuan Zhu; Shenggao Zhu; Nicholas Yuan; Lianwen Jin", "journal": "", "ref_id": "b36", "title": "Look closer to supervise better: One-shot font generation via component-based discriminator", "year": "2022" }, { "authors": "Junyeop Lee; Yoonsik Kim; Seonghyeon Kim; Moonbin Yim; Seung Shin; Gayoung Lee; Sungrae Park", "journal": "", "ref_id": "b37", "title": "Rewritenet: Reliable scene text editing with implicit decomposition of text contents and styles", "year": "2021" }, { "authors": "Wataru Shimoda; Daichi Haraguchi; Seiichi Uchida; Kota Yamaguchi", "journal": "", "ref_id": "b38", "title": "De-rendering stylized texts", "year": "2021" }, { "authors": "Qiangpeng Yang; Jun Huang; Wei Lin", "journal": "", "ref_id": "b39", "title": "Swaptext: Image based texts transfer in scenes", "year": "2020" }, { "authors": "Fangneng Zhan; Hongyuan Zhu; Shijian Lu", "journal": "", "ref_id": "b40", "title": "Spatial fusion gan for image synthesis", "year": "2019" }, { "authors": "David Bau; Alex Andonian; Audrey Cui; Yeonhwan Park; Ali Jahanian; Aude Oliva; Antonio Torralba", "journal": "", "ref_id": "b41", "title": "Paint by word", "year": "2021" }, { "authors": "Rinon Gal; Or Patashnik; Haggai Maron; Gal Chechik; Daniel Cohen-Or", "journal": "", "ref_id": "b42", "title": "Stylegan-nada: Clip-guided domain adaptation of image generators", "year": "2021" }, { "authors": "Patrick Pérez; Michel Gangnet; Andrew Blake", "journal": "", "ref_id": "b43", "title": "Poisson image editing", "year": "2003" }, { "authors": "Chitwan Saharia; William Chan; Huiwen Chang; Chris Lee; Jonathan Ho; Tim Salimans; David Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b44", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b45", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Rosanne Liu; Dan Garrette; Chitwan Saharia; William Chan; Adam Roberts; Sharan Narang; Irina Blok; Mohammad Mical; Noah Norouzi; Constant", "journal": "", "ref_id": "b46", "title": "Character-aware models improve visual text rendering", "year": "2022" }, { "authors": "Tom Brown; Benjamin Mann; Nick Ryder; Melanie Subbiah; Jared D Kaplan; Prafulla Dhariwal; Arvind Neelakantan; Pranav Shyam; Girish Sastry; Amanda Askell", "journal": "NeurIPS", "ref_id": "b47", "title": "Language models are few-shot learners", "year": "2020" }, { "authors": "Ross Taylor; Marcin Kardas; Guillem Cucurull; Thomas Scialom; Anthony Hartshorn; Elvis Saravia; Andrew Poulton; Viktor Kerkez; Robert Stojnic", "journal": "", "ref_id": "b48", "title": "Galactica: A large language model for science", "year": "2022" }, { "authors": "Hugo Touvron; Thibaut Lavril; Gautier Izacard; Xavier Martinet; Marie-Anne Lachaux; Timothée Lacroix; Baptiste Rozière; Naman Goyal; Eric Hambro; Faisal Azhar", "journal": "", "ref_id": "b49", "title": "Llama: Open and efficient foundation language models", "year": "2023" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b50", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2018" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b51", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" } ]
[ { "formula_coordinates": [ 3, 204.2, 515.01, 300.46, 13.56 ], "formula_id": "formula_0", "formula_text": "L 𝑠𝑑 = E E ( 𝑥 ),𝑦, 𝜖 ∼N (0,1),𝑡 ∥𝜖 -𝜖 𝜃 (𝑧 𝑡 , 𝑡, 𝜏 𝜃 (𝑦)) ∥ 2 2 ,(1)" }, { "formula_coordinates": [ 5, 171.7, 362.76, 268.62, 13.56 ], "formula_id": "formula_1", "formula_text": "L DiffUTE = E E 𝑣 ( 𝑥 𝑠 ), 𝑥 𝑔 ,𝑥 𝑚 ,𝑚, 𝜖 ∼N (0,1),𝑡 ||𝜖 -𝜖 𝜃 (𝑧 𝑡 , 𝑡, 𝑥 𝑔 , 𝑥 𝑚 , 𝑚)|| 2 2 ." } ]
10.1016/j.istruc.2020.04.006
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b17", "b18", "b19", "b20", "b21", "b22", "b23" ], "table_ref": [], "text": "The reinforced concrete shear wall systems have been widely used in high-rise buildings [1,2]. The shear wall layout plays a crucial role in earthquake resistance.\nHowever, the design of shear wall layout often needs years of design experience and constant trial and error. Therefore, some studies on automatic shear wall layout have been conducted. For instance, Zhang et al. [3] combined architectural layout and structural performance to generate shear wall layouts based on the modified evolutionary algorithm (GA). In another work, Gan et al. [4] applied parametric modelling and a novel GA optimization method to generate the structural topology and optimize member dimensions. Lou et al. [5] developed a shear wall layout optimization strategy for minimizing the structural weight with constraints on the story drift and period ratio based on the tabu search. Tafraout et al. [6] proposed an approach to optimize the wall layouts considering the performance of the slab and a set of general structural guidelines and seismic design rules. Lou et al. [7] combined a response surface methodology (RSM) with a discrete Particle Swarm Optimization (PSO) technique to optimize member sizes. The aforementioned studies primarily employ intelligent evolutionary algorithms. They usually need lots of time to make iterations for the optimal solution.\nOver the past few years, with the development of artificial neural networks (ANN), several deep learning-based approaches have also been investigated. Pizarro et al. [8] used convolutional neural network (CNN) models by combining two independent floor plan prediction networks to generate the shear wall layouts. Liao et al. [9] employed generative adversarial networks (GAN) to generate shear wall layouts utilizing abstracted, semantically interpreted, classified, and parameterized data. For sake of improving the local design of shear wall layout, Zhao et al. [10] proposed an attentionenhanced GAN and generated more reasonable layouts in local zones, such as elevator shafts. In another work, Zhao et al. [11] utilized graph neural networks (GNNs) by representing a shear wall layout with a graph, which can ingeniously reflect the topological characteristics of shear wall layouts.\nThe aforementioned research can rapidly obtain design results by extracting the designer's experience. However, to achieve satisfactory generative outcomes, a considerable amount of paired data is often required for training neural networks.\nMoreover, the convergence of the network model typically necessitates continuous debugging and analysis by specialized researchers. Furthermore, the generated results usually yield only a single option, limiting the choices available to the designer.\nArtificial Intelligence Generated Content (AIGC), encompassing natural language, music, and images, has experienced explosive progress recently, primarily attributed to the employment of large-scale models. In terms of natural language processing, the ChatGPT series [12][13][14] has achieved remarkable results. Moreover, the open-source Large Language Model Meta AI (LLaMA) [15,16] has the potential to aid individuals in developing their personalized reasoning assistants. In the field of image generation, Diffusion models [17] have found wide-ranging applications. DALL-E-2 [18] and Midjourney [19] generates images from natural language descriptions based on the user's prompts, which both achieved good results. Moreover, Stable Diffusion [20] not only has powerful image generation capabilities but also fosters a large open-source community [21], which helps users built their personalized AI.\nIn addition, numerous large model fine-tuning approaches such as Hypernetworks [22], DreamBooth [23], LoRA [24] , and others have helped the public obtain their own personalized AI design assistants. Especially, LoRA achieves impressive results by freezing the pre-trained model's weight parameters and adding a bypass operation that performs dimensionality reduction followed by an increase in dimensionality, simulating intrinsic rank, as shown in Fig. 1. It allows for low-cost fine-tuning of large models and yields satisfactory outcomes. However, Stable Diffusion and LoRA have not yet been applied in shear wall layout design. The remainder of this paper is organized as follows. Section 2 details the proposed methodology, including two parts: training of the LoRa network and application. Section 3 compares the design performance of the proposed method with existing design approaches. Finally, Section 4 concludes the paper and examines the research applicability." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "The structure of our proposed methodology framework for training and implementing a personalized AI assistant in shear wall layout design is depicted in Fig. 2. This methodology principally comprises two phases: training the personalized AI assistant and applying the personalized AI assistant. The first phase necessitates collecting preferred data with the aid of the provided automatic pre-processor and training the LoRA network using this data. The application of the personalized AI assistant encompasses five steps: processing the architectural CAD; setting design parameters using the recommended values after trial; generating shear wall layouts with the previously trained LoRA; selecting the favored layout from the generated ones and making fine adjustments using PowerPoint (PPT); and utilizing the supplied postprocessor for calculations in structural design software such as SAP2000 or PKPM. " }, { "figure_ref": [], "heading": "Training the personalized LoRA network", "publication_ref": [ "b19", "b24", "b25" ], "table_ref": [], "text": "Fine-tuning the Stable Diffusion with the LoRA network typically doesn't require a substantial amount of data. However, pixelating approximately forty to fifty layout plans can also pose a challenge to ordinary users. Hence, this paper, based on [20],\napplies OpenCV [25] to pixelate the separately extracted geometries of the architectural floor plan and the shear wall floor plan, as illustrated in Fig. 3. Users can select their preferred drafts (approximately forty to fifty should suffice) and utilize the program provided in this study to automate the process of dataset creation.\nOnce the data is acquired, the LoRA can trained using the GUI for Stable Diffusion trainers [26]. Due to the powerful capabilities of LoRA, numerous parameters need to be adjusted in the GUI. The size of the training images can be adjusted based on the dimensions of the input images. It should be noted that to reduce the difficulty of training, users can train drawings of the same subcategory together, such as high-rise shear wall structures in the seventh-degree area. Therefore, all labels can be the same, such as \"Seventh Degree High-Rise Building Shear Wall Structure\" or \"Shear Wall\nLayout\", and so on. Through experimentation, it has been found that satisfactory results can be achieved with twenty epochs of training, each consisting of 100 steps. " }, { "figure_ref": [ "fig_4" ], "heading": "Fine adjustments and automated post-processing", "publication_ref": [], "table_ref": [], "text": "Upon receiving generated images from the AI assistant, designers can promptly make fine adjustments to achieve their desired layout plans. This study proposes two methods for designers to make these adjustments (see Fig. 7.). Method 1 involves using PowerPoint (PPT) to place red blocks of the same width as the walls in the desired locations, then combining them and saving as an image. By utilizing the provided conversion program, the modeling in structural calculation software can be completed.\nAlternatively, Method 2 allows for direct modeling in structural calculation software using the provided conversion program, with adjustments made within the software itself. The calculation software commonly used in China, such as PKPM, and internationally, like SAP2000, can facilitate this process. " }, { "figure_ref": [], "heading": "Evaluation metrics", "publication_ref": [ "b27" ], "table_ref": [ "tab_0" ], "text": "Structural design is a complex and rigorous task. In shear wall structures, the plan layout of shear walls significantly affects the seismic performance of the structure [28].\nTo validate the proposed method and relevant parameters in this study, a series of evaluation metrics are introduced.\nFirst, each country has specific seismic design codes for earthquake resistance . In this study, taking the related Chinese code requirements as an example, five global seismic structural indicators are used as important evaluation metrics, as shown in Table 1. 𝛿 𝑑𝑟𝑖𝑓𝑡 represents the inter-story drift angle, which restricts the horizontal displacement of structures under normal usage conditions to ensure the required stiffness of high-rise structures, preventing excessive displacement that may affect the structure's load-bearing capacity, stability, and usage requirements. 𝑟 𝑡𝑜𝑟𝑠𝑖𝑜𝑛 refers to the torsional ratio, which serves as an important basis for determining the presence and degree of torsional irregularity in structures. 𝑟 𝑝𝑒𝑟𝑖𝑜𝑑 stands for the period ratio, which also controls the relative relationship between lateral stiffness and torsional stiffness, making the plan layout of lateral force-resisting elements more effective and rational, and preventing the structure from experiencing excessive torsional effects (relative to lateral displacement). " }, { "figure_ref": [ "fig_5" ], "heading": "𝑟 𝑝𝑒𝑟𝑖𝑜𝑑", "publication_ref": [], "table_ref": [ "tab_1", "tab_1" ], "text": "Period ratio ≤ 0.9\nIn addition to the global seismic structural indicators, the planar geometric shape of the shear wall segments also greatly affects the seismic capacity of the structure. The presence of irregular columns complicates the force distribution and makes it difficult to predict, leading to local stress concentrations, which in turn can cause bending, twisting, and buckling instability, affecting the overall structural stability. Such irregularities should be avoided in typical shear wall structures. Similarly, short-limb shear walls (Fig. 8) have poor seismic performance and are prone to cracking under horizontal forces, so they should be avoided as much as possible. The aforementioned two indicators are represented by 𝑁 𝑐𝑜𝑙𝑢𝑚𝑛 and 𝑁 𝑠ℎ𝑜𝑟𝑡 , as shown in Table 2. Besides, rectangular columns also should be avoided, and they are also accounted for in 𝑁 𝑐𝑜𝑙𝑢𝑚𝑛 .\nFurthermore, material consumption is an essential aspect of design and can be approximated by the total wall length (𝐿 𝑤𝑎𝑙𝑙 ) of the shear walls.\nApart from the qualitative metrics mentioned above, the shear wall layout must be reasonable, which requires a level of expertise achievable only through years of design experience. Therefore, designers with multiple years of experience will provide a comprehensive score, ranging from 0 to 10, as listed in Table 2. " }, { "figure_ref": [ "fig_9" ], "heading": "Discussions of design performance", "publication_ref": [ "b9", "b10" ], "table_ref": [], "text": "To compare with the studies [10,11] From the scores of critics (Fig. 12), it can be seen that the total length of shear walls in the proposed method is shorter, and both arrangements generally have more columns and short-limb shear walls. The shear walls become more reasonable after fine-tuning.\nCase Critic Score GAN GNN Preferred (AiAssist) Adjusted 6.77 7.13 5.92 6.17 " }, { "figure_ref": [], "heading": "Conclusion and future work", "publication_ref": [], "table_ref": [], "text": "The experiments conducted in this study have demonstrated that the proposed method can effectively assist designers in their work. Additionally, users can follow the procedures and open-source software provided in this paper to train and improve their own AI models." } ]
Shear wall structures are widely used in high-rise residential buildings, and the layout of shear walls requires many years of design experience and iterative trial and error. Currently, there are methods based on heuristic algorithms, but they generate results too slowly. Those based on Generative Adversarial Networks (GANs) or Graph Neural Networks (GNNs) can only generate single arrangements and require large amounts of training data. At present, Stable Diffusion is being widely used, and by using the Low-Rank Adaptation (LoRA) method to fine-tune large models with small amounts of data, good generative results can be achieved. Therefore, this paper proposes a personalized AI assistant for shear wall layout based on Stable Diffusion, which has been proven to produce good generative results through testing.
Constructing a personalized AI assistant for shear wall layout using Stable Diffusion
[ { "figure_caption": "Fig. 1 .1Fig. 1. LoRA (from [24])", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Fig. 2 .2Fig. 2. Methodology framework for training and applying the personalized AI assistant in shear wall layout design", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Fig. 3 . 131Fig. 3. Process of obtaining training images", "figure_data": "", "figure_id": "fig_2", "figure_label": "31", "figure_type": "figure" }, { "figure_caption": "Fig. 5 . 2 . 2 . 2 Fig. 6 .52226Fig. 5. Flowchart of getting required pixel format", "figure_data": "", "figure_id": "fig_3", "figure_label": "52226", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. Flowchart of obtaining required pixel format", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. Different plane geometries of short-limb shear walls", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": ", the dataset used for LoRA network training is part L1_7 of the open-source dataset [29]. To validate the proposed method without being overly influenced by specialized fine-tuning, a second-year graduate student in the field of architectural structure serves as the user for the selection and fine-tuning process. The input architectural floor plan is shown in Fig. 9. The floor plans generated by GAN and GNN are shown in Fig. 10. The floor plans generated by AIassist are shown in Fig. 11.", "figure_data": "", "figure_id": "fig_6", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .Fig. 10 .Fig. 11 .91011Fig. 9. Cases for comparison (2× 5)", "figure_data": "", "figure_id": "fig_7", "figure_label": "91011", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig. 12. Score of critics", "figure_data": "", "figure_id": "fig_9", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 14 .14Fig. 14. Comparison of each metric", "figure_data": "", "figure_id": "fig_10", "figure_label": "14", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "global seismic structural indicatorsItemNameDescriptionCode limit11/𝛿 𝑑𝑟𝑖𝑓𝑡Inter-story drift angle≤ 1/10002𝑟 𝑡𝑜𝑟𝑠𝑖𝑜𝑛Torsional ratio≤ 1.4", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "", "figure_data": "Structural indicatorsItemNameDescription4𝑁 𝑐𝑜𝑙𝑢𝑚𝑛Number of irregular and rectangular columns5𝑁 𝑠ℎ𝑜𝑟𝑡Number of short-limb shear walls6𝐿 𝑤𝑎𝑙𝑙Total length of shear walls7𝑆 𝑙𝑎𝑦𝑜𝑢𝑡Rationality of the layout, ranging from 0 to 10", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Lufeng Wang; Jiepeng Liu; Guozhong Cheng; En Liu; Wei Chen
[ { "authors": "Z Wang; W Pan; Z Zhang", "journal": "Structures", "ref_id": "b0", "title": "High-rise modular buildings with innovative precast concrete shear walls as a lateral force resisting system", "year": "2020" }, { "authors": "H Hu; J Liu; G Cheng; Y Ding; Y F Chen", "journal": "Advances in Structural Engineering", "ref_id": "b1", "title": "Seismic behavior of hybrid coupled shear wall with replaceable U-shape steel coupling beam using terrestrial laser scanning", "year": "2022" }, { "authors": "Y Zhang; C Mueller", "journal": "Engineering Structures", "ref_id": "b2", "title": "Shear wall layout optimization for conceptual design of tall buildings", "year": "2017" }, { "authors": "V J L Gan; C L Wong; K T Tse; J C P Cheng; I M C Lo; C M Chan", "journal": "Advanced Engineering Informatics", "ref_id": "b3", "title": "Parametric modelling and evolutionary optimization for cost-optimal and low-carbon design of high-rise reinforced concrete buildings", "year": "2019" }, { "authors": "H Lou; B Gao; F Jin; Y Wan; Y Wang", "journal": "Computers & Structures", "ref_id": "b4", "title": "Shear wall layout optimization strategy for high-rise buildings based on conceptual design and data-driven tabu search", "year": "2021" }, { "authors": "S Tafraout; N Bourahla; Y Bourahla; A Mebarki", "journal": "Automation in Construction", "ref_id": "b5", "title": "Automatic structural design of RC wall-slab buildings using a genetic algorithm with application in BIM environment", "year": "2019" }, { "authors": "H Lou; Z Xiao; Y Wan; G Quan; F Jin; B Gao; H Lu", "journal": "Journal of Building Engineering", "ref_id": "b6", "title": "Size optimization design of members for shear wall high-rise buildings", "year": "2022" }, { "authors": "P N Pizarro; L M Massone; F R Rojas; R O Ruiz", "journal": "Engineering Structures", "ref_id": "b7", "title": "Use of convolutional networks in the conceptual structural design of shear wall buildings layout", "year": "2021" }, { "authors": "W Liao; X Lu; Y Huang; Z Zheng; Y Lin", "journal": "Automation in Construction", "ref_id": "b8", "title": "Automated structural design of shear wall residential buildings using generative adversarial networks", "year": "2021" }, { "authors": "P Zhao; W Liao; Y Huang; X Lu", "journal": "Engineering Structures", "ref_id": "b9", "title": "Intelligent design of shear wall layout based on attention-enhanced generative adversarial network", "year": "2023" }, { "authors": "P Zhao; W Liao; Y Huang; X Lu", "journal": "Advanced Engineering Informatics", "ref_id": "b10", "title": "Intelligent design of shear wall layout based on graph neural networks", "year": "2023" }, { "authors": "A Radford; K Narasimhan; T Salimans; I Sutskever", "journal": "", "ref_id": "b11", "title": "Improving Language Understanding by Generative Pre-Training", "year": "" }, { "authors": "A Radford; J Wu; R Child; D Luan; D Amodei; I Sutskever", "journal": "", "ref_id": "b12", "title": "Language Models are Unsupervised Multitask Learners", "year": "" }, { "authors": "T B Brown; B Mann; N Ryder; M Subbiah; J Kaplan; P Dhariwal; A Neelakantan; P Shyam; G Sastry; A Askell; S Agarwal; A Herbert-Voss; G Krueger; T Henighan; R Child; A Ramesh; D M Ziegler; J Wu; C Winter; C Hesse; M Chen; E Sigler; M Litwin; S Gray; B Chess; J Clark; C Berner; S Mccandlish; A Radford; I Sutskever; D Amodei", "journal": "", "ref_id": "b13", "title": "Language Models are Few-Shot Learners", "year": "2020-05-11" }, { "authors": "H Touvron; T Lavril; G Izacard; X Martinet; M.-A Lachaux; T Lacroix; B Rozière; N Goyal; E Hambro; F Azhar; A Rodriguez; A Joulin; E Grave; G Lample", "journal": "", "ref_id": "b14", "title": "LLaMA: Open and Efficient Foundation Language Models", "year": "2023-05-11" }, { "authors": " Llama", "journal": "", "ref_id": "b15", "title": "", "year": "2023-05-11" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "", "ref_id": "b16", "title": "Denoising Diffusion Probabilistic Models", "year": "2020-05-17" }, { "authors": "A Ramesh; M Pavlov; G Goh; S Gray; C Voss; A Radford; M Chen; I Sutskever", "journal": "", "ref_id": "b17", "title": "Zero-Shot Text-to-Image Generation", "year": "2021-05-11" }, { "authors": "Midjourney Midjourney", "journal": "", "ref_id": "b18", "title": "", "year": "2023-05-17" }, { "authors": "R Rombach; A Blattmann; D Lorenz; P Esser; B Ommer", "journal": "", "ref_id": "b19", "title": "High-Resolution Image Synthesis with Latent Diffusion Models", "year": "2022" }, { "authors": "", "journal": "", "ref_id": "b20", "title": "CompVis/stable-diffusion: A latent text-to-image diffusion model", "year": "2023-05-11" }, { "authors": "D Ha; A Dai; Q V Le", "journal": "", "ref_id": "b21", "title": "HyperNetworks", "year": "2016-05-11" }, { "authors": "N Ruiz; Y Li; V Jampani; Y Pritch; M Rubinstein; K Aberman", "journal": "", "ref_id": "b22", "title": "DreamBooth: Fine Tuning Text-to-Image Diffusion Models for Subject-Driven Generation", "year": "2023-05-11" }, { "authors": "E J Hu; Y Shen; P Wallis; Z Allen-Zhu; Y Li; S Wang; L Wang; W Chen", "journal": "", "ref_id": "b23", "title": "LoRA: Low-Rank Adaptation of Large Language Models", "year": "2021-05-11" }, { "authors": "", "journal": "", "ref_id": "b24", "title": "opencv-python: Wrapper package for OpenCV python bindings", "year": "2023-04-10" }, { "authors": "", "journal": "", "ref_id": "b25", "title": "Kohya's GUI", "year": "2023-05-10" }, { "authors": "", "journal": "", "ref_id": "b26", "title": "AUTOMATIC1111/stable-diffusion-webui: Stable Diffusion web UI", "year": "2023-05-10" }, { "authors": "X Zhou; L Wang; J Liu; G Cheng; D Chen; P Yu", "journal": "Automation in Construction", "ref_id": "b27", "title": "Automated structural design of shear wall structures based on modified genetic algorithm and prior knowledge", "year": "2022" }, { "authors": "Wenjie Liao", "journal": "StructGAN_v", "ref_id": "b28", "title": "", "year": "2023-05-15" } ]
[]
2023-06-01
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17", "b49", "b17", "b46" ], "table_ref": [], "text": "Metaphor is a pervasive phenomenon in human language (Lakoff and Johnson, 2008). It is defined as \"mapping of conceptual structure from a source to a target domain\" (Ruiz de Mendoza Ibáñez, 2017). Depending on the dimension of complexity of metaphor, authors distinguish two types of metaphors: image metaphors and conceptual metaphors (Lakoff and Johnson, 2008). Image metaphors compare one single image in one domain with another image belonging to another domain, such as the image metaphor \"she is as good as gold\". Conceptual metaphors are more complex at the conceptual and cognitive levels, and they refer to the resemblance established between a whole set of experiences, such as the metaphor in \"life is a journey\", which implies a whole set of elements activated within the metaphoric target domain. Metaphor-based terms, or the so-called terminological metaphors, are common in specialised languages. Their use is abundant, as they help in the conceptualisation of phenomena and their description by establishing a resemblance between images and domains. They also help in understanding abstract phenomena in terms of more concrete notions and in modelling scientific thought (Urena Gomez-Moreno and Faber, 2010). However, the identification of metaphorbased terms in discourse is an arduous task. This leads in some cases to committing errors during translation processes and lexicographic tasks. The process is even more challenging when it comes to machine translation, both in the cases of single-word terms and multi-word terms, which are represented by Multiword Expressions (MWEs). The main common error while carrying out the translation processes is that the metaphorical lexical items forming part of a term would be transferred literally into other languages without taking into consideration its metaphoric and cultural dimension or without taking into account that they form part of an MWE.\nPrevious studies focused on the extraction of metaphorical terms from discourse, such as Mu, Yannakoudakis, and Shutova (2019) and Razali et al. (2022); however, to the best of our knowledge, there are no programs that could automatically retrieve those terms both as single-word terms and MWEs in specialised languages. This study seeks to fill in this gap and proposes a novel method based on transformer models (Premasiri et al., 2022;Premasiri and Ranasinghe, 2022); (Ranasinghe et al., 2021) for automatic extraction of metaphor-based terms from the specialised domain of Botany and concerning the names of flowers and plants in English and Spanish. The main contributions of this study are:\n1. We empirically evaluate thirteen discriminative transformer models and one generative transformer model (Chat-GPT) for the tasks of metaphoric flower and plant names identification on English and Spanish datasets.\n2. We show that discriminative models perform better in the metaphoric flower and plant names identification task.\n3. We release new annotated datasets for metaphoric names identification in English and Spanish.\n4. We make our code freely available for further research1 .\nThis paper is organised as follows: in Section 2 we present previous related work. In Section 3 we describe the dataset used and its annotation process. In Section 4 we detail the experimental set-up and methodology, while in Section 5 we report our experiment's results and evaluation. Finally, we summarise the main conclusions and propose future work in Section 6." }, { "figure_ref": [], "heading": "Related work", "publication_ref": [ "b12", "b25", "b56", "b19", "b64", "b64", "b27", "b53", "b46", "b14", "b14", "b63", "b47" ], "table_ref": [], "text": "The study of metaphor-based terms in discourse has been a subject of study in the last few decades. One of the main concerns in this field is the detection of metaphorbased words in discourse. With this aim, the Pragglejaz Group suggested a method for the manual identification of metaphor, called Metaphor Identification Procedure (MIP) (Group, 2007). This method has been used extensively (Nacey et al., 2019). Studies like Turney et al. (2011), Jang et al. (2015) and Coll-Florit and Climent (2019) have a similar approach.\nOther projects such as the VU Amsterdam Metaphor Corpus (Leong et al., 2020) offer a manually annotated corpus for all metaphorical language use. Moreover, studies like Yaneva (2016), show how the use of metaphor and figurative language in discourse is of utmost difficulty for people with Autism Spectrum Disorder (ASD); hence, studies like Yaneva (2016) and Štajner et al. (2017) endeavour to identify and disambiguate complex sentences which contain metaphor and metonymy among other features through the application of Complex Word Identification modules. The above studies were partially inspired by the FIRST Project2 (Orăsan, Evans, and Mitkov, 2018) and the development of the Open Book tool which helps people with ASD.\nConcurrently, one of the recent concerns of Natural Language Processing (NLP) applications and Machine Translation (MT) technologies is the automatic identification of metaphorbased words in discourse through Deep Learning Methods (DLM). For example, Mu, Yannakoudakis, and Shutova (2019) suggest working with large corpora and training simple gradient boosting classifiers on representations of an utterance and its surrounding discourse learned with a variety of document embedding methods\". Su et al. (2020) focus on token-level metaphor detection paradigm and propose using an end-to-end deep metaphor detection model.\nAuthors like Razali et al. (2022) use machine learning to automatically detect metaphor instances in short texts by implementing Support Vector Machine algorithms, while other authors like Gutierrez et al. (2016) propose modelling metaphor explicitly within compositional distributional semantic models to improve the resulting vector representations. Those authors classify the already used methods in the following categories: clustering; topic modelling; topical structure and imageability analysis; semantic similarity graphs and feature-based classifiers (Gutierrez et al., 2016). Recent approaches are more centred on using dense embedding methods (Vitez et al., 2022).\nOn the other hand, the study of metaphorbased terms in specialised discourse has been subject to scientific and cognitive studies.\nThe automatic identification of metaphor-based terms is considered a substantial challenge.\nSome studies highlight the importance of automatic extraction of terms in specialised discourse (Rodríguez Penagos and others, 2005) while other studies, such as Urena Gomez-Moreno and Faber (2010), propose a semi-automatic method for term retrieval in the domain of Marine Biology. However, to the best of our knowledge, there have been no previous studies or methodologies which cover the automatic extraction of those terms from scientific discourse in other domains and no previous studies were carried out in the domain of Botany." }, { "figure_ref": [], "heading": "Data", "publication_ref": [ "b10", "b44", "b26", "b10", "b10", "b11", "b12", "b9", "b54" ], "table_ref": [], "text": "Specialised discourse is rich in metaphorbased terms; Botany is no exception. The semantic motivations for plant names are usually influenced by the appearance of the plant, the place of its occurrence, the properties of the plant, its usage, as well as other motivations typical of a specific genus of species (Dębowiak and Waniakowa, 2019). Many studies have shown that metaphor is one of the most frequent techniques to coin flowers and plants names (Rastall, 1996); (Nissan, 2014); (Dębowiak and Waniakowa, 2019). This metaphoric use may give clues to cultural references related to legends and beliefs associated with plants in general, like their healing properties and supposed magical powers (Dębowiak and Waniakowa, 2019). At the same time, this shows that this metaphorical use may vary among languages and cultures. From another perspective, studies like Goodman (1963) highlight the importance of flower names based on metaphor for the study of colour and its comparison among languages. For this reason, we consider the study of metaphor-based terms in this domain relevant as a case-study.\nThe dataset we use to extract metaphorbased terms in English is the Encyclopaedia of Flowers and Plants, published by the American Horticultural Society (Brickell, 2012). We selected this edition as it is available in a digitalised format in the online library of the Internet Archive. This Encyclopaedia consists of 522,707 words. It contains a dictionary of names of flowers from around the world, with approximately 8000 terms referring to both scientific and common names and their origins, as well as 4000 images. It is divided into the following sections: firstly it has an introduction about how to use the book, plant names and origins and relevant information on how to create a garden and how to select plants. This introductory part shows that it is aimed at both professionals and laypersons. Secondly, it has a plant catalogue, subdivided into categories such as trees, shrubs, roses, climbers and wall shrubs, perennials, annuals, biennials and bedding, rock plants, bulbs, water and bog plants as well as tender and exotic plants. All those subsections contain rich contexts on each term, concerning the origin, uses, habitat, size, etc. Finally, the En-cyclopaedia offers a dictionary section with an index of common names and glossary of terms. We benefited from this last section to extract and annotate terms. The advantage of using this Encyclopaedia is that it includes a wide range of varieties of flowers and plants from all around the world. For this reason, the obtained results may be useful to be applied in different contexts and in multidisciplinary studies.\nThe data was pre-processed by annotating the proper names and their metaphorical condition. The MIP criteria for metaphor identification (Group, 2007) was adapted to annotate the terms, considering a term as metaphor-based when one or more of the lexical units forming it or its etymology give evidence that they belong to different domains, based on its meaning in the dictionary. The annotated names represent both image metaphors and conceptual metaphors. An example of image metaphors, is the oneword name of the flower Edelwiess which is a combination between the two lexical units edel which means noble and weiss, which means white in German. This name represents an image metaphor where the flower is called as so as it symbolises purity. The scientific name of this flower is Leontopodium Alpinum, an MWE with Greek origin and etymology. It is also an image metaphor, as the lexical unit Leontopodium means lion's foot (Dweck, 2004), the resemblance is established between the for of the petals of the flowers and the aspect of the foot of a lion. Another example are the flowers Sunburst and Moonlight. The name of the flower Sunburst shows the resemblance between the colours of the flower and the colours of the sun, while the flower called Moonlight, alludes to the resemblance between the flower and the light of the moon. Other metaphor based-names represent a conceptual image, such as the MWE flower name forget-me-not which refer to the association between the heart-shaped blue flowers that reminds the person of his or her beloved one; or the one-word name of the flower cascade which associate the aspect of a flower with the whole process of the water falling in a real cascade.\nApart from the Encyclopaedia of Plants and Flowers, we also compiled a corpus of other resources related to Botany in English. It consists of 437,663 words. Some of the texts are monographs, others are jour-nal articles, and some texts are retrieved from other online resources. The full list of references used to compile the English corpus are listed in Appendix 1. With respect to the Spanish dataset, we have annotated a list of flowers and plants names provided in selected monographs and glossaries following the same criteria as in the case of the English terms. Above all, we used books and articles in the domain of Botany and botanical glossaries, such as the glossaries provided in Los Áraboles en España (de Lorenzo Cáceres, 1999), Biología de la Conservación de Plantas en Sierra Nevada (Peñas and Lorite, 2019) and the glossary of scientific names of plants and their vernacular names provided by the Entomological Museum in Leon in the Bio-Nica webpage3 . The list obtained from this source consists of more than 5000 scientific and vernacular names of flowers and plants. As for the book Los Áraboles en España, it consists of almost 155,000 words with more than 600 terms in the section of Glossary. The book describes the details of each plant, its family names, its vernacular names and synonyms, its origin, etymology, description and cultivation information. It also provides illustrative images of each plant. The book Biología de la Conservación de Plantas en Sierra Nevada was also valuable as some of its chapters contained lists of scientific names of endemic flowers from Sierra Nevada and its common names too. In order to enhance the datasets, we also added more specialised, semi-specialised and informative texts in the domain of botany to obtain more rich contexts. It consists of 460,258 words. The full list of the sources used to compile the Spanish corpus are listed in Appendix.\nWith this paper, we release datasets of English and Spanish flower and plant names with their annotations metaphoric or not metaphoric. The English dataset consists of 6330 total plant and flower names as a combination of 1869 metaphorical names and 4461 non-metaphorical names. The Spanish dataset consists of 875 metaphoric names and 4,988 non-metaphoric names out of 5863 total.\nData Preparation Since we model the metaphoric name identification task as a token level classification task, we used IOB format tagging for our corpus. IOB format is widely used in token level classification tasks (Tjong Kim Sang and De Meulder, 2003) where B -Beginning, I -Inside and Ooutside of a metaphoric flower or plant name; Table 1 shows an example IOB annotation. After tagging the sentences from the corpus, we identified that there were a very high number of sentences which do not have a single metaphoric name. In other words, the majority of the sentences only had 'O' as the tag for all their words. Since this has a negative impact on the model training process, we decided to balance the dataset by removing some sentences. Then we shuffled all the sentences and divided the training and test sets. Finally, we had 2020 total sentences divided 1500 and 520 in English training and test set respectively. For Spanish, we used only 250 sentences as the dataset.\nTest sets were the same for discriminative and generative experiments. The only thing is that in the generative approach, we did not use the training set, since we cannot train ChatGPT." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Discriminative", "publication_ref": [ "b61", "b8", "b21", "b8", "b8", "b5", "b5", "b3", "b0", "b0", "b8", "b5" ], "table_ref": [], "text": "Models Transformers (Vaswani et al., 2017) have been a major breakthrough in Deep Learning research, since they provide a robust mechanism based on attention for the neural networks to flow information without recurrence and convolution. This architecture has produced state-of-the-art results in many NLP applications. With the introduction of BERT (Devlin et al., 2019), which employs the transformers architecture, the pre-trained large language models have played an important role in pushing the boundaries of all NLP tasks such as text classification (Ranasinghe, Zampieri, and Hettiarachchi, 2019) , (Uyangodage, Ranasinghe, and Hettiarachchi, 2021), question answering (Premasiri et al., 2022), text similarity (Mitkov et al., 2023) etc. and achieving new state-of-the-art. With this motivation, we use transformers as our primary experimental setup and evaluate multiple pre-trained language models. These models follow similar architectures to BERT (Devlin et al., 2019) while they are pre-trained on different corpora and different objectives. Figure 1 (Ranasinghe and Zampieri, 2021) shows the transformer architecture we used where we input sentences which contain metaphoric flower and plant names, then we obtain BIO tags from the output layer by adding a softmax layer on top of the last hidden state of the deep network to classify each token into one of I,O,B tags. We used several popular transformers based pre-trained language models.\nFor the experiments on English dataset, we used the cased and uncased variants of BERT base and BERT large versions.\nIn order to establish the capabilities of multilingual models, we experimented with the multilingual-bert (Devlin et al., 2019) model with its cased and uncased variants and xlm-roberta-base (Conneau et al., 2020) model and xlmroberta-large (Conneau et al., 2020) version. We further experimented with google/electrabase-discriminator (Clark et al., 2020) model which is different from BERT architecture. Finally, within these discriminative models we evaluate allenai/scibert_scivocab_cased (Beltagy, Lo, and Cohan, 2019) and allenai/scibert_scivocab_uncased (Beltagy, Lo, and Cohan, 2019) variants which are specifically pre-trained on scientific corpora. We assume that flower and plant names could appear in those corpora such that the model can leverage the learning to produce better results.\nSince Spanish is low in resources on metaphoric flower and plants names corpora, we experimented zero-shot learning for Spanish on English data. We specifically used the multilingual-bert (Devlin et al., 2019) and xlm-roberta (Conneau et al., 2020) for our experimental setting as these models provide multilingual capabilities.\nAll the models were trained for three epochs, learning rate 4e-5 with 32 training batch size and for the hardware we used a GeForce RTX 3090 GPU.\nGenerative Models While all above methods rely on the discriminative approach, which tries to identify boundaries in the data space, generative models attempt to model the placement of the data throughout the space. This approach attracted huge attention in the research community with the release of ChatGPT Since ChatGPT is a generalised conversational application, it does not essentially provide IOB tags as outputs. After experimenting with different prompts to retrieve IOB tags from ChatGPT, we decided it would be easier to retrieve the metaphoric flower or plant name in the sentence from the API 6 and No otherwise. Prompt we used: Is there a metaphoric flower name or metaphoric plant name included in the following sentence, say yes or no, if yes what is the metaphoric flower or metaphoric plant names in the sentence separately : {sentence goes here}. The outputs of ChatGPT are not uniform, and we had to post process the outputs using regular expressions to re-generate the IOB tags for evaluation.\nSince this is a token classification task, we use macro averaged Precision, Recall and F1 score as our evaluation metrics.\nP recision = T P/(T P + F P )\n(1) (3) 5 Results and Discussion\nRecall = T P/(T P + F N )(2)" }, { "figure_ref": [], "heading": "English", "publication_ref": [], "table_ref": [], "text": "The results in table 2 show the competitive performance of transformer models, in the flower and plant names classification task. Despite the fact that most of the transformer models we experimented with are not specifically pre-trained on botanic corpora, almost all discriminative models were able to produce more than 90% F1 score in the task. Interestingly, the multilingual bert model could surpass the other models and mark the top results at 92.2349% F1 score.\nAnother noteworthy observation in our study was that cased models outperformed all the respective uncased models. Even though the xlm-roberta-base was the least performer in discriminative models, the performance gap to the best performer is only 2.3789% which shows the competitiveness of the transformers in token level classification tasks.\nEven though scibert models are specifically trained on scientific corpus, these models were not able to outperform the bert multilingual model, which shows that the general knowledge could play a significant role in metaphoric identification task.\nWhile ChatGPT seems very good at handling general text, it does not perform well in metaphoric names identification in flower and plant names. Given that we cannot further fine-tune the GPT model with our corpus, the ChatGPT is struggling to identify and generate text with metaphoric flower and plant names. Another important observation was, ChatGPT was not producing consistent results because we could observe different results for the same sentence if we retrieve twice. This shows that ChatGPT is uncertain about its answers on metaphoric flower and plant names, maybe with GPT-4 it may have a better understanding with more data. We leave it for future work." }, { "figure_ref": [], "heading": "Spanish", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Table 3 shows the results on Spanish data in zero-shot configuration on English data. We note that in all models, learning from English data has lead to decent results on Spanish metaphoric flower and plant names identification. Interestingly, bert-base-multilingualcased model performs better in both languages marking over 52% F1 score on Spanish. It was noted that there is a significant difference between English and Spanish results, as expected because the English models were fine-tuned on English metaphoric data, but we were not able to do that in Spanish due to lack of resources.\nChatGPT has kept similar performance for Spanish recording over 51% F1 score. This is very close value to the best discriminative model but could not outperform bert-basemultilingual-cased model. Unlike ChatGPT, since discriminative models are able to finetune, we conjecture that their performance could be boosted with a fine-tuning step with more data." }, { "figure_ref": [], "heading": "Conclusions", "publication_ref": [ "b28", "b49" ], "table_ref": [], "text": "The detection of metaphorical terms is an important research area for many NLP applications. Detecting metaphor-based terms of flowers and plants may give birth to different multidisciplinary research and applications. On the one hand, it may help in overcoming the so-called plant awareness disparity or plant blindness (Parsley, 2020) as the metaphoric factor would help in remembering the names of flowers and plants and their aspect. It may also give insightful information to Cognitive Studies towards understanding phenomena such as metaphor and metonymy, and even towards a more comprehensive understanding of conceptual complexes (Ruiz de Mendoza Ibáñez, 2017). This may be carried out by comprehending the associations between metaphoric names and the image of the flower and plant rep-resenting them, and how the resemblance of images or the metonymic aspect is conceptualised through the coinage of terms. On the other hand, this information is also helpful for the studies of representation of abstract phenomena in art and its comprehension across languages. The automatic extraction of those terms is a step towards achieving more comprehensive and accurate results. In addition, this may help rendering texts more accessible to people with ASD. At the same time, these types of studies may also help in the development of software or mobile applications to be used by both laypersons and professionals.\nIn conclusion, we show that the state-ofthe-art transformers are well capable of performing excellently in identifying metaphoric flower and plant names." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "Part of this research was carried within the framework of the projects the projects PID2020-118369GB-I00 and A-HUM-600-UGR20, funded by the Spanish Ministry of Science and Innovation and the Regional Government of Andalusia. Funding was also provided by an FPU grant (FPU18/05327) given by the Spanish Ministry of Education. We also want to thank Elvira Cámara Aguilera for her help in the annotation process." }, { "figure_ref": [], "heading": "A Appendix 1: English And Spanish Corpus", "publication_ref": [], "table_ref": [], "text": "List of references used to compile the corpus in English and Spanish." } ]
The domain of Botany is rich with metaphorical terms. Those terms play an important role in the description and identification of flowers and plants. However, the identification of such terms in discourse is an arduous task. This leads in some cases to committing errors during translation processes and lexicographic tasks. The process is even more challenging when it comes to machine translation, both in the cases of single-word terms and multi-word terms. One of the recent concerns of Natural Language Processing (NLP) applications and Machine Translation (MT) technologies is the automatic identification of metaphor-based words in discourse through Deep Learning (DL). In this study, we seek to fill this gap through the use of thirteen popular transformer based models, as well as ChatGPT, and we show that discriminative models perform better than GPT-3.5 model with our best performer reporting 92.2349% F1 score in metaphoric flower and plant names identification task.
Deep Learning Methods for Extracting Metaphorical Names of Flowers and Plants Métodos de aprendizaje profundo para la extracción de nombres metafóricos de flores y plantas
[ { "figure_caption": "Figure 1: Transformers architecture for token level classification", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "F1 = 22* (Precision * Recall)/(Precision + Recall)", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "4 by openAI 5 . Resutls for English", "figure_data": "calliandra haematocephala (Red powder puff) is an evergreen, spreading shrub O O B I I O O O O OTable 1: BIO annotation exampleModel bert-base-uncased bert-base-cased bert-large-uncased bert-large-cased bert-base-multilingual-uncased bert-base-multilingual-cased xlm-roberta-base xlm-roberta-large xlnet-base-cased roberta-base google/electra-base-discriminator allenai/scibert_scivocab_uncased allenai/scibert_scivocab_cased ChatGPTPrecision Recall 92.8204 89.4824 91.0784 F1 93.4157 90.8295 92.0801 92.8424 90.6789 91.7219 93.4157 90.8295 92.0801 91.7655 89.6286 90.6648 93.3662 91.1718 92.2349 90.1220 89.6020 89.8560 90.8455 89.4348 90.1220 89.8189 90.8769 90.3402 91.9779 89.8922 90.9025 92.0412 91.1617 91.5898 91.7084 90.3453 91.0071 92.3408 90.6466 91.4750 62.1516 45.1943 48.1392", "figure_id": "tab_0", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on metaphoric flower and plant names identification in Spanish; P -The macro averaged precision, R -The macro averaged Recall, F1 -The macro averaged F1 score.", "figure_data": "6 https://bit.ly/3OLCWFn", "figure_id": "tab_1", "figure_label": "3", "figure_type": "table" } ]
Amal Haddad Haddad; Damith Premasiri; Tharindu Ranasinghe; Ruslan Mitkov
[ { "authors": "Lo Beltagy; Cohan ; ] Beltagy; I ; K Lo; A Cohan", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "SciBERT: A pretrained language model for scientific text", "year": "2019-11" }, { "authors": "C Brickell", "journal": "Dorling Kindersley", "ref_id": "b1", "title": "Encyclopedia of plants and flowers", "year": "2012" }, { "authors": " Clark", "journal": "", "ref_id": "b2", "title": "", "year": "2020" }, { "authors": "K Clark; M.-T Luong; Q V Le; C D Manning", "journal": "", "ref_id": "b3", "title": "A new methodology for conceptual metaphor detection and formulation in corpora: A case study on a mental health corpus", "year": "2019" }, { "authors": " Conneau", "journal": "", "ref_id": "b4", "title": "", "year": "2020" }, { "authors": "A Conneau; K Khandelwal; N Goyal; V Chaudhary; G Wenzek; F Guzmán; E Grave; M Ott; L Zettlemoyer; V Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020-07" }, { "authors": "Lorenzo Cáceres; ; De Lorenzo Cáceres; J M S ", "journal": "Mundi-Prensa", "ref_id": "b6", "title": "Los Árboles en España: Manual de Identificación", "year": "1999" }, { "authors": " Devlin", "journal": "", "ref_id": "b7", "title": "", "year": "2019" }, { "authors": "J Devlin; M.-W Chang; K Lee; K Toutanova", "journal": "", "ref_id": "b8", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019-06" }, { "authors": "A Dweck", "journal": "Sofw Journal", "ref_id": "b9", "title": "A review of edelweiss", "year": "2004" }, { "authors": " Dębowiak; P Waniakowa ; Dębowiak; J Waniakowa", "journal": "", "ref_id": "b10", "title": "Semantic motivation of plant names as a part of their etymology", "year": "2019" }, { "authors": "J S Goodman", "journal": "Anthropological Linguistics", "ref_id": "b11", "title": "Malayalam color categories", "year": "1963" }, { "authors": "P Group", "journal": "Metaphor and symbol", "ref_id": "b12", "title": "Mip: A method for identifying metaphorically used words in discourse", "year": "2007" }, { "authors": " Gutierrez", "journal": "", "ref_id": "b13", "title": "", "year": "2016" }, { "authors": "E D Gutierrez; E Shutova; T Marghetis; B Bergen", "journal": "", "ref_id": "b14", "title": "Literal and metaphorical senses in compositional distributional semantic models", "year": "2016" }, { "authors": " Jang", "journal": "", "ref_id": "b15", "title": "", "year": "2015" }, { "authors": "H Jang; S Moon; Y Jo; C Rose", "journal": "", "ref_id": "b16", "title": "Metaphor detection in discourse", "year": "2015" }, { "authors": "Johnson ; Lakoff Lakoff; G ; M Johnson", "journal": "University of Chicago press", "ref_id": "b17", "title": "Metaphors we live by", "year": "2008" }, { "authors": " Leong", "journal": "", "ref_id": "b18", "title": "", "year": "2020" }, { "authors": "C W Leong; B B Klebanov; C Hamill; E Stemle; R Ubale; X Chen", "journal": "", "ref_id": "b19", "title": "A report on the 2020 vua and toefl metaphor detection shared task", "year": "2020" }, { "authors": " Mitkov", "journal": "", "ref_id": "b20", "title": "", "year": "2023" }, { "authors": "R Mitkov; H M Le An; T Ha; V Ranasinghe; Sosoni", "journal": "", "ref_id": "b21", "title": "Automatic generation of multiplechoice test items from paragraphs using deep neural networks", "year": "2023" }, { "authors": "Yannakoudakis Mu; Shutova", "journal": "", "ref_id": "b22", "title": "", "year": "2019" }, { "authors": "J Mu; H Yannakoudakis; E Shutova", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Learning outside the box: Discourse-level features improve metaphor identification", "year": "2019-06" }, { "authors": " Nacey", "journal": "", "ref_id": "b24", "title": "", "year": "2019" }, { "authors": "S Nacey; A G Dorst; T Krennmayr; W G Reijnierse", "journal": "John Benjamins Publishing Company", "ref_id": "b25", "title": "Metaphor identification in multiple languages: MIPVU around the world", "year": "2019" }, { "authors": "E Nissan", "journal": "", "ref_id": "b26", "title": "Multilingual lexis, semantics, and onomasiology. terminological database modelling, by using the cupros metarepresentation language: An xml-compatible xml-precursor enabling flexible nested-relation structures", "year": "2014" }, { "authors": "Evans Orăsan; C Mitkov ; ] Orăsan; R Evans; R Mitkov", "journal": "Intelligent Natural Language Processing: Trends and Applications", "ref_id": "b27", "title": "Intelligent text processing to help readers with autism", "year": "2018" }, { "authors": "K M Parsley", "journal": "People, Planet", "ref_id": "b28", "title": "Plant awareness disparity: A case for renaming plant blindness", "year": "2020" }, { "authors": "Lorite Peñas", "journal": "", "ref_id": "b29", "title": "", "year": "2019" }, { "authors": "J Peñas; J Lorite", "journal": "Universidad de Granada", "ref_id": "b30", "title": "Biología de la conservación de plantas en Sierra Nevada", "year": "2019" }, { "authors": " Premasiri", "journal": "", "ref_id": "b31", "title": "", "year": "2022" }, { "authors": "D Premasiri; A H Haddad; T Ranasinghe; R Mitkov", "journal": "", "ref_id": "b32", "title": "Transformer-based detection of multiword expressions in flower and plant names", "year": "2022-09" }, { "authors": "Ranasinghe ; Premasiri; D Premasiri; T Ranasinghe", "journal": "Premasiri et al", "ref_id": "b33", "title": "Bert (s) to detect multiword expressions", "year": "2022" }, { "authors": "D Premasiri; T Ranasinghe; W Zaghouani; R Mitkov", "journal": "European Language Resources Association", "ref_id": "b34", "title": "DTW at qur'an QA 2022: Utilising transfer learning with transformers for question answering in a low-resource domain", "year": "2022" }, { "authors": " Radford", "journal": "", "ref_id": "b35", "title": "", "year": "2018" }, { "authors": "A Radford; K Narasimhan; T Salimans; I Sutskever", "journal": "", "ref_id": "b36", "title": "Improving language understanding by generative pre-training", "year": "2018" }, { "authors": " Ranasinghe", "journal": "", "ref_id": "b37", "title": "", "year": "2021" }, { "authors": "T Ranasinghe; D Sarkar; M Zampieri; A Ororbia", "journal": "", "ref_id": "b38", "title": "WLV-RIT at SemEval-2021 task", "year": "2021" }, { "authors": "", "journal": "Association for Computational Linguistics", "ref_id": "b39", "title": "5: A neural transformer framework for detecting toxic spans", "year": null }, { "authors": "Zampieri Ranasinghe", "journal": "", "ref_id": "b40", "title": "", "year": "2021" }, { "authors": "T Ranasinghe; M Zampieri", "journal": "Association for Computational Linguistics", "ref_id": "b41", "title": "MUDES: Multilingual detection of offensive spans", "year": "2021-06" }, { "authors": "Zampieri Ranasinghe; Hettiarachchi", "journal": "", "ref_id": "b42", "title": "", "year": "2019" }, { "authors": "T Ranasinghe; M Zampieri; H Hettiarachchi", "journal": "", "ref_id": "b43", "title": "BRUMS at HASOC 2019: Deep Learning Models for Multilingual Hate Speech and Offensive Language Identification", "year": "2019" }, { "authors": "P Rastall", "journal": "English Today", "ref_id": "b44", "title": "Metaphor and the names of plants", "year": "1996" }, { "authors": " Razali", "journal": "", "ref_id": "b45", "title": "", "year": "2022" }, { "authors": "M S Razali; A A Halin; Y.-W Chow; N M Norowi; S Doraisamy", "journal": "", "ref_id": "b46", "title": "Deep and contextually engineered features for metaphor detection", "year": "2022" }, { "authors": "Rodríguez Penagos; Others", "journal": "", "ref_id": "b47", "title": "", "year": "2005" }, { "authors": "C Rodríguez Penagos", "journal": "", "ref_id": "b48", "title": "Metalinguistic information extraction from specialized texts to enrich computational lexicons", "year": "2005" }, { "authors": "Mendoza Ruiz De; ; Ibáñez; F J Ruiz De Mendoza Ibáñez", "journal": "Revista Española de Lingüística Aplicada/Spanish Journal of Applied Linguistics", "ref_id": "b49", "title": "Conceptual complexes in cognitive modeling", "year": "2017" }, { "authors": " Štajner", "journal": "", "ref_id": "b50", "title": "", "year": "2017" }, { "authors": "S Štajner; V Yaneva; R Mitkov; S P Ponzetto", "journal": "Association of Computational Linguistics", "ref_id": "b51", "title": "Effects of lexical properties on viewing time per word in autistic and neurotypical readers", "year": "2017" }, { "authors": " Su", "journal": "", "ref_id": "b52", "title": "", "year": "2020" }, { "authors": "C Su; F Fukumoto; X Huang; J Li; R Wang; Z Chen", "journal": "", "ref_id": "b53", "title": "Deepmet: A reading comprehension paradigm for token-level metaphor detection", "year": "2020" }, { "authors": "Tjong Kim; Sang ; De Meulder; ; Tjong; Kim Sang; E F ; F De Meulder", "journal": "", "ref_id": "b54", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": " Turney", "journal": "", "ref_id": "b55", "title": "", "year": "2011" }, { "authors": "P Turney; Y Neuman; D Assaf; Y Cohen", "journal": "", "ref_id": "b56", "title": "Literal and metaphorical sense identification through concrete and abstract context", "year": "2011" }, { "authors": "Urena Gomez-Moreno; Urena Faber; J M Gomez-Moreno; P Faber", "journal": "Metaphor and Symbol", "ref_id": "b57", "title": "Strategies for the semi-automatic retrieval of metaphorical terms", "year": "2010" }, { "authors": "Ranasinghe Uyangodage; Hettiarachchi", "journal": "", "ref_id": "b58", "title": "", "year": "2021" }, { "authors": "L Uyangodage; T Ranasinghe; H Hettiarachchi", "journal": "INCOMA Ltd", "ref_id": "b59", "title": "Can multilingual transformers fight the COVID-19 infodemic", "year": "2021-09" }, { "authors": " Vaswani", "journal": "", "ref_id": "b60", "title": "", "year": "2017" }, { "authors": "A Vaswani; N Shazeer; N Parmar; J Uszkoreit; L Jones; A N Gomez; Ł Kaiser; I Polosukhin", "journal": "Advances neural information processing systems", "ref_id": "b61", "title": "Attention is all you need", "year": "2017" }, { "authors": " Vitez", "journal": "", "ref_id": "b62", "title": "", "year": "2022" }, { "authors": "A Z Vitez; M Brglez; M Robnik-Šikonja; T Škvorc; A Vezovnik; S Pollak", "journal": "", "ref_id": "b63", "title": "Extracting and analysing metaphors in migration media discourse: towards a metaphor annotation scheme", "year": "2022" }, { "authors": "V Yaneva", "journal": "References", "ref_id": "b64", "title": "Assessing text and web accessibility for people with autism spectrum disorder", "year": "2016" }, { "authors": "Christopher Brickell", "journal": "Dorling Kindersley", "ref_id": "b65", "title": "Encyclopedia of Plants and Flowers", "year": "2012" }, { "authors": "Jean Vigneron; Pol", "journal": "Physical Review E", "ref_id": "b66", "title": "Optical Structure and Function of the White Filamentary Hair Covering the Edelweiss Bracts", "year": "2005-01-19" }, { "authors": "Lăcrămioara M Maghiar", "journal": "Ecology and Evolution", "ref_id": "b67", "title": "Integrating Demography Distribution Modeling for the Iconic Leontopodium Alpinum Colm. In the Romanian Carpathians", "year": "2021-08-25" }, { "authors": "J L Blanco-Pastor", "journal": "Molecular Ecology", "ref_id": "b68", "title": "Past and Future Demographic Dynamics of Alpine Species: Limited Genetic Consequences despite Dramatic Range Contraction in a Plant from the Spanish Sierra Nevada", "year": "2013-07-12" }, { "authors": "Lianghong Ni", "journal": "", "ref_id": "b69", "title": "Migration Patterns of Gentiana Crassicaulis, an Alpine Gentian Endemic to the Himalaya-Hengduan Mountains", "year": "2008" }, { "authors": "Gómez García; Daniel", "journal": "Diputación General de Aragón", "ref_id": "b70", "title": "Flora y Vegetación de La Jacetania", "year": "2004" }, { "authors": "M López Guadalupe", "journal": "Ars Pharmaceutica (Internet)", "ref_id": "b71", "title": "Comunidades, Hábitat Y Tipos de Suelos Sobre Los Que Se Desarrolla La Manzanilla de Sierra Nevada", "year": "1985" }, { "authors": "Francisco Pugnaire", "journal": "Ministerio de Agricultura, Ministerio de Agricultura, Alimentación y Medio Ambiente. Organismo Autónomo de Parques Nacionales", "ref_id": "b72", "title": "Facilitación de Las Especies Almohadilladas Y Cambio Global En Las Comunidades Alpinas Del Parque Nacional de Sierra Nevada", "year": "2015" }, { "authors": "P Montserrat; Balcells", "journal": "Sinergia (Publicación Paramédica de Sociedad General de Farmacia, SA)", "ref_id": "b73", "title": "LA FLORA DEL PIRINEO", "year": "1960" }, { "authors": "Paúl Gonzáles", "journal": "Bosques Andinos", "ref_id": "b74", "title": "Las Plantas Comunes Del Bosque Seco Del Marañón: Biodiversidad Para Las Comunidades Locales", "year": "" }, { "authors": "Gabriel Blanca", "journal": "", "ref_id": "b75", "title": "Flora Amenazada Endémica de Sierra Nevada", "year": "2001" }, { "authors": "Julio Peñas; Juan Lorite", "journal": "", "ref_id": "b76", "title": "Biología de La Conservación de Plantas En Sierra Nevada", "year": "2019" }, { "authors": "", "journal": "Orasconhu.org, Ministerio de Salud y Protección Social de Colombia", "ref_id": "b77", "title": "Protección Social de Colombia", "year": "2008" }, { "authors": "Sánchez De; Lorenzo Cáceres; Jose Manuel", "journal": "Ediciones Mundi-Prensa", "ref_id": "b78", "title": "Los Árboles En España: Manual de Identificación", "year": "1999" } ]
[ { "formula_coordinates": [ 6, 357.6, 711.14, 167.9, 13 ], "formula_id": "formula_0", "formula_text": "Recall = T P/(T P + F N )(2)" } ]
10.1145/3580305.3599768
2023-07-13
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b18", "b26", "b7", "b34", "b24", "b33", "b4", "b28", "b32", "b20", "b3", "b25", "b31", "b29" ], "table_ref": [], "text": "Recommender systems are a crucial tool for web applications, helping users to navigate the overwhelming amount of information available online. These systems provide personalized recommendations of items that users might be interested in, such as products on online retail platforms [19,27], posts on social networking sites [8,35], and video sharing platforms [25,34]. One of the most common approaches for generating these recommendations is collaborative filtering (CF), where the system uses the preferences of similar users or items to suggest new items for a given user [5,29].\nCollaborative filtering (CF) models have traditionally relied on matrix factorization (MF) to learn latent user and item embeddings from interaction data. However, with the rise of graph neural networks (GNNs), there has been a growing interest in using these models to propagate information along the user-item interaction graph and learn more sophisticated representations of user-item interactions. PinSage [33], NGCF [21], and LightGCN [4] are examples of GNN-based CF models that have shown promising results in personalized recommendations. These models use graph convolutional networks (GCNs) to propagate embeddings over the user-item interaction graph, allowing them to capture higher-order interactions between users and items that are not captured by other alternative CF models. In particular, PinSage and NGCF use multilayer GCNs to capture both local and global information about the user-item interaction graph, while LightGCN simplifies the message passing process by omitting the non-linear transformer and only using a simple weighted sum of the neighboring embeddings.\nGraph-based collaborative filtering models have become increasingly popular in recommender systems. However, these models face challenges that current techniques have not adequately addressed. One such challenge is data noise, which can arise due to various factors, such as users clicking on irrelevant products due to over-recommendation of popular items. Directly aggregating information from all interaction edges in the user-item interaction graph can lead to inaccuracies in user representations, and multihop embedding propagation can worsen the noise effect. Therefore, existing graph-based CF models may not accurately capture user interests and generate inaccurate recommendations. Furthermore, the sparsity and skewed distribution of recommendation data can negatively impact effective user-item interaction modeling. As a result, current approaches may suffer from the problem of user data scarcity, where high-quality training signals may be limited.\nRecently, some recommendation methods, such as SGL [26], SLRec [32] and HCCF [30], have leveraged self-supervised learning to improve user representations. These methods introduce additional supervision information by creating contrastive views through probability-based random masking or adding noise. However, these operations may keep some noisy interactions or drop important training signals during the data augmentation process, limiting the applicability and potential of contrastive learning.\nContribution. Given the limitations and challenges of existing solutions, we propose a novel Adaptive Graph Contrastive Learning (AdaGCL) framework to enhance the robustness and generalization performance of recommender systems. Our approach leverages adaptive contrastive learning to introduce high-quality training signals, empowering the graph neural CF paradigm. While several recent studies have used contrastive learning to improve model performance, they all require specific ways to create contrastive views. The selection of methods for creating contrastive views can be burdensome and often limited to a pool of prefabricated views, which can limit their potential and applicability. To address these issues, we integrate a graph generative model and a graph denoising model to establish views that adapt to the data distribution, achieving adaptive contrastive views for graph contrastive learning. By • AdaGCL employs two trainable view generators, namely a graph generator and a graph denoising model, to create contrastive views. These views address the problem of model collapse and enable adaptive views for contrastive learning, ultimately enhancing the effectiveness of the graph neural CF paradigm.\n• Our experimental results demonstrate that our AdaGCL outperforms various baseline models on multiple datasets, highlighting its superior performance and effectiveness. Furthermore, our approach is able to address the challenges of data noise and user data scarcity, which can negatively impact the accuracy of collaborative filtering models for recommendation." }, { "figure_ref": [], "heading": "PRELIMINARIES AND RELATED WORK 2.1 Collaborative Filtering Paradigm", "publication_ref": [], "table_ref": [], "text": "We let\nU = {𝑢 1 , • • • , 𝑢 𝑖 , • • • , 𝑢 𝐼 } (|U| = 𝐼 ) and V = {𝑣 1 , • • • , 𝑣 𝑗 , • • • , 𝑣 𝐽 } (|V | = 𝐽\n) represent the set of users and items, respectively. The interaction matrix A ∈ R I× J indicates the implicit relationships between each user in U and his/her consumed items. Each entry A 𝑖,𝑗 in A will be set as 1 if user 𝑢 𝑖 has adopted item 𝑣 𝑗 before and A 𝑖,𝑗 = 0 otherwise. Upon the constructed interaction graph structures, the core component of graph-based CF paradigm lies in the information aggregation function, gathering the feature embeddings of neighboring users/items via different aggregators, e.g., mean or sum. The objective of CF task is to forecast the unobserved user-item interactions with the encoded corresponding representations. The assumption of the collaborative filtering paradigm is that users who exhibit similar behavior are more likely to share similar interests. One popular paradigm of existing collaborative filtering (CF) approaches involves using various embedding functions to generate vectorized representations of users and items. The similarity matching function is then introduced to estimate the relevance score between a user 𝑢 𝑖 and a candidate item 𝑣 𝑗 ." }, { "figure_ref": [], "heading": "Graph-based Recommender Systems", "publication_ref": [ "b27", "b32", "b20", "b3", "b21", "b12", "b23", "b36", "b5", "b22" ], "table_ref": [], "text": "Graph neural architectures have become increasingly popular in recent years due to their ability to effectively model complex relationships between users and items in recommendation systems [28]. These architectures leverage graph embedding propagation techniques to encode the interactions between users and items in the form of graph embeddings. One important advantage of graph neural architectures is their ability to capture multi-hop connections between users and items. This allows the model to capture more complex and nuanced relationships between users and items. Some architectures, like PinSage [33] and NGCF [21], use graph convolutional networks in the spectral domain. Others, like Light-GCN [4], simplify the non-linear transformation and use sum-based pooling over neighboring representations for improved efficiency. These architectures encode each user and item into transformed embeddings while preserving multi-hop connections. Moreover, fine-grained graph-based relational learning techniques among users and items have been introduced in graph neural networks for user/item representations. Examples of these techniques include DGCF [22], DCCF [13], and DRAN [24]. These techniques aim to learn disentangled or behavior-aware user representations by exploiting the graph-structured multi-intent information. In addition to these techniques, graph neural networks have also been increasingly used in next-item recommendation tasks to capture the temporal dependencies between items and how a user's preferences for certain items evolve over time. Models such as DGSR [37], RetaGNN [6], and GCE-GNN [23] represent the user's historical interactions as a sequence of items and use graph-based message passing to update each item's embedding based on the information from its neighbors. This approach allows the models to capture the dependencies and relationships between items and how a user's preferences for certain items evolve over time, leading to more accurate and effective recommendations." }, { "figure_ref": [], "heading": "Self-Supervised Graph Learning", "publication_ref": [ "b6", "b37", "b38", "b25", "b10", "b25", "b10", "b39", "b1", "b14" ], "table_ref": [], "text": "Despite the success of supervised learning in many applications, obtaining a large labeled dataset can be a challenging and expensive task. To overcome this limitation, self-supervised learning (SSL) has emerged as a promising solution. In the context of graph machine learning, SSL has been shown to be effective for learning highquality representations of graph data. One of the recent advances in SSL is the use of contrastive learning with auxiliary training signals generated from various graph data, such as heterogeneous graph [7], spatio-temporal graph [38] and molecular graph [39]. SSL with contrastive learning has been shown to improve the quality of embeddings for graphs, leading to better performance on tasks, such as node classification and link prediction.\nSelf-supervised graph learning has also been introduced into recommender systems, to show great potential for enhancing representations of users and items with contrastive SSL [26] or generative SSL [11] techniques. One example of a self-supervised graph learning framework is SGL [26], which generates contrastive views of the user-item interaction graph using random node and edge dropout operations. By maximizing the agreement between the embeddings of the contrastive views, SSL signals can be incorporated into the model joint learning process. Another example is GFormer [11], which leverages the graph autoencoder to reconstruct the masked user-item interactions for augmentation. By generating augmented training data in this way, the model can learn more effective representations of users and items. Additionally, the use of self-supervised graph learning techniques has benefited a variety of recommendation scenarios. For example, S3-Rec [40] S3-Rec is based on a self-attentive neural architecture and uses four auxiliary self-supervised objectives to learn the correlations among various types of data, including attributes, items, subsequences. C2DSR [2] is a cross-domain sequential recommendation approach that proposes a contrastive cross-domain infomax objective to enhance the correlation between single-and cross-domain user representations. SLMRec [15] is a SSL approach for multimedia recommendation that captures multi-modal patterns in the data." }, { "figure_ref": [ "fig_0" ], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we introduce the AdaGCL framework, which is composed of three parts. The first part uses a graph message passing encoder to capture local collaborative relationships among users and items. The second part proposes a novel adaptive selfsupervised learning framework that includes two trainable view generators made of variational and denoising graph models. The third part introduces the phase of model optimization. The overall architecture of the AdaGCL model is illustrated in Figure 1." }, { "figure_ref": [], "heading": "Local Collaborative Relation Learning", "publication_ref": [ "b3" ], "table_ref": [], "text": "To encode the interaction patterns between users and items, we follow the common collaborative filtering paradigm by embedding them into a 𝑑-dimensional latent space. Specifically, we generate embedding vectors e 𝑖 and e 𝑗 of size R 𝑑 for user 𝑢 𝑖 and item 𝑣 𝑗 , respectively. We also define embedding matrices E (𝑢 ) ∈ R 𝐼 ×𝑑 and E (𝑣) ∈ R 𝐽 ×𝑑 to represent the embeddings of users and items, respectively. To propagate the embeddings, we design a local graph embedding propagation layer inspired by the simplified graph convolutional network used in LightGCN [4].\nz (𝑢 ) 𝑖 = Ā𝑖, * • E (𝑣) , z (𝑣) 𝑗 = Ā * ,𝑗 • E (𝑢 ) ,(1)\nTo represent the aggregated information from neighboring items/users to the central node 𝑢 𝑖 and 𝑣 𝑗 , we use the vectors z𝑖 (𝑢 ) and z𝑗 (𝑣) respectively, both having a dimension of R 𝑑 . We derive the normalized adjacent matrix Ā ∈ R 𝐼 ×𝐽 from the user-item interaction matrix A. Specifically, Ā is calculated using the following formula:\nĀ = D -1/2 (𝑢 ) • A • D -1/2 (𝑣) , Ā𝑖,𝑗 = A 𝑖,𝑗 √︁ |N 𝑖 | • |N 𝑗 | ,(2)\nThe diagonal degree matrices for users and items are D(𝑢) ∈ R 𝐼 ×𝐼 and D(𝑣) ∈ R 𝐽 ×𝐽 respectively. The neighboring items/users of user 𝑢 𝑖 and item 𝑣 𝑗 are denoted by N 𝑖 and N 𝑗 respectively.\nTo refine the user/item representations and aggregate local neighborhood information for contextual embeddings, we integrate multiple embedding propagation layers. We denote the embedding of user 𝑢 𝑖 and item 𝑣 𝑗 at the 𝑙-th graph neural network (GNN) layer as e𝑖, 𝑙 (𝑢 ) and e𝑗, 𝑙 (𝑣) respectively. We formally define the message passing process from the (𝑙 -1)-th layer to the 𝑙-th layer as follows:\ne (𝑢 ) 𝑖,𝑙 = z (𝑢 ) 𝑖,𝑙 + e (𝑢 ) 𝑖,𝑙 -1 , e (𝑣) 𝑗,𝑙 = z (𝑣) 𝑗,𝑙 + e (𝑣) 𝑗,𝑙 -1 .(3)\nTo obtain the embedding for a node, we sum its embeddings across all layers. The inner product between the final embedding of a user 𝑢 𝑖 and an item 𝑣 𝑗 is used to predict 𝑢 𝑖 's preference towards 𝑣 𝑗 : 𝑗 .\n(4) generate views in specific ways, such as randomly dropping edges, nodes, or constructing hypergraphs. However, selecting an appropriate method for generating views can be burdensome, as it often relies on tedious trial-and-error or a limited pool of prefabricated views. This limitation can restrict the applicability and potential of these methods. To overcome this issue, we propose using two learnable view generators to obtain adaptive views for GCL. Developing view generators for graph contrastive learning methods poses a challenge due to the risk of model collapse, where two views generated by the same generator share the same distribution, potentially leading to inaccurate contrastive optimization. To address this challenge, we propose using two distinct view generators that augment the user-item graph from different perspectives. Specifically, we employ a graph generative model and a graph denoising model as our two view generators. The graph generative model is responsible for reconstructing views based on graph distributions, while the graph denoising model leverages the graph's topological information to remove noise from the user-item graph and generate a new view with less noise." }, { "figure_ref": [ "fig_0" ], "heading": "Adaptive View Generators for Graph Contrastive Learning", "publication_ref": [ "b25", "b29", "b8", "b17", "b8" ], "table_ref": [], "text": "In line with existing self-supervised collaborative filtering (CF) paradigms, such as those proposed in [26,30], we use node selfdiscrimination to generate positive and negative pairs. Specifically, we treat the views of the same node as positive pairs (i.e., (e ′ 𝑖, e ′′ 𝑖)|𝑢 𝑖 ∈ U), and the views of any two different nodes as negative pairs (i.e., (e ′ 𝑖, e ′′ 𝑖 ′ )|𝑢 𝑖 , 𝑢 𝑖 ′ ∈ U, 𝑢 𝑖 ≠ 𝑢 𝑖 ′ ). Formally, the contrastive loss function that maximizes the agreement of positive pairs and minimizes that of negative pairs is as follows:\nL 𝑢𝑠𝑒𝑟 𝑠𝑠𝑙 = ∑︁ 𝑢 𝑖 ∈ U -log exp(𝑠 (e ′ 𝑖 , e ′′ 𝑖 )/𝜏) 𝑢 𝑖 ′ ∈ U exp(𝑠 (e ′ 𝑖 , e ′′ 𝑖 ′ /𝜏) ,(5)\nTo measure the similarity between two vectors, we use the cosine similarity function denoted by 𝑠 (•), with the hyper-parameter 𝜏 known as the temperature in softmax. We compute the contrastive loss for the item side as L 𝑖𝑡𝑒𝑚 𝑠𝑠𝑙 in a similar way. By combining these two losses, we obtain the objective function for the self-supervised task, which is denoted by L 𝑠𝑠𝑙 = L 𝑢𝑠𝑒𝑟 𝑠𝑠𝑙 + L 𝑖𝑡𝑒𝑚 𝑠𝑠𝑙 . 3.2.2 Graph Generative Model as View Generator. The recent emergence of learning-based graph generative models [9,18] provides a promising solution for view generator. In this study, we adopt the widely-used Variational Graph Auto-Encoder (VGAE) [9] as the generative model, which combines the concept of variational auto-encoder with graph generation. Compared to GAE, VGAE incorporates KL divergence to reduce the risk of overfitting, allowing for more diverse graphs to be generated by increasing the uncertainty. This feature provides a more challenging contrastive view for contrastive learning. Additionally, VGAE is relatively easier to train and faster than other currently popular generation models such as generative adversarial networks and diffusion models.\nAs illustrated in Fig. 1, we utilize a multi-layer GCN as the encoder to obtain the graph embeddings. Two MLPs are utilized to derive the mean value and the standard deviation of the graph embedding, respectively. With another MLP as the decoder, the input mean value and the standard deviation with Gaussian noise will be decoded to generate a new graph. The loss of VGAE is defined:\nL 𝑔𝑒𝑛 = L 𝑘𝑙 + L 𝑑𝑖𝑠 ,(6)\nThe term L 𝑘𝑙 refers to the Kullback-Leibler divergence (KL divergence) between the distribution of node embeddings and the standard Gaussian distribution. On the other hand, L 𝑑𝑖𝑠 is a crossentropy loss that quantifies the dissimilarities between the generated graph and the original graph." }, { "figure_ref": [], "heading": "3.2.3", "publication_ref": [], "table_ref": [], "text": "Graph Denoising Model as View Generator. GNN models use message passing mechanisms to propagate and aggregate information along the input graph to learn node representations. However, the quality of the input graph can heavily impact model performance since messages aggregated along noisy edges can decrease the quality of node embeddings. Therefore, for the second view generator, we aim to generate a denoising view that can enhance model performance against noisy data.\nTo improve the quality of node embeddings obtained after each layer of GCN, we propose a graph neural network that incorporates a denoising layer to filter out noisy edges in the input graph. This parameterized network is shown in Fig. 2. The main concept behind our approach is to actively filter out noisy edges in the input graph using a parameterized network. For the 𝑙-th GCN layer, we use a binary matrix\nM 𝑙 ∈ 0, 1 | V | × | V |\n, where 𝑚 𝑙 𝑖,𝑗 denotes whether the edge between node 𝑢 𝑖 and 𝑣 𝑗 is present (0 indicates a noisy edge).\nFormally, the adjacency matrix of the resulting subgraph is A 𝑙 = A ⊙ M 𝑙 , where ⊙ is the element-wise product. The straightforward idea to reduce noisy edges with the least assumptions about A 𝑙 is to penalize the number of non-zero entries in M 𝑙 of different layers.\n𝐿 ∑︁ 𝑙=1 ||M 𝑙 || 0 = 𝐿 ∑︁ 𝑙=1 ∑︁ (𝑢,𝑣) ∈𝜀 I[𝑚 𝑙 𝑖,𝑗 ≠ 0],(7)\nwhere , where 𝑓 𝑙 𝜃 𝑙 is an MLP parameterized by 𝜃 𝑙 . In order to get 𝑚 𝑙 𝑖,𝑗 , we also utilize the concrete distribution along with a hard sigmoid function. Within the above formulation, the constraint on the number of non-zero entries in M 𝑙 in Eq. ( 7) can be reformulated with:\nI[•]\nL 𝑐 = 𝐿 ∑︁ 𝑙=1 ∑︁ (𝑢 𝑖 ,𝑣 𝑗 ) ∈𝜀 (1 -P 𝜎 (𝑠 𝑙 𝑖,𝑗 ) (0|𝜃 𝑙 )),(8)\nwhere P 𝜎 (𝑠 𝑙 𝑖,𝑗 ) is the cumulative distribution function (CDF) of 𝜎 (𝑠 𝑙 𝑖,𝑗 ), 𝜎 (•) extends the range of 𝑠 𝑙 𝑖,𝑗 , and 𝑠 𝑙 𝑖,𝑗 is drawn from a binary concrete distribution with 𝛼 𝑙 𝑖,𝑗 parameterizing the location." }, { "figure_ref": [], "heading": "Learning Task-aware View Generators", "publication_ref": [], "table_ref": [], "text": "Although two view generators could learn to generate better views from different aspects, there may be no optimization signals to adjust generated views to the main CF task. The straightforward idea is introducing commonly-used BPR loss, as follows:\nL 𝑏𝑝𝑟 = ∑︁ (𝑢,𝑖,𝑗 ) ∈ O -log𝜎 ( ŷ𝑢𝑖 -ŷ𝑢 𝑗 ),(9)\nThe \nL 𝑔𝑒𝑛 = L 𝑘𝑙 + L 𝑑𝑖𝑠 + L 𝑔𝑒𝑛 𝑏𝑝𝑟 + 𝜆 2 ||Θ|| 2 F , (10\n)\nwhere Θ is the set of model parameters, while 𝜆 2 is a hyperparameter used to control the strength of the weight-decay regularization.\nTo train the graph denoising model, we use the node embeddings obtained by the denoising neural network to compute the BPR loss. The loss function L 𝑑𝑒𝑛 is updated as follows:\nL 𝑑𝑒𝑛 = L 𝑐 + L 𝑑𝑒𝑛 𝑏𝑝𝑟 + 𝜆 2 ||Θ|| 2 F .(11)" }, { "figure_ref": [], "heading": "Model Training", "publication_ref": [ "b10" ], "table_ref": [], "text": "The training of our proposed model consists of two parts. In the upper-level part, we adopt a multi-task training strategy to jointly optimize the classic recommendation task (Eq. ( 9)) and the selfsupervised learning task (Eq. ( 5)):\nL 𝑢𝑝𝑝𝑒𝑟 = L 𝑏𝑝𝑟 + 𝜆 1 L 𝑠𝑠𝑙 + 𝜆 2 ||Θ|| 2 F ,(12)\nwhere Θ refers to the set of model parameters in the main task, which in this work, is the set of parameters of LightGCN. Additionally, 𝜆 1 and 𝜆 2 are hyperparameters that control the strengths of SSL and 𝐿 2 regularization, respectively. The lower-level part of the training involves optimizing the generative and denoising view generators based on Eq. ( 10) and Eq. (11), which is formally presented as follows: \nL 𝑙𝑜𝑤𝑒𝑟 = L 𝑔𝑒𝑛 + L 𝑑𝑒𝑛 . (13\n)" }, { "figure_ref": [], "heading": "EVALUATION", "publication_ref": [], "table_ref": [], "text": "To evaluate the effectiveness of our proposed model, our experiments are designed to answer the following research questions:\n• RQ1: What is the performance of our proposed model compared to various state-of-the-art recommender systems?\n• RQ2: How do the key components of our proposed model contribute to its overall performance on different datasets?\n• RQ3: How well can our proposed model handle noisy and sparse data compared to baseline methods?\n• RQ4: How do the key hyperparameters influence the performance of our proposed model framework?" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "4.1.1 Evaluation Datasets. We conduct experiments on three datasets collected from online applications, Last.FM, Yelp, and Beer-Advocate. The statistics of these datasets are shown in Table 1.\n• Last.FM: This dataset contains social networking, tagging, and music artist listening information collected from a set of users from the Last.fm online music system.\n• Yelp: This commonly-used dataset contains user ratings on business venues collected from the Yelp platform. It is a valuable resource for studying user preferences and behavior in the context of personalized venue recommendations.\n• BeerAdvocate: This dataset contains beer reviews from BeerAdvocate. We process it using the 10-core setting by keeping only users and items with at least 10 interactions." }, { "figure_ref": [], "heading": "Evaluation Protocols.", "publication_ref": [ "b3", "b25", "b9", "b4", "b13", "b0", "b32", "b20", "b35", "b3", "b2", "b29", "b30", "b31", "b25", "b11", "b16" ], "table_ref": [], "text": "We follow the recent collaborative filtering models [4,26] and split the datasets by 7:2:1 into training, validation, and testing sets. We adopt the all-rank evaluation protocol, where for each test user, the positive items in the test set and all the non-interacted items were tested and ranked together. We employ the commonly-used Recall@N and Normalized Discounted Cumulative Gain (NDCG)@N as evaluation metrics for recommendation performance evaluation. We set N to 20 by default. • BiasMF [10]: It is a matrix factorization method that aims to enhance user-specific preferences for recommendation by incorporating bias vectors for users and items.\n• NCF [5]: It is a neural network-based method that replaces the dot-product operation in conventional matrix factorization with multi-layer neural networks. This allows the model to capture complex user-item interactions and provide recommendations.\nFor our comparison, we utilize the NeuMF variant of NCF.\n• AutoR [14]: It is a method that improves the user/item representations by using a three-layer autoencoder trained under the supervision of an interaction reconstruction task.\n• GCMC [1]: This work utilizes graph convolutional networks (GCNs) for interaction matrix completion.\n• PinSage [33]: It is a graph convolutional-based method that employs random sampling in the graph convolutional framework to enhance the collaborative filtering task.\n• NGCF [21]: It uses a multi-layer graph convolutional network to propagate information through the user-item interaction graph and learn the latent representations of users and items.\n• STGCN [36]: It combines graph convolutional encoders with graph autoencoders to enhance the model's robustness against sparse and cold-start samples in collaborative filtering tasks.\n• LightGCN [4]: This model leverages the power of neighborhood information in the user-item interaction graph by using a layer-wise propagation scheme that involves only linear transformations and element-wise additions.\n• GCCF [3]: It presents a new approach to collaborative filtering recommender systems by revisiting graph convolutional networks. It removes non-linear activations and introduces a residual network structure that alleviates the over-smoothing problem.\n• HCCF [30]: A new self-supervised recommendation framework is proposed in this work, which is able to capture both local and global collaborative relations using a hypergraph neural networks enhanced by cross-view contrastive learning architecture.\n• SHT [31]: It integrates hypergraph neural networks and transformer under a self-supervised learning paradigm for data augmentation to denoise user-item interactions in recommendation.\n• SLRec [32]: It integrates contrastive learning between node features as regularization terms in order to improve the efficacy of current collaborative filtering recommender systems.\n• SGL [26]: The model augments LightGCN with self-supervised contrastive learning by conducting data augmentation through random walk and node/edge dropout to corrupt graph structures.\n• NCL [12]: This is a neighborhood-enriched contrastive learning approach that enhances graph collaborative filtering by incorporating potential neighbors into contrastive pairs. NCL introduces structural and semantic neighbors of a user or item, developing a structure-contrastive and a prototype-contrastive objective.\n• DirectAU [17]: This new approach proposes a new learning objective for collaborative filtering methods that measures the representation quality based on alignment and uniformity on the hypersphere. It directly optimizes these two properties to improve recommendation performance." }, { "figure_ref": [], "heading": "Overall Performance Comparison (RQ1)", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "The effectiveness of the proposed AdaGCL is validated through an overall performance evaluation on three datasets, comparing it with various baselines. To ensure statistical significance, the authors retrained AdaGCL and the best-performing baseline five times and computed p-values. The results are presented in Table 2.\n• The evaluation results indicate that AdaGCL outperforms the baselines under both top-20 and top-40 settings, and the t-tests validate the significance of the observed performance improvements. The superior performance of AdaGCL can be attributed to the effectiveness of the proposed contrastive learning frameworks for data augmentation over user-item interactions. The use of adaptive view generators ensures that informative and diverse contrastive views are generated. This, in turn, leads to more effective learning of user and item embeddings, resulting in better recommendations. Overall, these findings demonstrate the effectiveness of the proposed contrastive learning approach for collorative filtering and highlight the importance of designing effective data augmentation techniques for this task.\n• The evaluation results demonstrate that self-supervised learning improves existing CF frameworks, such as SLRec, SGL, and NCL. This improvement can be attributed to incorporating an augmented learning task, which provides beneficial regularization based on the input data. For example, SLRec and SGL use stochastic data augmentation to generate multiple views, while NCL incorporates potential neighbors into contrastive pairs. However, " }, { "figure_ref": [], "heading": "Model Ablation Test (RQ2)", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "We conducted extensive experiments to validate the effectiveness of the proposed methods by removing three applied techniques in AdaGCL individually: the adaptive view generators, the task-aware optimization for view generators, and the denoising view generator.\nTo evaluate the efficacy of the proposed generative and denoising generators for view generation, we compare them to existing random augmentation method. Specifically, an ablated version of AdaGCL is trained using the random edge drop augmentation (EdgeD). Additionally, we replace the denoising view generator with an identical VGAE-based generator (Gen+Gen), to study the importance of denoising in the view generation process. Furthermore, we replace the task-aware optimization with the original reconstruction objective (w/o Task), to investigate the necessity of introducing task-relevant information into model training. The variants are retrained and tested on the three datasets. The results are presented in Table 3, from which we draw the following major conclusions: • Advantage of adaptive view generators. The results presented in Table 3 demonstrate that using the random-permutation-based contrastive view generator (EdgeD) leads to a significant decay in performance compared to the proposed AdaGCL approach. This suggests that random augmentation methods may not be sufficient for generating informative contrastive views in CF.\nIn contrast, the adaptive learning ability of the generative view based on VGAE and the denoising ability of the explicit denoising network in AdaGCL are critical for achieving superior performance. The generative view preserves the key patterns of the original data by modeling the graph-based user-item interaction structures, while the denoising network filters out noise signals that may interfere with the contrastive learning process.\n• Benefit of denoising view generator. We conduct additional tests on a modified version of our model to further study the effectiveness of our designed adaptive view generators. Specifically, we remove the denoising view generator (referred to as the Gen+Gen variant). The results show that, while the VGAEbased view provide adaptive data augmentations that benefit contrastive learning, it is not enough to eliminate the inherent data noise. Our AdaGCL addresses this issue by incorporating the denoising view into the contrastive learning process, resulting in significant performance improvements.\n• Effectiveness of the task-aware optimization. The results show that the w/o Task variant performs worse than the proposed AdaGCL on all three datasets. This suggests that using a general-purpose auto-encoding loss and denoising loss for contrastive view generator training may not be sufficient for achieving optimal performance in CF. Instead, introducing BPR loss for task-aware view generator training leads to better performance. This highlights the importance of incorporating task-aware information to guide the training of view generators, which can help to capture more relevant user-item interaction patterns and improve the quality of the generated contrastive views." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "Model Robustness Test (RQ3)", "publication_ref": [], "table_ref": [], "text": "In this section, our experiments show that our proposed approach, AdaGCL, exhibits superior robustness against data noise, and is effective in handling sparse user-item interaction data.\n4.4.1 Performance w.r.t. Data Noise Degree. We investigate the robustness of our approach, AdaGCL, against data noise in recommendation systems. To evaluate the impact of noise on our model's performance, we randomly replace different percentages of real edges with fake edges and retrain the model using the corrupted graphs as input. Concretely, we replace 5%, 10%, 15%, 20%, and 25% of the interaction edges with fake edges in our experiments. We compare AdaGCL's performance with two other models, LightGCN and SGL. To better understand the effect of noise on performance degradation, we evaluate the relative performance compared to the performance on the original data, and present the results in Fig. 3. Our observations indicate that AdaGCL exhibits smaller performance degradation in most cases compared to the baselines. We attribute this observation to two reasons: First, the selfsupervised learning task employed by AdaGCL distills information from two adaptive contrastive views to refine the graph embeddings. This observation is supported by the stronger robustness of the self-supervised method SGL compared to LightGCN. Second, both view generators used in our approach are capable of generating a contrastive view with less noise and more task-related information. Additionally, we find that the relative performance degradation on the Yelp dataset is more apparent compared to the other two datasets. This finding is because noisy data has a larger influence on the performance of models on sparse datasets like Yelp, which is the sparest dataset in our experiments. Overall, our results suggest that AdaGCL is a robust and effective model for recommendation systems, even in the presence of data noise. set, with the first group in the user-side experiments containing users interacting with 0-10 items and the first group in the item-side experiments containing items interacting with 0-5 users. Fig. 4 illustrates the recommendation accuracy for our AdaGCL and the two compared methods. Our findings highlight the following: First, AdaGCL exhibits consistently superior performance on datasets with different sparsity degrees, indicating its robustness in handling sparse data for both users and items. We attribute this advantage to our adaptive contrastive view pair, which provides highquality self-supervised signals that mitigate the negative effects of data sparsity. Second, the sparsity of item interaction vectors has a more significant influence on model performance across all the methods. Overall, our experiments demonstrate the effectiveness of AdaGCL in handling sparse user-item interaction data." }, { "figure_ref": [ "fig_7" ], "heading": "Hyperparameter Analysis (RQ4)", "publication_ref": [], "table_ref": [], "text": "In this section, the authors investigate the sensitivity of their proposed model to the key hyperparameter 𝜆 1 for InfoNCE loss, which controls the strength of contrastive learning. Specifically, the weight 𝜆 1 is searched in the range of (1, 1𝑒 -1 , 1𝑒 -2 , 1𝑒 -3 , 1𝑒 -4 ) to explore its impact on the model's performance. The results are presented in Figure 5, which shows the model's performance on the Last.FM and Yelp datasets with different values of 𝜆 1 . It is observed that the best performance is achieved with 𝜆 1 = 1𝑒 -1 and 𝜆 1 = 1. This suggests that a large value of 𝜆 1 may overly emphasize the contrastive optimization loss." }, { "figure_ref": [ "fig_9", "fig_11", "fig_9", "fig_11", "fig_11", "fig_11", "fig_11", "fig_11", "fig_11", "fig_9", "fig_9", "fig_9", "fig_11", "fig_11" ], "heading": "Embedding Visualisation Analysis", "publication_ref": [ "b15" ], "table_ref": [], "text": "In this section, we conduct an embedding visualization analysis with the representations encoded from our proposed approach, AdaGCL, and the baseline SGL to gain insight into the benefits of our model. As previously mentioned, SGL uses random data augmentation methods to create contrastive views, which can result in poor performance when dealing with noisy data. The added noises may unintentionally cause damage to contrastive views. Furthermore, SGL employs the same data augmentation methods on both contrastive views, leading to the issue of model collapse since the two views can easily have a similar distribution.\nTo validate the effectiveness of our method in addressing these limitations, we visualize the embeddings of the two contrastive views given by AdaGCL and SGL. We randomly sample 2,000 nodes from the Yelp dataset and map their embeddings in the three views (i.e., one main view and two contrastive views) to the 2-D space with t-SNE [16]. We employ the KMeans algorithm to cluster the nodes based on their compressed 2-D embeddings and color them with different colors. To highlight the impact of noisy data on SGL and AdaGCL, we also visualize the embeddings of polluted data, where 25% of the edges are replaced with fake edges. The visualization results are shown in Fig. 6 and Fig. 7, respectively. Note that in Fig. 6, View 1 and View 2 are generated by the graph generative model and the graph denoising model, respectively.\n4.6.1 Effectiveness of Adaptive View Generators. As shown in Fig. 7(a), SGL learns a large cloud of evenly-distanced embeddings with a few clear community structures to capture the collaborative relations among nodes. This is because random edge dropping tends to generate contrastive views with uniform distributions, as shown in Fig. 7(b) and Fig. 7(c). Furthermore, SGL's two contrastive views show more similar distributions compared to our method. In contrast, our AdaGCL is based on two adaptive view generators that can generate more informative and diverse views of the data. By adaptively adjusting the views, our method is able to capture more complex and nuanced structures of the graph, resulting in more distinct embeddings with better clustering effects. Furthermore, our AdaGCL demonstrates better robustness when dealing with noisy data compared to SGL. The visualization results of the three views of SGL (i.e., Fig. 7(d), Fig. 7(e), and Fig. 7(f)) show severe over-uniform distributions. When dealing with noisy data, SGL can produce embeddings with uniform distributions, which can result in a loss of unique collaborative patterns in the embeddings and negatively impact the performance of the method. In contrast, according to the visual results of the three views of our AdaGCL (i.e., Fig. 6(d), Fig. 6(e), and Fig. 6(f)), our method is more robust to noisy data. This is because our method adaptively adjusts the views to capture the most informative and discriminative aspects of the data, and is therefore less affected by noise in the input. This is because the denoised graph contains more informative and discriminative signals about the graph structure corresponding to complex user-item interaction patterns, making it more robust to noise. Moreover, our graph denoising model creates better denoised views compared to the contrastive views in SGL (i.e., Fig. 7(e) and Fig. 7(f)), validating its effectiveness in graph denoising. By incorporating the denoised graph as augmented view, the captured more informative signals resulting in more robust user representations." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [ "b19" ], "table_ref": [], "text": "In this work, we propose a novel approach to improving contrastive recommender systems through the use of adaptive view generators. Specifically, we introduce a new recommendation framework, AdaGCL, which utilizes a graph generative model and a graph denoising model to create contrastive views, allowing for more effective user-item interaction modeling with self-augmented supervision signals. Our framework demonstrates improved robustness against noise perturbation, thereby enhancing the overall performance of graph-based recommender systems. Through extensive experimentation on multiple datasets, we have shown that our proposed AdaGCL, outperforms several competitive baselines, providing validation for its superiority in contrastive recommenders.\nMoving forward, an important area of research would be to extend our framework to explore casual factors for contrastive self-supervised learning signals in recommender systems. This involves leveraging causal inference techniques [20] to improve the interpretability of the self-supervised learning signals used in contrastive learning. By accounting for the underlying causal relationships between user behaviors, we can design more effective and informative self-supervised learning objectives that better capture the nuances of user-item interactions. Additionally, we may investigate the transferability of our model by exploring transfer learning techniques, such as domain adaptation and multi-task learning." } ]
Graph neural networks (GNNs) have recently emerged as an effective collaborative filtering (CF) approaches for recommender systems. The key idea of GNN-based recommender systems is to recursively perform message passing along user-item interaction edges to refine encoded embeddings, relying on sufficient and highquality training data. However, user behavior data in practical recommendation scenarios is often noisy and exhibits skewed distribution. To address these issues, some recommendation approaches, such as SGL, leverage self-supervised learning to improve user representations. These approaches conduct self-supervised learning through creating contrastive views, but they depend on the tedious trial-and-error selection of augmentation methods. In this paper, we propose a novel Adaptive Graph Contrastive Learning (AdaGCL) framework that conducts data augmentation with two adaptive contrastive view generators to better empower the CF paradigm. Specifically, we use two trainable view generators -a graph generative model and a graph denoising model -to create adaptive contrastive views. With two adaptive contrastive views, AdaGCL introduces additional high-quality training signals into the CF paradigm, helping to alleviate data sparsity and noise issues. Extensive experiments on three real-world datasets demonstrate the superiority of our model over various state-of-the-art recommendation methods. Our model implementation codes are available at the link https://github.com/HKUDS/AdaGCL.
Adaptive Graph Contrastive Learning for Recommendation
[ { "figure_caption": "SigmoidFigure 1 :1Figure 1: Overall framework of the proposed AdaGCL model.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "3. 2 . 1 Figure 2 :212Figure 2: Workflow of the graph denoising model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "212", "figure_type": "figure" }, { "figure_caption": "training data is represented by O = (𝑢, 𝑖, 𝑗)|(𝑢, 𝑖) ∈ O + , (𝑢, 𝑗) ∈ O -, where O + denotes the observed interactions and O -= U × I/O + denotes the unobserved interactions.To train the graph generative model, we use the node embeddings encoded by the VGAE encoder to compute BPR loss. The loss function L 𝑔𝑒𝑛 is then updated as follows:", "figure_data": "", "figure_id": "fig_3", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "4. 4 . 242Performance w.r.t. Data Sparsity. We also investigate the influence of data sparsity on model performance from both user 0.00 0.05 0.10 0.15 0.20 0", "figure_data": "", "figure_id": "fig_4", "figure_label": "42", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Relative performance degradation w.r.t. noise ratio.We introduce varying levels of noise by replacing 5%, 10%, 15%, 20%, and 25% of the interaction edges with fake edges.", "figure_data": "", "figure_id": "fig_5", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Performance w.r.t different sparsity degrees of interaction data for users and items, respectively, on Yelp dataset.We divide users and items into several groups based on the number of interactions they had in the dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Hyperparameter Analysis on Last.FM and Yelp.", "figure_data": "", "figure_id": "fig_7", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: View embedding visualization for AdaGCL.", "figure_data": "", "figure_id": "fig_9", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "(a) Main View (b) CL View 1 (c) CL View 2 (d) Noisy Main View (e) Noisy CL View 1 (f) Noisy CL View 2", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: View embedding visualization for SGL.", "figure_data": "", "figure_id": "fig_11", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "4. 6 . 262Effectiveness of the Graph Denoising Model. To improve the robustness of our AdaGCL to noise and avoid the issue of model collapse, we design a graph denoising module as the second view generator. As shown in Fig.6(c) and Fig.6(f), the contrastive view created by the graph denoising component is less affected by noise compared to the other view pair (i.e., Fig.6(b) and Fig.6(e)).", "figure_data": "", "figure_id": "fig_12", "figure_label": "62", "figure_type": "figure" }, { "figure_caption": "providing two different and adaptive views, we offer additional high-quality training signals that can enhance the graph neural CF paradigm and help address the problem of model collapse in contrastive learning-based data augmentation. In summary, this paper makes the following contributions: • We propose a novel self-supervised recommendation model, called AdaGCL, that enhances the robustness of the graph CF by distilling additional training signals from adaptive contrastive learning.", "figure_data": "", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "is an indicator function, with I[𝑇𝑟𝑢𝑒] = 1 and I[𝐹𝑎𝑙𝑠𝑒] = 0, || • || 0 represents the 𝑙 0 norm.However, because of its combinatorial and non-differentiability nature, optimizing this penalty is computationally intractable. Therefore, we consider each binary number 𝑚 𝑙 𝑖,𝑗 to be drawn from a Bernoulli distribution parameterized by 𝜋 𝑙 𝑖,𝑗 , i.e., 𝑚 𝑙 𝑖,𝑗 ∼ Bern(𝜋 𝑙 𝑖,𝑗 ). Here, 𝜋 𝑙 𝑖,𝑗 describes the quality of the edge (𝑢, 𝑣). To efficiently optimize subgraphs with gradient methods, we adopt the reparameterization trick and relax the binary entries 𝑚 𝑙 𝑖,𝑗 from being drawn from a Bernoulli distribution to a deterministic function 𝑔 of parameters 𝛼 𝑙 𝑖,𝑗 ∈ R and an independent random variable 𝜀 𝑙 . That is 𝑚 𝑙 𝑖,𝑗 = 𝑔(𝛼 𝑙 𝑖,𝑗 , 𝜀 𝑙 ). Based on above operations, we design a denoising layer to learn the parameter 𝛼 𝑙 𝑖,𝑗 that controls whether to remove the edge (𝑢, 𝑣). For the 𝑙-th GNN layer, we calculate 𝛼 𝑙 𝑖,𝑗 for user node 𝑢 and its interacted item node 𝑣 with 𝛼 𝑙 𝑖,𝑗 = 𝑓 𝑙 𝜃 𝑙 (e 𝑙 𝑖 , e 𝑙 𝑗 )", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Statistics of the experimental datasets.We analyze the time complexity of our proposed model by considering its three key components. Firstly, the local collaborative relation learning module takes 𝑂 (𝐿 × |A| ×𝑑) time, which is the same as that of LightGCN. Here, 𝐿 denotes the number of graph neural layers, |A| is the number of edges in the user-item interaction graph, and 𝑑 denotes the embedding dimensionality. Secondly, the graph generative model (VGAE) costs 𝑂 (|A| × 𝑑 2 ) time. Thirdly, the denoising layers in the graph denoising model cost 𝑂 (𝐿 × |A| × 𝑑 2 ) time. Finally, the contrastive learning paradigm costs 𝑂 (𝐿 × 𝐵 × (𝐼 + 𝐽 ) ×𝑑), where 𝐵 denotes the number of users/items included in a single batch. 𝐼 and 𝐽 denote the number of users and items, respectively.", "figure_data": "DatasetUser # Item # Interaction #DensityLast.FM1,89217,63292,8342.8 × 10 -3Yelp42,71226,822182,3571.6 × 10 -4BeerAdvocate 10,45613,8451,381,0949.5 × 10 -33.5 Time Complexity Analysis", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Performance comparison on Last.FM, Yelp, BeerAdvocate datasets in terms of Recall and NDCG. Compared Baseline Methods. We evaluate our proposed AdaGCL by comparing it with various baselines for comprehensive evaluation. The details of the baselines are as follows.", "figure_data": "DatasetMetricBiasMF NCF AutoR PinSage STGCN GCMC NGCF GCCF LightGCN SLRec NCLSGL HCCF SHT DirectAU Ours p-val.Recall@200.18790.1130 0.15180.16900.20670.2218 0.2081 0.22220.23490.1957 0.2353 0.2427 0.2410 0.24200.24220.2603 2.1𝑒 -5Last.FMNDCG@20 Recall@400.1362 0.26600.0795 0.1114 0.1693 0.21740.1228 0.24020.1528 0.29400.1558 0.1474 0.1642 0.3149 0.2944 0.30830.1704 0.32200.1442 0.1715 0.1761 0.1773 0.1770 0.2792 0.3252 0.3405 0.3232 0.32350.1727 0.33560.1911 9.5𝑒 -5 0.3531 6.9𝑒 -5NDCG@400.16530.0952 0.13360.14720.18210.1897 0.1829 0.19310.20220.1737 0.2033 0.2104 0.2051 0.20550.20420.2204 5.6𝑒 -4Recall@200.05320.0304 0.04910.05100.05620.0584 0.0681 0.07420.07610.0665 0.0806 0.0803 0.0789 0.07940.08180.0873 1.5𝑒 -6YelpNDCG@20 Recall@400.0264 0.08020.0143 0.0222 0.0487 0.06920.0245 0.07430.0282 0.08560.0280 0.0336 0.0365 0.0891 0.1019 0.11510.0373 0.11750.0327 0.0402 0.0398 0.0391 0.0395 0.1032 0.1230 0.1226 0.1210 0.12170.0424 0.12260.0439 1.8𝑒 -8 0.1315 3.2𝑒 -6NDCG@400.03210.0187 0.02680.03150.03550.0360 0.0419 0.04660.04740.0418 0.0505 0.0502 0.0492 0.04970.05240.0548 2.7𝑒 -7Recall@200.09960.0729 0.08160.09300.10030.1082 0.1033 0.10350.11020.1048 0.1131 0.1138 0.1156 0.11500.11820.1216 7.7𝑒 -6BeerAdvocateNDCG@20 Recall@400.0856 0.16020.0654 0.0650 0.1203 0.13250.0816 0.15530.0852 0.16500.0901 0.0873 0.0901 0.1766 0.1653 0.16620.0943 0.17570.0881 0.0971 0.0959 0.0990 0.0977 0.1723 0.1819 0.1776 0.1847 0.17990.0981 0.17970.1015 4.9𝑒 -3 0.1867 1.3𝑒 -2NDCG@400.10160.0754 0.07940.09800.10310.1085 0.1032 0.10620.11130.1068 0.1150 0.1122 0.1176 0.11560.11390.1182 2.4𝑒 -14.1.3", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study on key components of AdaGCL. In contrast, AdaGCL has two main advantages. First, it does not rely on random data augmentation to generate contrastive views, instead using two adaptive view generators to create reasonable views that retain useful information. The generative view captures the key patterns of the original data, while the denoising generator filters out noise signals that may interfere with the contrastive learning process. Second, AdaGCL addresses the problem of model collapse in contrastive learning by creating contrastive views from different aspects with two different generators. The generative and denoising views capture different aspects of the input data, ensuring that the learned representations are diverse and informative. The superior performance of AdaGCL compared to the baseline selfsupervised approaches validates the effectiveness of this new self-supervised learning paradigm for CF.", "figure_data": "CategoryData Variants Recall NDCG Recall NDCG Recall NDCG Last.FM Yelp BeerAdvocateAdaptivew/o Task 0.2562 0.1868 0.0849 0.0425 0.1212 0.1010 Gen+Gen 0.2494 0.1819 0.0853 0.0429 0.1187 0.0992Random EdgeD 0.2476 0.1794 0.0852 0.0424 0.1163 0.0964AdaGCL0.2603 0.1911 0.0873 0.0439 0.1216 0.1015these methods may lose useful signals that reflect important user-item interaction patterns.", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Yangqin Jiang; Chao Huang; Lianghao Xia
[ { "authors": "Rianne Van Den; Thomas N Berg; Max Kipf; Welling", "journal": "", "ref_id": "b0", "title": "Graph convolutional matrix completion", "year": "2017" }, { "authors": "Jiangxia Cao; Xin Cong; Jiawei Sheng; Tingwen Liu; Bin Wang", "journal": "", "ref_id": "b1", "title": "Contrastive Cross-Domain Sequential Recommendation", "year": "2022" }, { "authors": "Lei Chen; Le Wu; Richang Hong; Kun Zhang; Meng Wang", "journal": "", "ref_id": "b2", "title": "Revisiting graph based collaborative filtering: A linear residual graph convolutional network approach", "year": "2020" }, { "authors": "Xiangnan He; Kuan Deng; Xiang Wang; Yan Li; Yongdong Zhang; Meng Wang", "journal": "", "ref_id": "b3", "title": "Lightgcn: Simplifying and powering graph convolution network for recommendation", "year": "2020" }, { "authors": "Xiangnan He; Lizi Liao; Hanwang Zhang; Liqiang Nie; Xia Hu; Tat-Seng Chua", "journal": "", "ref_id": "b4", "title": "Neural collaborative filtering", "year": "2017" }, { "authors": "Cheng Hsu; Cheng-Te Li", "journal": "", "ref_id": "b5", "title": "Retagnn: Relational temporal attentive graph neural networks for holistic sequential recommendation", "year": "2021" }, { "authors": "Dasol Hwang; Jinyoung Park; Sunyoung Kwon; Kyungmin Kim; Jung-Woo Ha; Hyunwoo J Kim", "journal": "", "ref_id": "b6", "title": "Self-supervised auxiliary learning with meta-paths for heterogeneous graphs", "year": "2020" }, { "authors": "Mohsen Jamali; Martin Ester", "journal": "", "ref_id": "b7", "title": "A matrix factorization technique with trust propagation for recommendation in social networks", "year": "2010" }, { "authors": "N Thomas; Max Kipf; Welling", "journal": "", "ref_id": "b8", "title": "Variational graph auto-encoders", "year": "2016" }, { "authors": "Yehuda Koren; Robert Bell; Chris Volinsky", "journal": "Computer", "ref_id": "b9", "title": "Matrix factorization techniques for recommender systems", "year": "2009" }, { "authors": "Chaoliu Li; Lianghao Xia; Xubin Ren; Yaowen Ye; Yong Xu; Chao Huang", "journal": "", "ref_id": "b10", "title": "Graph Transformer for Recommendation", "year": "2023" }, { "authors": "Zihan Lin; Changxin Tian; Yupeng Hou; Wayne Xin Zhao", "journal": "", "ref_id": "b11", "title": "Improving Graph Collaborative Filtering with Neighborhood-enriched Contrastive Learning", "year": "2022" }, { "authors": "Xubin Ren; Lianghao Xia; Jiashu Zhao; Dawei Yin; Chao Huang", "journal": "", "ref_id": "b12", "title": "Disentangled Contrastive Collaborative Filtering", "year": "2023" }, { "authors": "Suvash Sedhain; Aditya Krishna Menon; Scott Sanner; Lexing Xie", "journal": "", "ref_id": "b13", "title": "Autorec: Autoencoders meet collaborative filtering", "year": "2015" }, { "authors": "Zhulin Tao; Xiaohao Liu; Yewei Xia; Xiang Wang; Lifang Yang; Xianglin Huang; Tat-Seng Chua", "journal": "Transactions on Multimedia", "ref_id": "b14", "title": "Self-supervised learning for multimedia recommendation", "year": "2022" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b15", "title": "Visualizing data using t-SNE", "year": "2008" }, { "authors": "Chenyang Wang; Yuanqing Yu; Weizhi Ma; Min Zhang; Chong Chen; Yiqun Liu; Shaoping Ma", "journal": "", "ref_id": "b16", "title": "Towards Representation Alignment and Uniformity in Collaborative Filtering", "year": "2022" }, { "authors": "Hongwei Wang; Jialin Wang; Jia Wang; Miao Zhao; Weinan Zhang; Fuzheng Zhang; Wenjie Li; Xing Xie; Minyi Guo", "journal": "Transactions on Knowledge and Data Engineering (TKDE)", "ref_id": "b17", "title": "Learning graph representation with generative adversarial nets", "year": "2019" }, { "authors": "Jianling Wang; Raphael Louca; Diane Hu; Caitlin Cellier; James Caverlee; Liangjie Hong", "journal": "", "ref_id": "b18", "title": "Time to Shop for Valentine's Day: Shopping Occasions and Sequential Recommendation in E-commerce", "year": "2020" }, { "authors": "Wenjie Wang; Xinyu Lin; Fuli Feng; Xiangnan He; Min Lin; Tat-Seng Chua", "journal": "", "ref_id": "b19", "title": "Causal representation learning for out-of-distribution recommendation", "year": "2022" }, { "authors": "Xiang Wang; Xiangnan He; Meng Wang; Fuli Feng; Tat-Seng Chua", "journal": "", "ref_id": "b20", "title": "Neural graph collaborative filtering", "year": "2019" }, { "authors": "Xiang Wang; Hongye Jin; An Zhang; Xiangnan He; Tong Xu; Tat-Seng Chua", "journal": "", "ref_id": "b21", "title": "Disentangled graph collaborative filtering", "year": "2020" }, { "authors": "Ziyang Wang; Wei Wei; Gao Cong; Xiao-Li Li; Xian-Ling Mao; Minghui Qiu", "journal": "", "ref_id": "b22", "title": "Global context enhanced graph neural networks for session-based recommendation", "year": "2020" }, { "authors": "Zhaobo Wang; Yanmin Zhu; Haobing Liu; Chunyang Wang", "journal": "", "ref_id": "b23", "title": "Learning graph-based disentangled representations for next POI recommendation", "year": "2022" }, { "authors": "Wei Wei; Chao Huang; Lianghao Xia; Chuxu Zhang", "journal": "", "ref_id": "b24", "title": "Multi-Modal Self-Supervised Learning for Recommendation", "year": "2023" }, { "authors": "Jiancan Wu; Xiang Wang; Fuli Feng; Xiangnan He; Liang Chen; Jianxun Lian; Xing Xie", "journal": "", "ref_id": "b25", "title": "Self-supervised graph learning for recommendation", "year": "2021" }, { "authors": "Liang Wu; Diane Hu; Liangjie Hong; Huan Liu", "journal": "", "ref_id": "b26", "title": "Turning clicks into purchases: Revenue optimization for product search in e-commerce", "year": "2018" }, { "authors": "Shiwen Wu; Fei Sun; Wentao Zhang; Xu Xie; Bin Cui", "journal": "Comput. Surveys", "ref_id": "b27", "title": "Graph neural networks in recommender systems: a survey", "year": "2022" }, { "authors": "Lianghao Xia; Chao Huang; Jiao Shi; Yong Xu", "journal": "", "ref_id": "b28", "title": "Graph-less collaborative filtering", "year": "2023" }, { "authors": "Lianghao Xia; Chao Huang; Yong Xu; Jiashu Zhao; Dawei Yin; Jimmy Huang", "journal": "", "ref_id": "b29", "title": "Hypergraph contrastive collaborative filtering", "year": "2022" }, { "authors": "Lianghao Xia; Chao Huang; Chuxu Zhang", "journal": "", "ref_id": "b30", "title": "Self-supervised hypergraph transformer for recommender systems", "year": "2022" }, { "authors": "Tiansheng Yao; Xinyang Yi; Derek Zhiyuan Cheng; Felix Yu; Ting Chen; Aditya Menon; Lichan Hong; Ed H Chi; Steve Tjoa; Jieqi Kang", "journal": "", "ref_id": "b31", "title": "Self-supervised learning for large-scale item recommendations", "year": "2021" }, { "authors": "Rex Ying; Ruining He; Kaifeng Chen; Pong Eksombatchai; William L Hamilton; Jure Leskovec", "journal": "", "ref_id": "b32", "title": "Graph convolutional neural networks for web-scale recommender systems", "year": "2018" }, { "authors": "Ruohan Zhan; Changhua Pei; Qiang Su; Jianfeng Wen; Xueliang Wang; Guanyu Mu; Dong Zheng; Peng Jiang; Kun Gai", "journal": "", "ref_id": "b33", "title": "Deconfounding Duration Bias in Watch-time Prediction for Video Recommendation", "year": "2022" }, { "authors": "Fanjin Zhang; Jie Tang; Xueyi Liu; Zhenyu Hou; Yuxiao Dong; Jing Zhang; Xiao Liu; Ruobing Xie; Kai Zhuang; Xu Zhang", "journal": "Transactions on Knowledge and Data Engineering (TKDE)", "ref_id": "b34", "title": "Understanding WeChat user preferences and \"wow\" diffusion", "year": "2021" }, { "authors": "Jiani Zhang; Xingjian Shi; Shenglin Zhao; Irwin King", "journal": "", "ref_id": "b35", "title": "Star-gcn: Stacked and reconstructed graph convolutional networks for recommender systems", "year": "2019" }, { "authors": "Mengqi Zhang; Shu Wu; Xueli Yu; Qiang Liu; Liang Wang", "journal": "Transactions on Knowledge and Data Engineering (TKDE)", "ref_id": "b36", "title": "Dynamic graph neural networks for sequential recommendation", "year": "2022" }, { "authors": "Qianru Zhang; Chao Huang; Lianghao Xia; Zheng Wang; Zhonghang Li; Siuming Yiu", "journal": "", "ref_id": "b37", "title": "Automated Spatio-Temporal Graph Contrastive Learning", "year": "2023" }, { "authors": "Zaixi Zhang; Qi Liu; Hao Wang; Chengqiang Lu; Chee-Kong Lee", "journal": "", "ref_id": "b38", "title": "Motif-based graph self-supervised learning for molecular property prediction", "year": "2021" }, { "authors": "Kun Zhou; Hui Wang; Wayne Xin Zhao; Yutao Zhu; Sirui Wang; Fuzheng Zhang; Zhongyuan Wang; Ji-Rong Wen", "journal": "", "ref_id": "b39", "title": "S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 53.53, 514.15, 243.34, 19.39 ], "formula_id": "formula_0", "formula_text": "U = {𝑢 1 , • • • , 𝑢 𝑖 , • • • , 𝑢 𝐼 } (|U| = 𝐼 ) and V = {𝑣 1 , • • • , 𝑣 𝑗 , • • • , 𝑣 𝐽 } (|V | = 𝐽" }, { "formula_coordinates": [ 3, 365.52, 347.24, 193.22, 13.12 ], "formula_id": "formula_1", "formula_text": "z (𝑢 ) 𝑖 = Ā𝑖, * • E (𝑣) , z (𝑣) 𝑗 = Ā * ,𝑗 • E (𝑢 ) ,(1)" }, { "formula_coordinates": [ 3, 357.08, 432.27, 201.66, 22.32 ], "formula_id": "formula_2", "formula_text": "Ā = D -1/2 (𝑢 ) • A • D -1/2 (𝑣) , Ā𝑖,𝑗 = A 𝑖,𝑗 √︁ |N 𝑖 | • |N 𝑗 | ,(2)" }, { "formula_coordinates": [ 3, 362.17, 565.59, 196.57, 14.81 ], "formula_id": "formula_3", "formula_text": "e (𝑢 ) 𝑖,𝑙 = z (𝑢 ) 𝑖,𝑙 + e (𝑢 ) 𝑖,𝑙 -1 , e (𝑣) 𝑗,𝑙 = z (𝑣) 𝑗,𝑙 + e (𝑣) 𝑗,𝑙 -1 .(3)" }, { "formula_coordinates": [ 4, 93.72, 507.96, 200.86, 28.07 ], "formula_id": "formula_4", "formula_text": "L 𝑢𝑠𝑒𝑟 𝑠𝑠𝑙 = ∑︁ 𝑢 𝑖 ∈ U -log exp(𝑠 (e ′ 𝑖 , e ′′ 𝑖 )/𝜏) 𝑢 𝑖 ′ ∈ U exp(𝑠 (e ′ 𝑖 , e ′′ 𝑖 ′ /𝜏) ,(5)" }, { "formula_coordinates": [ 4, 402.67, 198.64, 156.07, 8.43 ], "formula_id": "formula_5", "formula_text": "L 𝑔𝑒𝑛 = L 𝑘𝑙 + L 𝑑𝑖𝑠 ,(6)" }, { "formula_coordinates": [ 4, 371.59, 426.35, 62.25, 10.49 ], "formula_id": "formula_6", "formula_text": "M 𝑙 ∈ 0, 1 | V | × | V |" }, { "formula_coordinates": [ 4, 373.69, 509.59, 185.05, 26.45 ], "formula_id": "formula_7", "formula_text": "𝐿 ∑︁ 𝑙=1 ||M 𝑙 || 0 = 𝐿 ∑︁ 𝑙=1 ∑︁ (𝑢,𝑣) ∈𝜀 I[𝑚 𝑙 𝑖,𝑗 ≠ 0],(7)" }, { "formula_coordinates": [ 4, 341.81, 544.31, 12.19, 6.19 ], "formula_id": "formula_8", "formula_text": "I[•]" }, { "formula_coordinates": [ 5, 106.69, 155.57, 187.89, 26.49 ], "formula_id": "formula_9", "formula_text": "L 𝑐 = 𝐿 ∑︁ 𝑙=1 ∑︁ (𝑢 𝑖 ,𝑣 𝑗 ) ∈𝜀 (1 -P 𝜎 (𝑠 𝑙 𝑖,𝑗 ) (0|𝜃 𝑙 )),(8)" }, { "formula_coordinates": [ 5, 110.97, 311.51, 183.62, 21.99 ], "formula_id": "formula_10", "formula_text": "L 𝑏𝑝𝑟 = ∑︁ (𝑢,𝑖,𝑗 ) ∈ O -log𝜎 ( ŷ𝑢𝑖 -ŷ𝑢 𝑗 ),(9)" }, { "formula_coordinates": [ 5, 105.62, 417.97, 185.55, 13.24 ], "formula_id": "formula_11", "formula_text": "L 𝑔𝑒𝑛 = L 𝑘𝑙 + L 𝑑𝑖𝑠 + L 𝑔𝑒𝑛 𝑏𝑝𝑟 + 𝜆 2 ||Θ|| 2 F , (10" }, { "formula_coordinates": [ 5, 291.16, 420.57, 3.42, 7.94 ], "formula_id": "formula_12", "formula_text": ")" }, { "formula_coordinates": [ 5, 119.51, 503.56, 175.07, 13.24 ], "formula_id": "formula_13", "formula_text": "L 𝑑𝑒𝑛 = L 𝑐 + L 𝑑𝑒𝑛 𝑏𝑝𝑟 + 𝜆 2 ||Θ|| 2 F .(11)" }, { "formula_coordinates": [ 5, 109.32, 594.4, 185.26, 13.24 ], "formula_id": "formula_14", "formula_text": "L 𝑢𝑝𝑝𝑒𝑟 = L 𝑏𝑝𝑟 + 𝜆 1 L 𝑠𝑠𝑙 + 𝜆 2 ||Θ|| 2 F ,(12)" }, { "formula_coordinates": [ 5, 131.02, 701.01, 160.14, 8.43 ], "formula_id": "formula_15", "formula_text": "L 𝑙𝑜𝑤𝑒𝑟 = L 𝑔𝑒𝑛 + L 𝑑𝑒𝑛 . (13" }, { "formula_coordinates": [ 5, 291.16, 701.49, 3.42, 7.94 ], "formula_id": "formula_16", "formula_text": ")" } ]
10.1145/3292500.3330701
2023-05-18
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b24", "b42", "b48", "b50", "b62", "b5", "b51", "b42", "b5", "b4", "b30", "b25" ], "table_ref": [], "text": "Incrementality is an inseparable aspect of language use. Human speakers can produce utterances based on an incomplete message formed in their minds while simultaneously continuing to refine its content for subsequent speech production (Kempen andHoenkamp, 1982, 1987). They also comprehend language on (approximately) a word-by-word basis and do not need to wait until the utterance finishes to grasp its meaning (Kamide, 2008).\nAs observed by Madureira and Schlangen (2020) A policy for adaptive revision, here parameterised by a controller, can enable reanalyses to be performed when necessary (here at time steps 3 and 7).\nincremental processing would be RNNs (Rumelhart et al., 1986), as they have essential properties required in incremental scenarios: They keep a recurrent state, are sensitive to the notion of order and are able to accept partial input and produce an output at each time step. Ideally, an incremental processor should also be able to revise its previous incorrect hypotheses based on new input prefixes (Schlangen and Skantze, 2009). However, RNNs are unable to do so as their output is monotonic. The Transformer architecture (Vaswani et al., 2017) has been the de facto standard for many NLP tasks since its inception. Nevertheless, it is not designed for incremental processing as the input sequences are assumed to be complete and processed as a whole. A restart-incremental interface (Beuck et al., 2011;Schlangen and Skantze, 2011) can be applied to adapt Transformers for incremental processing (Madureira and Schlangen, 2020), where available input prefixes are recomputed at each time step to produce partial outputs. Such an interface also provides the capability to revise existing outputs through its non-monotonic nature. Although feasible, this method does not scale well for long sequences since the number of required forward passes grows with the sequence length. 2 The revision process is also not effective as it occurs at every time step, even when it is unnecessary.\nRevision is crucial in incremental processing, as it is not always possible for a model to be correct at the first attempt, either because the linguistic input is provided in its inherent piecemeal fashion (as shown in Figure 1) or because of mistakes due to poor approximation. One way to improve the output quality is the delay strategy (Beuck et al., 2011;Baumann et al., 2011), where tokens within a lookahead window are used to disambiguate the currently processed input. However, it can neither fix past hypotheses nor capture long-range influences e.g. in garden path sentences.\nIn this work, we propose the Two-pass model for AdaPtIve Revision (TAPIR), which is capable of adaptive revision, while also being fast in incremental scenarios. This is achieved by using a revision policy to decide whether to WRITE (produce a new output) or REVISE (refine existing outputs based on new evidence), whose mechanism is described in §3. Learning this policy requires a supervision signal which is usually not present in non-incremental datasets (Köhn, 2018). In §4, we tackle this issue by introducing a method for obtaining action sequences using the Linear Transformer (LT) (Katharopoulos et al., 2020). As silver labels, these action sequences allow us to view policy learning as a supervised problem.\nExperiments on four NLU tasks in English, framed as sequence labelling 3 , show that, compared to a restart-incremental Transformer encoder, our model is considerably faster for incremental inference with better incremental performance, while being comparable when processing full sequences. Our in-depth analysis inspects TAPIR's incremental behaviour, showing its effectiveness at avoiding ill-timed revisions on correct prefixes." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b71", "b21", "b34", "b42", "b23", "b47", "b6", "b38", "b17", "b18", "b45", "b9", "b1", "b39", "b19", "b31", "b65", "b55", "b16", "b43", "b2", "b70", "b67", "b33", "b54", "b66", "b49", "b22", "b27" ], "table_ref": [], "text": "There has been increasing interest to explore neural network-based incremental processing. Žilka and Jurčíček (2015) proposed a dialogue state tracker using LSTM (Hochreiter and Schmidhuber, 1997) to incrementally predict each component of the dialogue state. Liu et al. (2019) introduced an incremental anaphora resolution model composed cessing n sequences with n k=1 k tokens in total. 3 We do not run experiments on sequence classification, as revisions can trivially be performed by predicting one label at every time step. of a memory unit for entity tracking and a recurrent unit as the memory controller. RNNs still fall short on non-incremental metrics due to their strict left-to-right processing. Some works have attempted to address this issue by adapting BiL-STMs or Transformers for incremental processing and applying it on sequence labelling and classification tasks (Madureira and Schlangen, 2020;Kahardipraja et al., 2021) and disfluency detection (Rohanian and Hough, 2021;Chen et al., 2022).\nOur revision policy is closely related to the concept of policy in simultaneous translation, which decides whether to wait for another source token (READ action) or to emit a target token (WRITE action). Simultaneous translation policies can be categorised into fixed and adaptive. An example of a fixed policy is the wait-k policy (Ma et al., 2019), which waits for first k source tokens before alternating between writing and reading a token. An adaptive policy on the other hand, decides to read or write depending on the available context and can be learned by using reinforcement learning techniques (Grissom II et al., 2014;Gu et al., 2017) or applying monotonic attention (Raffel et al., 2017;Chiu and Raffel, 2018;Arivazhagan et al., 2019;Ma et al., 2020).\nThe memory mechanism is a key component for revision policy learning as it stores representations which, for instance, can be used to ensure that the action is correct (Guo et al., 2022). It also absorbs asynchronies that may arise when each component in an incremental system has different processing speed (Levelt, 1993). The memory can be internal as in RNNs, or external such as memory networks (Weston et al., 2015;Sukhbaatar et al., 2015) and the Neural Turing Machine (Graves et al., 2014).\nRevision in incremental systems has been previously explored. In simultaneous spoken language translation, Niehues et al. (2016) proposed a scheme that allows re-translation when an ASR component recognises a new word. Arivazhagan et al. (2020) evaluated streaming translation against re-translation models that translate from scratch for each incoming token and found that re-translation yields a comparable result to streaming systems. Zheng et al. (2020) proposed a decoding method for simultaneous translation that overgenerates target words at each step, which are subsequently revised. One way to achieve revision is by employing a two-pass strategy. Xia et al. (2017) proposed a deliberation network for machine translation, com-posed of encoder-decoder architecture with an additional second-pass decoder to refine the generated target sentence. In dialogue domains, this strategy is also used to improve the contextual coherence and correctness of the response (Li et al., 2019) and to refine the output of retrieval-based dialogue systems (Song et al., 2018;Weston et al., 2018). Furthermore, the two-pass approach is commonly utilised in streaming ASR to improve the initial hypothesis (Sainath et al., 2019;Hu et al., 2020;Wang et al., 2022, inter alia).\nThe aforementioned works shared a common trait, as they used a fixed policy and performed revision either for each incoming input or when the input is already complete. Our approach differs in that our model learns an adaptive policy that results in more timely revisions. Contemporaneous to our work, Kaushal et al. (2023) proposed a cascaded uni-and bidirectional architecture with an additional module to predict when to restart. The module is trained with a supervision signal obtained from comparing the model's prediction against the ground truth. Their approach is effective in reducing the required computational budget." }, { "figure_ref": [], "heading": "Model", "publication_ref": [ "b67" ], "table_ref": [], "text": "To address the weaknesses of RNN-and Transformer-only architectures for incremental processing ( §1), we introduce a Two-pass model for AdaPtIve Revision named TAPIR, which integrates advantages of both models and is based on the deliberation network (Xia et al., 2017). Its architecture, depicted in Figure 2, consists of four components as follows:\n1. Incremental Processor: a recurrent model that produces an output at each time step and serves as the first-pass model. In this work, we use a standard LSTM network." }, { "figure_ref": [], "heading": "2.", "publication_ref": [ "b32", "b13", "b16", "b8" ], "table_ref": [], "text": "Reviser: a bidirectional model that can revise via recomputation operations ( §3.1), also called the second-pass model. We opt for Transformer-based models following Li et al. (2020) as it allows parallel recomputation. The revision process corresponds to the forward reanalysis hypothesis (Frazier and Rayner, 1982), where a sentence is processed from the beginning whenever the need for reanalysis is detected.\n3. Memory: the history of inputs and outputs are stored in the memory. Taking the inspira- 4. Controller: a neural network that parameterises the revision policy. We choose a recurrent controller following Graves et al. (2014), as its internal memory complements the memory module and is also suitable for incremental scenarios. We use a modified LSTMN (Cheng et al., 2016) for this component.\nDuring incremental inference, TAPIR computes a candidate output y t for the most recent input x t as the first pass. Then, based on x t and the memory, it decides whether to take a WRITE (add y t to an output buffer) or REVISE (perform a second pass to recompute all existing outputs) action. The action is defined by a revision policy π θ , which models the effect of new input on past outputs. At each time t, π θ makes use of processed inputs x ≤t and past outputs y <t to select a suitable action a t . 4 It is parameterised by the controller hidden state k t with a non-linear function g: π θ (a t |a <t , x ≤t , y <t ) ∝ g θ (k t )\n(1)" }, { "figure_ref": [], "heading": "Revision Policy", "publication_ref": [ "b37" ], "table_ref": [], "text": "In restart-incremental models, revisions can occur as a result of recomputations, which are costly since they happen at every time step, even when no revisions occur. TAPIR revises by selectively deciding when to recompute, which enables it to revisit previous outputs at different points in time while reducing the number of recomputations. Memory Content. The memory in TAPIR contains information pertaining to processed inputs and their corresponding outputs, which is crucial for our approach. This is because it enables our model to perform relational learning between an incoming input and past outputs, using past inputs as an additional cue. Here, we use three caches Γ. Γ h stores the hidden state h of the incremental processor, representing the current input prefix, Γ z stores the projected output vector z which represents the output, and Γ p stores the input-output representation ϕ, which is computed from h and z. The i-th slot of the caches contains γ h i , γ z i , γ p i , all of them computed at the same time step. The representations z and ϕ are computed as follows:\nz = tanh(W ỹ ỹ + b z ) (2) ϕ = tanh(W in h + W out z + b ϕ ) (3)\nwhere ỹ is the output logits from the incremental processor. W ỹ, W in , and W out are parameters while b z and b ϕ are bias terms. The dimension of z and h is the same. We keep the cache size N small, as we later perform soft attention over Γ p . The attention computation for large cache sizes is costly and is not suitable for incremental scenarios. Due to this limitation, the oldest cache element is discarded when the cache is full and new partial input arrives. Modelling Actions. To model possible changes in past outputs as an effect of a new input, we use an LSTMN controller due to its ability to induce relations among tokens. It computes the relation between h t and each cache element γ p i via an attention mechanism:\nU = W c γ p i + W h h t + W k kt-1 + b u (4\n)\ns t i = softmax(v tanh(U ))(5)\nwhich yields a probability distribution over Γ p . kt-1 is the previous summary vector of the controller hidden state. W c , W h , W k, and v are parameters and b u is a bias term. We can then compute adaptive summary vectors kt and ct as a weighted sum of the cache Γ p and the controller memory tape C t-1 :\nkt ct = N i=1 s t i • γ p i c i+max (0,t-N -1)(6)\nwhere c i+max (0,t-N -1) is the controller memory cell for the corresponding cache element γ p i . The attention can be partially viewed as local (Luong et al., 2015), since older cache elements are incorporated through kt-1 . These summary vectors are used to compute the recurrent update as follows:\n  i t f t o t ĉt   =   σ σ σ tanh   W • [ kt , x t ] (7) c t = f t ct + i t ĉt (8) k t = o t tanh(c t )(9)\nLastly, k t is used by the revision policy to compute the action a t :\nπ θ (a t |a <t , x ≤t , y <t ) = σ(θ k t + b k ) (10) a t = REVISE, if σ(θ k t + b k ) ≥ τ WRITE, otherwise(11)\nwhere θ is a parameter vector, b k is the bias, and τ ∈ [0, 1] is a decision threshold. According to equation ( 11), a REVISE action is selected only if the policy value is greater than or equal to τ ; otherwise, a WRITE action is chosen. This threshold can be adjusted to encourage or discourage the recomputation frequency without the need to retrain the policy. Our model is equal to an RNN when τ = 1 (never recompute), and becomes a restart-incremental Transformer when τ = 0 (always recompute)." }, { "figure_ref": [ "fig_0" ], "heading": "Incremental Inference Mechanism", "publication_ref": [], "table_ref": [], "text": "Using the policy, TAPIR predicts when to perform a recomputation. Assume that an input token x t is fed to the RNN component to obtain y t . The controller then reads x t , h t , and Γ p to compute a t . If a REVISE action is emitted, the input buffer (containing all available inputs so far) will be passed to the reviser to yield the recomputed outputs. When this happens, both z and ϕ stored in the caches also need to be updated to reflect the effect of the recomputation. The recomputation of past z and ϕ will occur simultaneously with the computation of z and ϕ for the current time step to update Γ z and Γ p using the recomputed outputs. If a WRITE action is emitted, we take y t to be the current output and continue to process the next token. The content of Γ z and Γ p are also updated for the current step. The cache Γ h is always updated regardless of which action the policy takes. See algorithm in the Appendix.\nLet us use Figure 1 and τ = 0.5 as a constructed example. At t = 1, the incremental processor consumes the token the, updates its hidden state and predicts the POS-tag det. The controller predicts that the probability for recomputation is e.g. 0.3. Since it is lower than τ , det gets written to the output buffer, the memory is updated and the current step is finished. A similar decision happens at t = 2 and alert is classified as noun. At t = 3, however, the controller predicts that a REVISE action should occur after the input citizens. That triggers the reviser, which takes the alert citizens as input and returns det adj noun. The output buffer gets overwritten with this new hypothesis and the caches are recomputed to accommodate the new state. This dynamics continues until the end of the sentence." }, { "figure_ref": [], "heading": "Training", "publication_ref": [ "b49" ], "table_ref": [], "text": "Jointly training all components of such a two-pass model from scratch can be unstable (Sainath et al., 2019), so we opt for a two-step training process:\n1. Train only the reviser using cross entropy loss.\n2. Train the incremental processor and the controller together with a combined loss:\nL = CE(y gold , y) + BCE(a LT , a) (12)\nwhere y gold is the expected output and a LT is the expected action." }, { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Supervision Signal for Revision", "publication_ref": [ "b69", "b23", "b59", "b20", "b57", "b25" ], "table_ref": [], "text": "During incremental sentence comprehension, a revision or reanalysis occurs when disambiguating material rules out the current sentence interpretation. In Figure 1, noun is a valid label for suspect at t = 6, but person at t = 7 rules that analysis out, forcing a reanalysis to adj instead. Training TAPIR's controller requires a sequence of WRITE/REVISE actions expressed as the supervision signal a LT in equation ( 12), capturing when revision happens. This signal then allows us to frame the policy learning as a supervised learning task (as in the work of Zheng et al. (2019)).\nIf we have the sequence of output prefix hypotheses at each step, as shown in Figure 1, we know that the steps when revisions have occurred are t = {3, 7}. We can then construct the sequence of actions we need. The first action is always WRITE as there is no past output to revise at this step. For t > 1, the action can be determined by comparing the partial outputs at time step t (excluding y t ) against the partial outputs at time step t -1. If no edits occur, then the partial outputs after processing x t should not change, and a WRITE action is appended to the sequence. If any edits occur, we append a REVISE action instead.\nIntermediate human judgements about when to revise are not available, so we need to retrieve that from a model. It is possible obtain this information from a restart-incremental Transformer, by comparing how the prefix at t differs from prefix at t -1. However, as shown by Kahardipraja et al. (2021), the signal captured using this approach may lack incremental quality due to the missing recurrence mechanism. Using a recurrent model is advisable here, as it can capture order and hierarchical structure in sentences, which is apparently hard for Transformers (Tran et al., 2018;Hahn, 2020;Sun and Lu, 2022). But it is difficult to retrieve this signal using vanilla RNNs because its recurrence only allows a unidirectional information flow, which prevents a backward update of past outputs.\nTherefore, we opt for the Linear Transformer (LT) (Katharopoulos et al., 2020), which can be viewed both as a Transformer and as an RNN. 5To generate the action sequences, we first train the action generator LT with causal mask to mimic an RNN training. Afterwards, it is deployed under restart-incrementality on the same set used for training with the mask removed. We collect the sequence of partial prefixes for all sentences and use it to derive the action sequences." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b11", "b52", "b35", "b58", "b53", "b4", "b42", "b4", "b42", "b4", "b42", "b42", "b36", "b44" ], "table_ref": [ "tab_2" ], "text": "Datasets. We evaluate TAPIR on four tasks in English, for NLU and task-oriented dialogue, using seven sequence labelling datasets: Slot Filling: SNIPS (Coucke et al., 2018); Alarm, Reminder & Weather (Schuster et al., 2019) and MIT Movie (Liu et al., 2013).\nPoS Tagging: CoNLL-2003 (Tjong Kim Sang and De Meulder, 2003) and UD-EWT (Silveira et al., 2014).\nNamed-Entity Recognition (CoNLL-2003).\nChunking (CoNLL-2003). Lower is better for EO and CT, while higher is better for RC. TAPIR is better compared to the reference model for the non-delayed case (output prefixes are often correct and stable). The delay strategy of one lookahead token is beneficial.\nTable 1 shows the distribution of generated actions in the final training set for each task. Further details regarding the datasets and generated action sequences are available in the Appendix. Evaluation. An ideal incremental model deployed in real-time settings should (i) exhibit good incremental behaviour, i.e. produce correct and stable partial hypotheses and timely recover from its mistakes; (ii) be efficient for inference by delivering responses without wasting computational resources; and (iii) not come with the cost of a negative impact on the non-incremental performance, i.e. produce correct final outputs. Achieving all at the same time may be hard, so trade-offs can be necessary. We evaluate TAPIR on these three relevant di-mensions. For (i), we use similarity and diachronic metrics6 proposed by Baumann et al. (2011) and adapted in Madureira and Schlangen (2020): edit overhead (EO, the proportion of unnecessary edits over all edits), correction time score (CT, the average proportion of time steps required for an output increment to settle down), and relative correctness (RC, the proportion of output prefixes that match with the final output). Aspect (ii) is analysed by benchmarking the incremental inference speed. For (iii), we use the F1 score adapted for the IOB sequence labelling scheme, except for PoS tagging, which is evaluated by measuring accuracy. Rather than trying to beat the state-of-the art results, we focus on analysing the incremental abilities of models whose performances are high enough for our purposes. As a reference model, we use a Transformer encoder applied in a restartincremental fashion, which implicitly performs revision at every step. We follow Baumann et al. (2011) and Madureira and Schlangen (2020) by evaluating partial outputs with respect to the final output, to separate between incremental and nonincremental performance. Delay strategy. To inspect the effect of right context on the model's performance, we use the delay strategy (Baumann et al., 2011) with a lookahead window of size 1 and 2, computing a delayed version of EO and RC (Madureira and Schlangen, 2020). The output for the reference model is delayed only during inference, as in Madureira and Schlangen (2020). For TAPIR, the same treatment would not be possible as it contains an RNN that must be able to recognise the output delay. Thus, we follow the approach of Turek et al. ( 2020): During training and inference, the label for input x t is expected at time step t + d, where d is the delay. Implementation. For the reviser component, we choose Transformer (Trf) and Linear Transformer (LT) encoders trained with full attention. 7 The reference model is trained with cross entropy loss similar to the reviser. All models are trained with the AdamW optimiser (Loshchilov and Hutter, 2019). We use 300-D GloVe embeddings (Pennington et al., 2014), which, for the reference model and the reviser, are passed through an additional linear projection layer. The probability threshold τ is set to 0.5. We report results for a single run with the best hyperparameter configuration. See Appendix for details about the set-up and experiments." }, { "figure_ref": [ "fig_1" ], "heading": "Results and Analysis", "publication_ref": [], "table_ref": [], "text": "Incremental. Figure 3 depicts the incremental evaluation results. For the no-delay case, TAPIR performs better compared to the reference model. We also observe that the delay strategy helps improve the metrics. It improves the results for TAPIR, in general, but a longer delay does not always yield a better incremental performance. We suspect this happens for two possible reasons: First, if we consider the case where the delay is 1, TAPIR has already achieved relatively low EO (< 0.1) and high RC (> 0.85). This, combined with its nonmonotonic behaviour, might make it harder to further improve on both incremental metrics, even if a longer delay is allowed. Second, a longer delay means that our model needs to wait longer before producing an output. In the meantime, it still has to process incoming tokens, which might cause some difficulty in learning the relation between the input and its corresponding delayed output. As a consequence, we have mixed results when comparing EO and RC for the delayed version of the reference model and TAPIR. Their differences are, however, very small. TAPIR achieves low EO and CT score, which indicates that the partial output is stable and settles down quickly. RC is also high, which shows that, most of the time, the partial outputs are correct prefixes of the final, non-incremental output and would be useful for downstream processing. Non-Incremental. The performance of the restartincremental reference model and our model on full sentences is shown in Table 3. The results of TAPIR, in particular with the Transformer reviser (TAPIR-Trf), are roughly comparable to the reference model, with only modest differences (0.96% -4.12%). TAPIR-Trf performs slightly better than TAPIR-LT. This is possibly due to the approximation of softmax attention in LT, which leads to degradation in the output quality. Furthermore, we see that delay of 1 or 2 tokens for TAPIR is generally beneficial.9 Note that we do not force a REVISE action at the final time step to examine the effect of the learned policy on TAPIR's performance, although that would be a strategy to achieve the same non-incremental performance as the reference model. Table 3: Non-incremental performance of the models on test sets (first group is F1, second group is accuracy). D = delay. The performance of TAPIR is roughly comparable to the reference model." }, { "figure_ref": [], "heading": "Benchmark.", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_2", "fig_3", "fig_4" ], "heading": "Detailed Analysis", "publication_ref": [], "table_ref": [], "text": "In the next paragraphs, we assess TAPIR-Trf on aspects beyond the basic evaluation metrics. Policy Effectiveness. Figure 4 shows the distributions of actions and states of the output prefixes.\nHere, a prefix is considered correct if all its labels match the final output, and incorrect otherwise. We start by noticing that most of the actions are WRITE, and among them, very few occur when the prefix is incorrect. TAPIR is thus good at recognising states where recomputation is not required, supporting its speed advantage. A good model should avoid revising prefixes that are already correct. We see that, for all datasets, the vast majority of the correct prefixes indeed do not get revised. A utopian model would not make mistakes (and thus never need to revise) or immediately revise incorrect prefixes. In reality, this cannot be achieved, given the incremental nature of language and the long-distance dependencies. As a result, incorrect prefixes are expected to have a mixed distribution between actions, as the model needs to wait for the edit-triggering input, and our results corroborate that. Finally, among the REVISE actions (i.e. the lighter bars in the bottom area), there is still a considerable relative number of unnecessary revisions occurring for correct prefixes. We see room for further refinement of the policy in that sense, but, in absolute numbers, the occurrence of recomputations is much lower than in the restart-incrementality paradigm, where all steps require a recomputation. Qualitative analysis. Figure 5 shows two examples of how TAPIR behaves in incremental slot filling (more examples in the Appendix), showing that it performs critical revisions that would not be possible with a monotonic model. At the top, the model must produce labels for unknown tokens, which is harder to perform correctly. The first UNK token is initially interpreted as a city at t = 6, which is probably deemed as correct considering the available left context. The controller agrees with this, producing a WRITE action. However, when heritage and the second UNK token have been consumed at t = 8, the incremental processor labels them as parts of a geographic point of interest. The controller is able to notice the output inconsistency as I-geographic_poi should be preceded by B-geographic_poi (following the IOB scheme) and emits a REVISE action. As a result, the label B-city is correctly replaced.\nIn the second example, TAPIR produces interest- ing interpretations. It initially considers woods to be an actor name at t = 4. When it reads have, the reanalysis triggered by the controller interprets woods as a part of a title, the woods. The model revises its hypothesis again at t = 6, and decides that the complete title should be the woods have eyes. It still makes a mistake at the last time step, opting for a (wrong) revision of O to B-RATING for rated when it should be unnecessary. Effect of Threshold. Figure 6 portrays the effect of the probability threshold τ on incremental and non-incremental metrics. As τ increases, the incremental performance improves while the nonincremental performance deteriorates. This happens as higher τ discourages recomputation and makes the model closer to an RNN. In return, it is harder for the model to revisit its past decisions." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "We proposed TAPIR, a two-pass model capable of performing adaptive revision in incremental scenarios e.g. for dialogue and interactive systems.\nWe also demonstrated that it is possible to obtain an incremental supervision signal using the Linear Transformer (LT), in the form of WRITE/REVISE action sequences, to guide the policy learning for adaptive revision. Results on sequence labelling tasks showed that TAPIR has a better incremental performance than a restart-incremental Transformer, in general, while being roughly comparable to it on full sentences. The delay strategy helps to improve incremental and non-incremental metrics, although a longer delay does not always yield better results.\nThe ability to revise adaptively provides our model with substantial advantages over using RNNs or restart-incremental Transformers. It can fix incorrect past outputs after observing incoming inputs, which is not possible for RNNs. Looking from the aspect of efficiency, our model is also better compared to restart-incremental Transformers as the recomputation is only performed when the need for it is detected. TAPIR is consequently faster in terms of inference speed." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss some of the known limitations of our set-up, data and models.\nTo handle unknown words in the test sets, we replace them by a special UNK token which is also used to mask some tokens in the training set. The UNK token provides little information regarding the actual input and TAPIR might be unable to fully utilise the token to refine its interpretation of the past output. This has a direct influence in the incremental metrics, as the model can exploit this property by using UNK token as a cue to emit the REVISE action. This strategy also introduces the extra hyperparameter of what proportion of tokens to mask.\nWe put effort into achieving a diverse selection of datasets in various tasks, but our analysis is limited to English. We are reporting results on the datasets for which the non-incremental versions of the model could achieve a performance high enough to allow a meaningful evaluation of their incremental performance. Tuning is still required to extend the analysis to other datasets.\nRelated to these two issues, we decided to use tokens as the incremental unit for processing. We follow the tokenization given by the sequence labelling datasets we use. Extending the analysis for other languages requires thus a good tokenizer, and annotated data, which may not exist. We may also inherit limitations from the datasets that we use. Although we do not include an in-depth analysis of the datasets, as our focus is on the model and not on solving the tasks themselves, they are widely used by the community and details are available in their corresponding publications.\nThe method we propose to retrieve the action sequences depends on the chosen model, and the grounding of the action sequences in the actual prefix outputs have a direct influence in training the controller. Therefore, the decisions made by TAPIR rely on the quality of the underlying generated action sequences. In order to ensure that the internal representations of the action generator LT do not depend on right context, we had to restrict ourselves to a single layer variation of this model when generating the sequence of actions. It is possible that with more layers its behaviour would be different, but that would invalidate the assumptions needed for an incremental processor.\nWhen it comes to the TAPIR architecture, the attention scores for the controller are computed independently of temporal order and we do not explicitly model relation between cache elements. The limited cache size also means that some past information has to be discarded to accommodate incoming inputs. Although we have made efforts to incorporate them through the summary vector, this might be not ideal due to information bottleneck." }, { "figure_ref": [ "fig_5" ], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "In this section, we provide information regarding the hyperparameters, implementation, and additional details that are needed to reproduce this work . We also present supplementary materials to accompany the main text (Proof for §4, Algorithm 1, Figure 78).\nFor all of our experiments, the seed is set to 42119392. We re-implement the Transformer and the LSTMN used in this work, while for the Linear Transformer (LT), we use the official implementation. 10 Further information regarding dependencies and versions are available in the repository." }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [], "table_ref": [ "tab_11", "tab_12" ], "text": "Tables 6 and7 summarise the datasets. For SNIPS, we use the preprocessed data and splits provided by E et al. (2019). As the MIT Movie dataset does not have an official validation set, we randomly select 10% of the training data as the validation set. We also remove sentences longer than 200 words. While we use the validation set to tune the hyperparameters of our models, the results on test sets are obtained by using models that are trained on the combination of training and validation sets." }, { "figure_ref": [], "heading": "Action Sequence Generation", "publication_ref": [ "b36", "b14" ], "table_ref": [], "text": "For the action sequence generation, we train a single-layer LT for 20 epochs with linear learning rate warm-up over the first 5 epochs. We use AdamW optimiser (Loshchilov and Hutter, 2019) with β 1 = 0.9 and β 2 = 0.98. Xavier initialisation (Glorot and Bengio, 2010) is applied to all parameters. The learning rate is set to 1e -4 , with gradient clipping of 1, dropout of 0.1, and batch size of 128. We set the FFNN dimension to 2048 and selfattention dimension to 512, with 8 attention heads. The same hyperparameters are used for all datasets. Action sequences for training the final models are obtained using single-layer LTs that are trained on the combination of training and validation sets." }, { "figure_ref": [], "heading": "Implementation and training details", "publication_ref": [ "b71", "b0", "b49" ], "table_ref": [], "text": "Our reference model and TAPIR are trained for 50 epochs with dropout of 0.1 and early stopping with patience of 10. For AdamW, we use β 1 = 0.9 and β 2 = 0.98. We also apply Xavier initialisation to all parameters. To train the reference model and the reviser, we use linear learning rate warmup over the first 5 epochs. The learning rate is decayed by 10 https://linear-transformers.com/ 0.5 after 30, 40, and 45 epochs for all models. The number of attention heads for Transformer and LT encoders is set to 8, where each head has the dimension of d model /8 and d model is the self-attention dimension. The embedding projection layer is of size d model . For OOV words, we follow Žilka and Jurčíček (2015) by randomly replacing tokens with an UNK token during training with a probability that we set to 0.02, and then use this token whenever we encounter unknown words during inference. Hyperparameter search is performed using Optuna (Akiba et al., 2019) by maximising the corresponding non-incremental metric on the validation set. We limit the hyperparameter search trials to 25 for all of our experiments. Different from the two-pass model in Sainath et al. (2019), during training we do not take the trained reviser in step (1), freeze its weights, and use it for training step (2). This is because when recomputation occurs, we use output logits from the reviser to recompute z and ϕ, but this would mean that the error from the previous z and ϕ cannot be backpropagated. We also experimented using unit logits (ỹ/ ỹ ) to compute z, as the logits value from the incremental processor and the reviser might differ in magnitude, but using raw logits proved to be more effective. All the experiments were conducted on a GeForce GTX 1080 Ti and took ∼2 weeks to complete." }, { "figure_ref": [], "heading": "Overview of the Linear Transformer", "publication_ref": [ "b25", "b10", "b40", "b7", "b25" ], "table_ref": [], "text": "The Linear Transformer (LT) (Katharopoulos et al., 2020) uses kernel-based formulation and associative property of matrix products to approximate the softmax attention in conventional Transformers, which is a special case of self-attention. In LT, the self-attention for the i-th position is expressed as:\nAtt i (Q, K, V ) = φ(Q i ) S p φ(Q i ) Z p(13)\nS p = p j=1 φ(K j )V j ; Z p = p j=1 φ(K j )(14)\nFor unmasked attention with a sequence length of N , p = N whereas p = i for causal attention. The feature map φ is an exponential linear unit (elu) (Clevert et al., 2016), specifically φ(x) = elu(x) + 1. LT can be viewed as an RNN with hidden states S and Z that are updated as follows:\nS i = S i-1 + φ(K i )V i (15) Z i = Z i-1 + φ(K i )(16)\nwith initial states S 0 = Z 0 = 0. Proof: Duality of the Linear Transformer\nIdeally, the information regarding when to revise should be obtained with RNNs, as they have properties that are crucial for incremental processing and therefore can capture high-quality supervision signal. In practice, this is difficult because it cannot perform revision and its recurrence only allows a unidirectional information flow, which prevents a backward connection to any past outputs. For example, creating a link between the input x t and any past outputs requires computing past hidden states from h t , which is non-trivial. One technique to achieve this is to use reversible RNNs (MacKay et al., 2018) to reverse the hidden state transition, but this is only possible during training. Another approach involves using neural ODE (Chen et al., 2018) to solve the initial value problem from h 0 , which yields h t for any time step t as the solution, but it would be just an approximation of the true hidden state.\nLet us consider an RNN in an incremental scenario, keeping a hidden state h j . How does x t affect the earlier output y j for 1 ≤ j < t? We want an answer that satisfies the following conditions for incremental processing:\n1. The converse hidden state for time step j computed at time step t, ḧj , is a function of x t .\n2. The computation of h t is a function of h t-1 , and not of ḧt-1 . This is consistent with how RNNs work.\n3. The computation of h t-1 is valid iff it involves hidden states h 0 , . . . , h t-2 that agree with condition (2) in their corresponding step.\nIn other words, we want a way to compute converse states ḧj as a function of x t , but it should not be affecting h t , which is only supposed to be computed using past hidden states built from left to right. We are able to satisfy the conditions above and resolve the conflicting hidden state computation by using the Linear Transformer (LT) (Katharopoulos et al., 2020), which can be viewed both as a Transformer and as an RNN. This allows us to get the supervision signal to determine when revision should happen through restart-incremental computation, while still observes how x t affects all past outputs from the perspective of RNNs.\nLet us consider the self-attention computation at time step t for the current and past positions n, n -1, n -2; n = t obtained with a LT under restart-incrementality:\nAtt t n (Q, K, V ) = φ(Q n ) S n φ(Q n ) Z n (17) Att t n-1 (Q, K, V ) = φ(Q n-1 ) S n φ(Q n-1 ) Z n (18) Att t n-2 (Q, K, V ) = φ(Q n-2 ) S n φ(Q n-2 ) Z n(19)\nFrom equations ( 18) and ( 19) we can see that the hidden state S for computing the representations at positions n -1 and n -2 are functions of x n which satisfies condition (1). Furthermore, they are equal to each other i.e., Sn-2 = Sn-1 = S n = S t . Note that we only consider S, however the proof also holds for Z. To satisfy condition (2), consider the self-attention at time step t -1 for position n -1:\nAtt t-1 n-1 (Q, K, V ) = φ(Q n-1 ) S n-1 φ(Q n-1 ) Z n-1 (20\n)\nS t is a function of S t-1 in equation ( 20), S t = S t-1 + φ(K n )V n . We also know that S t = Sn-1 , which means that condition (2) is not completely fulfilled. However, the last clause can be relaxed as it only exists to ensure that the incremental assumption during the computation of S t is met. The reason for this is because there are two ways to view the computation of S at any time step t: (1) by updating the previous state S t-1 , or (2) computing S t directly from input tokens x 1 , . . . , x n=t , which is analogous to the kernel trick, but in this case S is a combination of projected input tokens. The latter view can be used to relax condition (2), as it means S t does not completely depend on the previous state (S t-1 or Sn-1 ) like in conventional RNNs, but can also be computed directly from input tokens while still obeying incremental assumptions.\nFulfilling condition (3) requires that condition (2) holds for all preceding time steps. Formally, S i = f (S i-1 ) and S i = f ( S); 1 ≤ i ≤ t -1. This is satisfied by the fact that S i = S i-1 + φ(K i )V i and taking the perspective of S as a combination of projected input tokens for relaxation. Notice that equation ( 17) is causal and can be expressed as an RNN at time step t while equations ( 18) and ( 19) are acausal. This proof only holds for a single layer of LT due to how information flows between layers. Let us consider the computation of S for a multi-layer LT. At time step t, we compute S t n,l for position n = t in layer l using x l 1 , . . . , x l n , which are outputs of layer l -1. At the same time, these inputs for layer l are computed using S t n,l-1 from layer l -1. This means x l 1 , . . . , x l n-1 are functions of x l-1 n , which violates the incremental assumption for the input. Therefore, we will be unable to properly examine the effect of the current input on all past outputs if we employ a multi-layer LT." }, { "figure_ref": [], "heading": "Algorithm 1 TAPIR", "publication_ref": [], "table_ref": [], "text": "Require: Incremental processor ψ, reviser η, caches Γ h , Γ z , Γ p , controller ξ, policy π θ , input X, input buffer X buf , output buffer Y buf 1: Initialise: h 0 ← 0, x 1 ⇐ X, k1 ← 0, c1 ← 0, t ← 1 2: while t ≤ |X| do 3: h t ← ψ(h t-1 , x t ), ỹt ← f ψ (h t ), y t ← softmax(ỹ t ) 4: if Γ p = ∅ then 5: for i ← 1 to min (t -1, N ) do 6: γ p i ⇐ Γ p , e t i ← f ξ (γ p i , h t , kt-1 ) 7:\nend for 8:\ns t ← softmax(e t ), kt ← i s t i γ p i , ct ← i s t i c i+max (0,t-N -1) 9: end if 10: k t , c t ← ξ( kt , ct , x t ) 11: a t ← π θ (k t ), X buf ⇐ x t 12: if |Γ h | = N then 13: del γ h 1\nDiscard the first cache element when full. \nY buf ⇐ y t , z ← f z (ỹ t ), ϕ ← f φ (h t , z) 18: if |Γ z | = N and |Γ p | = N then 19: del γ z 1 , del γ p 1 20: end if 21: Γ z ⇐ z, Γ p ⇐ ϕ 22:\nelse if a t = REVISE then 23:\nỹη ≤t ← f η (η(X buf )), Y buf ← softmax(ỹ η ≤t ), Γ z ← ∅, Γ p ← ∅ 24:\nfor j ← max (1, t -N + 1) to t do 25:\nh j ⇐ Γ h , z ← f z (ỹ η j ), ϕ ← f φ (h j , z) 26: Γ z ⇐ z, Γ p ⇐ ϕ 27:\nend for 28:\nend if 29:\nx t+1 ⇐ X, t ← t + 1 30: end while Red labels are incorrect with respect to the final output. In the first example, how does it interpreted as an object name at t = {4, 5}, but is revised to a part of an album when TAPIR reads by. It still makes a mistake at the last step, as it edits the label for how from B-album to B-track when it is unnecessary. TAPIR initially labels this in rate this as B-object_select in the second example, which probably suits the available evidence at t = 2. When it encounters the UNK token, B-object_select is revised to O. " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "We thank the anonymous reviewers for their valuable and insightful comments and suggestions. This work is partially funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -Project ID 423217434 (Schlangen)." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "We do not see any immediate ethical issues arising from this work, beyond those inherent to NLP which are under discussion by the community. Disagreements are relatively rare and when they disagree, the range of edit overhead is hardly different compared to the case where both components fully agree with each other." } ]
Language is by its very nature incremental in how it is produced and processed. This property can be exploited by NLP systems to produce fast responses, which has been shown to be beneficial for real-time interactive applications. Recent neural network-based approaches for incremental processing mainly use RNNs or Transformers. RNNs are fast but monotonic (cannot correct earlier output, which can be necessary in incremental processing). Transformers, on the other hand, consume whole sequences, and hence are by nature non-incremental. A restart-incremental interface that repeatedly passes longer input prefixes can be used to obtain partial outputs, while providing the ability to revise. However, this method becomes costly as the sentence grows longer. In this work, we propose the Two-pass model for AdaPtIve Revision (TAPIR) and introduce a method to obtain an incremental supervision signal for learning an adaptive revision policy. Experimental results on sequence labelling show that our model has better incremental performance and faster inference speed compared to restart-incremental Transformers, while showing little degradation on full sequences.
TAPIR: Learning Adaptive Revision for Incremental Natural Language Understanding with a Two-Pass Model
[ { "figure_caption": "Figure 1 :1Figure1: Illustrative example of how a monotonic incremental POS-tagger would not recover from wrong hypotheses. A policy for adaptive revision, here parameterised by a controller, can enable reanalyses to be performed when necessary (here at time steps 3 and 7).", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Incremental evaluation of the models on test sets. Edit Overhead, Correction Time Score and Relative Correctness ∈ [0, 1].Lower is better for EO and CT, while higher is better for RC. TAPIR is better compared to the reference model for the non-delayed case (output prefixes are often correct and stable). The delay strategy of one lookahead token is beneficial.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Distribution of actions and output prefixes by dataset. Most of the actions are WRITE and most of the partial prefixes which are correct do not get unnecessarily revised. Incorrect prefixes cannot always be immediately detected, as expected. Part of the REVISE actions are dispensable, but in a much lower frequency than in the restart-incremental paradigm.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Examples of incremental inference (from SNIPS and Movie) for TAPIR-Trf. Edited labels are marked by a diamond symbol, with the immediate past output at the top right corner for right-frontier edits. Red labels are incorrect with respect to the final output.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: Effect of the probability threshold τ on incremental and non-incremental metrics, using TAPIR-Trf. Increasing τ leads to improvement of incremental metrics at the cost of non-incremental performance.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure7: Additional inference examples from SF-SNIPS obtained with TAPIR-Trf. Edited labels are marked by a diamond symbol, with the immediate past output at the top right corner for right-frontier edits. Red labels are incorrect with respect to the final output. In the first example, how does it interpreted as an object name at t = {4, 5}, but is revised to a part of an album when TAPIR reads by. It still makes a mistake at the last step, as it edits the label for how from B-album to B-track when it is unnecessary. TAPIR initially labels this in rate this as B-object_select in the second example, which probably suits the available evidence at t = 2. When it encounters the UNK token, B-object_select is revised to O.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": ", a natural option for neural network-based", "figure_data": "the alert citizens fear the suspect person1thedetwritedet2alertnounwritedet noun3citizensnounrevise!detadjnoun4fearverbwritedetadjnounverb5thedetwritedetadj nounverbdet6suspectnounwritedetadj nounverbdet noun7personnounrevise!detadj nounverbdetadjnountimeinputincremental outputcontroller(recomputed) output hypothesesIncremental POS tags from: https://demos.explosion.ai/displacy?text=the%20alert%20citizens%20fear%20the%20suspect%20person&model=en_co_web_sm&cpu=1&cph=0", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Distribution of generated actions (train+val).", "figure_data": "TasksWRITE REVISESNIPS 0.7770.223ARW0.8110.189Movie0.7650.235NER0.8950.105Chunk0.6870.313PoS0.7690.231EWT0.7120.288", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table2shows that TAPIR is considerably faster compared to the reference model in incremental settings, as it offers, on average, ∼4.5× speed-up in terms of sequences per second.8 ", "figure_data": "TasksRef.TAPIR-TrfTAPIR-LTSNIPS1.1034.958 (4.50×)8.983 (8.15×)ARW2.3398.734 (3.73×)5.959 (2.55×)Movie0.9273.520 (3.80×)3.432 (3.70×)NER0.6754.465 (6.62×)4.502 (6.67×)Chunk0.6882.714 (3.95×)1.912 (2.78×)PoS0.6724.111 (6.12×)7.400 (11.01×)EWT0.8193.659 (4.47×)3.122 (3.81×)Average 1.0324.594 (4.45×)5.044 (4.89×)", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Comparison of incremental inference speed on test sets. TAPIR is ∼4.5× faster compared to the reference model. All results are in sentences/sec.", "figure_data": "", "figure_id": "tab_4", "figure_label": "2", "figure_type": "table" }, { "figure_caption": ".05 88.57 90.47 89.45 85.95 88.07 87.28 ARW 95.63 93.35 95.17 95.15 92.84 93.65 94.50 Movie 83.98 82.85 83.26 82.95 81.40 83.16 82.21 NER 78.25 74.13 76.85 78.04 73.12 73.79 75.75 Chunk 88.35 86.85 87.48 87.52 85.03 86.43 85.79 ", "figure_data": "TAPIR-TrfTAPIR-LTTasks Ref.-D1D2-D1D2SNIPS 91PoS 92.28 91.32 91.35 91.49 90.90 90.83 90.65EWT 92.14 90.84 92.00 91.95 90.20 91.34 90.93", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Hyperparameter search space for the reference model and TAPIR. The reference model and the reviser share the same search space.", "figure_data": "", "figure_id": "tab_7", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Hyperparameters for our experiments. We use the same hyperparameters for the delayed variants.", "figure_data": "TasksDatasetPublicationLicenseDownloadableSlot fillingSNIPSCoucke et al. (2018)CC0link / preproc.Slot fillingAlarm, reminder, & weatherSchuster et al. (2019)CC BY-SAlinkSlot fillingMIT Movie, eng corpusLiu et al. (2013)-linkNER Chunking PoS taggingCoNLL-2003Tjong Kim Sang and De Meulder (2003)text: NIST research agreement; annotation: -linkPoS tagging Universal Dependencies, EWT Silveira et al. (2014)CC BY-SA 4.0link", "figure_id": "tab_10", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Details about each dataset.", "figure_data": "No. of Seq.Token SizeAvg. Seq. LengthTasksTrainValidTest Labels Vocab Size Train & ValidTestTrain & ValidTestSF-SNIPS13,0847007007211,765124,0846,3549.0029.077SF-ARW30,521 4,181 8,621284,215251,91562,5917.2597.260SF-Movie8,797978 2,443256,71099,49124,68610.17810.105NER-CoNLL14,041 3,249 3,452926,882254,97946,39414.74713.440Chunk-CoNLL 14,041 3,249 3,4522326,882254,97946,39414.74713.440PoS-CoNLL14,041 3,249 3,4524726,882254,97946,39414.74713.440PoS-UD-EWT 12,543 2,001 2,0771821,917232,74125,45616.00312.256", "figure_id": "tab_11", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Descriptive statistics of the datasets. The vocabulary size is computed from training and validation sets.", "figure_data": "TasksWRITEREVISETasksWRITEREVISESF-SNIPS0.7630.237SF-SNIPS0.7940.206SF-ARW0.8310.169SF-ARW0.8380.162SF-Movie0.7640.236SF-Movie0.7720.228NER-CoNLL0.9040.096NER-CoNLL0.9100.090Chunk-CoNLL0.8380.162Chunk-CoNLL0.7900.210PoS-CoNLL0.7020.298PoS-CoNLL0.8190.181PoS-UD-EWT0.7850.215PoS-UD-EWT0.7710.229(a) Training sets(b) Training and validation sets", "figure_id": "tab_12", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Mean of WRITE and REVISE action ratios per sentence for training sets and combination of training and validation sets. Most of the time, the mean of the WRITE action ratio is higher compared to the REVISE action ratio.", "figure_data": "Percentage (%) by REVISE RatioTasks0-0.2 0.2-0.4 0.4-0.6 0.6-0.8 0.8-1SF-SNIPS48.6530.3017.563.410.08SF-ARW63.5423.3011.581.550.03SF-Movie47.6034.3915.632.300.09NER-CoNLL79.6715.534.150.640.01Chunk-CoNLL 61.7327.2310.210.810.01PoS-CoNLL39.1724.6325.3310.370.51PoS-UD-EWT50.5833.2414.082.060.05(a) Training setsPercentage (%) by REVISE RatioTasks0-0.2 0.2-0.4 0.4-0.6 0.6-0.8 0.8-1SF-SNIPS54.8028.6613.992.520.03SF-ARW65.0122.8810.711.350.04SF-Movie49.2134.3614.541.850.04NER-CoNLL81.2614.753.610.350.03Chunk-CoNLL 53.2025.0018.633.130.03PoS-CoNLL59.3527.6411.091.840.08PoS-UD-EWT47.6132.8216.812.680.08(b) Training and validation sets", "figure_id": "tab_13", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Distribution of examples in each dataset by their REVISE action ratio for training sets and combination of training and validation sets. Most of the examples in the datasets have considerably low REVISE action ratio (< 0.6).", "figure_data": "TAPIR-TrfTAPIR-LTTasksRef. Model-D1D2-D1D2SF-SNIPS91.4288.26 91.10 90.59 88.12 87.80 88.75SF-ARW95.5594.94 94.96 95.17 93.90 94.63 94.60SF-Movie85.2084.69 84.90 84.30 84.25 84.36 84.33NER-CoNLL84.6980.95 84.52 84.68 82.38 82.90 82.52Chunk-CoNLL89.0288.19 88.86 88.69 85.76 86.92 87.19PoS-CoNLL93.0792.73 92.87 92.81 92.48 92.23 91.98PoS-UD-EWT91.8890.35 91.33 91.67 89.99 90.96 90.69", "figure_id": "tab_14", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Non-incremental performance of the models on validation sets (F1 for the first group, accuracy for the second group).", "figure_data": "TAPIR-TrfTAPIR-LTTasksRef. Model-D1D2-D1D2SF-SNIPS16.238.7 38.7 38.7 29.7 29.7 29.7SF-ARW4.413.6 13.6 13.6 30.7 30.7 30.7SF-Movie7.221.0 21.0 21.0 25.7 25.7 25.7NER-CoNLL16.744.7 44.7 44.7 43.7 43.7 43.7Chunk-CoNLL16.744.1 44.1 44.1 45.2 45.2 45.2PoS-CoNLL16.742.0 42.0 42.0 33.8 33.8 33.8PoS-UD-EWT12.534.7 34.7 34.7 38.9 38.9 38.9", "figure_id": "tab_15", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Number of parameters for each model, in millions.", "figure_data": "", "figure_id": "tab_16", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Overall distribution of actions and prefixes on test sets using TAPIR-Trf. W represents WRITE and R represents REVISE. C and I denote correct and incorrect output prefixes, respectively.", "figure_data": "ActionPrefix", "figure_id": "tab_17", "figure_label": "13", "figure_type": "table" } ]
Patrick Kahardipraja; Brielen Madureira; David Schlangen
[ { "authors": "Takuya Akiba; Shotaro Sano; Toshihiko Yanase; Takeru Ohta; Masanori Koyama", "journal": "Association for Computing Machinery", "ref_id": "b0", "title": "Optuna: A next-generation hyperparameter optimization framework", "year": "2019" }, { "authors": "Naveen Arivazhagan; Colin Cherry; Wolfgang Macherey; Chung-Cheng Chiu; Semih Yavuz; Ruoming Pang; Wei Li; Colin Raffel", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "Monotonic infinite lookback attention for simultaneous machine translation", "year": "2019" }, { "authors": "Naveen Arivazhagan; Colin Cherry; Wolfgang Macherey; George Foster", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "Re-translation versus streaming for simultaneous translation", "year": "2020" }, { "authors": "Michaela Atterer; Timo Baumann; David Schlangen", "journal": "", "ref_id": "b3", "title": "No sooner said than done? testing incrementality of semantic interpretations of spontaneous speech", "year": "2009-09-06" }, { "authors": "Timo Baumann; Okko Buß; David Schlangen", "journal": "Dialogue & Discourse", "ref_id": "b4", "title": "Evaluation and optimisation of incremental processors", "year": "2011" }, { "authors": "Niels Beuck; Arne Köhn; Wolfgang Menzel", "journal": "Northern European Association for Language Technology (NEALT", "ref_id": "b5", "title": "Decision strategies for incremental POS tagging", "year": "2011" }, { "authors": "Angelica Chen; Vicky Zayats; Daniel Walker; Dirk Padfield", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Teaching BERT to wait: Balancing accuracy and latency for streaming disfluency detection", "year": "2022" }, { "authors": "T Q Ricky; Yulia Chen; Jesse Rubanova; David K Bettencourt; Duvenaud", "journal": "", "ref_id": "b7", "title": "Neural ordinary differential equations", "year": "2018" }, { "authors": "Jianpeng Cheng; Li Dong; Mirella Lapata", "journal": "Association for Computational Linguistics", "ref_id": "b8", "title": "Long short-term memory-networks for machine reading", "year": "2016" }, { "authors": "Chung-Cheng Chiu; Colin Raffel", "journal": "", "ref_id": "b9", "title": "Monotonic chunkwise attention", "year": "2018-04-30" }, { "authors": "Djork-Arné Clevert; Thomas Unterthiner; Sepp Hochreiter", "journal": "", "ref_id": "b10", "title": "Fast and accurate deep network learning by exponential linear units (elus)", "year": "2016-05-02" }, { "authors": "Alice Coucke; Alaa Saade; Adrien Ball; Théodore Bluche; Alexandre Caulier; David Leroy; Clément Doumouro; Thibault Gisselbrecht; Francesco Caltagirone; Thibaut Lavril; Maël Primet; Joseph Dureau", "journal": "", "ref_id": "b11", "title": "Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces", "year": "2018" }, { "authors": "E Haihong; Peiqing Niu; Zhongfu Chen; Meina Song", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "A novel bi-directional interrelated model for joint intent detection and slot filling", "year": "2019" }, { "authors": "Lyn Frazier; Keith Rayner", "journal": "Cognitive Psychology", "ref_id": "b13", "title": "Making and correcting errors during sentence comprehension: Eye movements in the analysis of structurally ambiguous sentences", "year": "1982" }, { "authors": "Xavier Glorot; Yoshua Bengio", "journal": "PMLR", "ref_id": "b14", "title": "Understanding the difficulty of training deep feedforward neural networks", "year": "2010" }, { "authors": "Edouard Grave; Armand Joulin; Nicolas Usunier", "journal": "", "ref_id": "b15", "title": "Improving neural language models with a continuous cache", "year": "2017-04-24" }, { "authors": "Alex Graves; Greg Wayne; Ivo Danihelka", "journal": "", "ref_id": "b16", "title": "Neural turing machines", "year": "2014" }, { "authors": "Alvin Grissom; I I ; He He; Jordan Boyd-Graber; John Morgan; Hal Daumé; Iii ", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Don't until the final verb wait: Reinforcement learning for simultaneous machine translation", "year": "2014" }, { "authors": "Jiatao Gu; Graham Neubig; Kyunghyun Cho; O K Victor; Li", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Learning to translate in real-time with neural machine translation", "year": "2017" }, { "authors": "Shoutao Guo; Shaolei Zhang; Yang Feng", "journal": "Association for Computational Linguistics", "ref_id": "b19", "title": "Turning fixed to adaptive: Integrating postevaluation into simultaneous machine translation", "year": "2022" }, { "authors": "Michael Hahn", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b20", "title": "Theoretical limitations of selfattention in neural sequence models", "year": "2020" }, { "authors": "Sepp Hochreiter; Jürgen Schmidhuber", "journal": "Neural Comput", "ref_id": "b21", "title": "Long short-term memory", "year": "1997" }, { "authors": "Ke Hu; Tara N Sainath; Ruoming Pang; Rohit Prabhavalkar", "journal": "IEEE", "ref_id": "b22", "title": "Deliberation model based twopass end-to-end speech recognition", "year": "2020-05-04" }, { "authors": "Patrick Kahardipraja; Brielen Madureira; David Schlangen", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Towards incremental transformers: An empirical analysis of transformer models for incremental NLU", "year": "2021" }, { "authors": "Yuki Kamide", "journal": "Language and Linguistics Compass", "ref_id": "b24", "title": "Anticipatory processes in sentence processing", "year": "2008" }, { "authors": "Angelos Katharopoulos; Apoorv Vyas; Nikolaos Pappas; François Fleuret", "journal": "", "ref_id": "b25", "title": "Transformers are RNNs: Fast autoregressive transformers with linear attention", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b26", "title": "", "year": "" }, { "authors": "Ayush Kaushal; Aditya Gupta; Shyam Upadhyay; Manaal Faruqui", "journal": "Association for Computational Linguistics", "ref_id": "b27", "title": "Efficient encoders for streaming sequence tagging", "year": "2023" }, { "authors": "Gerard Kempen; Edward Hoenkamp", "journal": "", "ref_id": "b28", "title": "Incremental sentence generation: Implications for the structure of a syntactic processor", "year": "1982" }, { "authors": "Gerard Kempen; Edward Hoenkamp", "journal": "Cognitive Science", "ref_id": "b29", "title": "An incremental procedural grammar for sentence formulation", "year": "1987" }, { "authors": "Arne Köhn", "journal": "Association for Computational Linguistics", "ref_id": "b30", "title": "Incremental natural language processing: Challenges, strategies, and evaluation", "year": "2018" }, { "authors": "J M Willem; Levelt", "journal": "The MIT Press", "ref_id": "b31", "title": "Speaking: From Intention to Articulation", "year": "1993" }, { "authors": "Wei Li; James Qin; Chung-Cheng Chiu; Ruoming Pang; Yanzhang He", "journal": "ISCA", "ref_id": "b32", "title": "Parallel rescoring with transformer for streaming on-device speech recognition", "year": "2020-10-29" }, { "authors": "Zekang Li; Cheng Niu; Fandong Meng; Yang Feng; Qian Li; Jie Zhou", "journal": "", "ref_id": "b33", "title": "Incremental transformer with deliberation decoder for document grounded conversations", "year": "2019" }, { "authors": "Fei Liu; Luke Zettlemoyer; Jacob Eisenstein", "journal": "Association for Computational Linguistics", "ref_id": "b34", "title": "The referential reader: A recurrent entity network for anaphora resolution", "year": "2019" }, { "authors": "Jingjing Liu; Panupong Pasupat; Scott Cyphers; James R Glass", "journal": "IEEE", "ref_id": "b35", "title": "Asgard: A portable architecture for multilingual dialogue systems", "year": "2013-05-26" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b36", "title": "Decoupled weight decay regularization", "year": "2019-05-06" }, { "authors": "Thang Luong; Hieu Pham; Christopher D Manning", "journal": "Association for Computational Linguistics", "ref_id": "b37", "title": "Effective approaches to attention-based neural machine translation", "year": "2015" }, { "authors": "Mingbo Ma; Liang Huang; Hao Xiong; Renjie Zheng; Kaibo Liu; Baigong Zheng; Chuanqiang Zhang; Zhongjun He; Hairong Liu; Xing Li; Hua Wu; Haifeng Wang", "journal": "Association for Computational Linguistics", "ref_id": "b38", "title": "STACL: Simultaneous translation with implicit anticipation and controllable latency using prefix-to-prefix framework", "year": "2019" }, { "authors": "Xutai Ma; Juan Miguel Pino; James Cross; Liezl Puzon; Jiatao Gu", "journal": "", "ref_id": "b39", "title": "Monotonic multihead attention", "year": "2020-04-26" }, { "authors": "Matthew Mackay; Paul Vicol; Jimmy Ba; Roger B Grosse", "journal": "", "ref_id": "b40", "title": "Reversible recurrent neural networks", "year": "2018" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b41", "title": "", "year": "" }, { "authors": "Brielen Madureira; David Schlangen", "journal": "Association for Computational Linguistics", "ref_id": "b42", "title": "Incremental processing in the age of non-incremental encoders: An empirical assessment of bidirectional models for incremental NLU", "year": "2020" }, { "authors": "Jan Niehues; Son Thai; Eunah Nguyen; Thanh-Le Cho; Kevin Ha; Markus Kilgour; Matthias Müller; Sebastian Sperber; Alex Stüker; Waibel", "journal": "ISCA", "ref_id": "b43", "title": "Dynamic transcription for low-latency speech translation", "year": "2016-09-08" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher Manning", "journal": "Association for Computational Linguistics", "ref_id": "b44", "title": "GloVe: Global vectors for word representation", "year": "2014" }, { "authors": "Colin Raffel; Minh-Thang Luong; Peter J Liu; Ron J Weiss; Douglas Eck", "journal": "", "ref_id": "b45", "title": "Online and lineartime attention by enforcing monotonic alignments", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b46", "title": "", "year": "" }, { "authors": "Morteza Rohanian; Julian Hough", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "Best of both worlds: Making high accuracy non-incremental transformer-based disfluency detection incremental", "year": "2021" }, { "authors": "David E Rumelhart; Geoffrey E Hinton; Ronald J Williams", "journal": "Nature", "ref_id": "b48", "title": "Learning representations by backpropagating errors", "year": "1986" }, { "authors": "Tara N Sainath; Ruoming Pang; David Rybach; Yanzhang He; Rohit Prabhavalkar; Wei Li; Mirkó Visontai; Qiao Liang; Trevor Strohman; Yonghui Wu; Ian Mcgraw; Chung-Cheng Chiu", "journal": "ISCA", "ref_id": "b49", "title": "Twopass end-to-end speech recognition", "year": "2019-09" }, { "authors": "David Schlangen; Gabriel Skantze", "journal": "Association for Computational Linguistics", "ref_id": "b50", "title": "A general, abstract model of incremental dialogue processing", "year": "2009" }, { "authors": "David Schlangen; Gabriel Skantze", "journal": "Dialogue & Discourse", "ref_id": "b51", "title": "A general, abstract model of incremental dialogue processing", "year": "2011" }, { "authors": "Sebastian Schuster; Sonal Gupta; Rushin Shah; Mike Lewis", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "Cross-lingual transfer learning for multilingual task oriented dialog", "year": "2019" }, { "authors": "Natalia Silveira; Timothy Dozat; Marie-Catherine De Marneffe; Samuel Bowman; Miriam Connor; John Bauer; Chris Manning", "journal": "European Language Resources Association (ELRA", "ref_id": "b53", "title": "A gold standard dependency corpus for English", "year": "2014" }, { "authors": "Yiping Song; Cheng-Te Li; Jian-Yun Nie; Ming Zhang; Dongyan Zhao; Rui Yan", "journal": "International Joint Conferences on Artificial Intelligence Organization", "ref_id": "b54", "title": "An ensemble of retrieval-based and generation-based humancomputer conversation systems", "year": "2018" }, { "authors": "Sainbayar Sukhbaatar; Arthur Szlam; Jason Weston; Rob Fergus", "journal": "", "ref_id": "b55", "title": "End-to-end memory networks", "year": "2015" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b56", "title": "", "year": "" }, { "authors": "Xiaobing Sun; Wei Lu", "journal": "Association for Computational Linguistics", "ref_id": "b57", "title": "Implicit n-grams induced by recurrence", "year": "2022" }, { "authors": "Erik F Tjong; Kim Sang; Fien De; Meulder ", "journal": "", "ref_id": "b58", "title": "Introduction to the CoNLL-2003 shared task: Language-independent named entity recognition", "year": "2003" }, { "authors": "Ke Tran; Arianna Bisazza; Christof Monz", "journal": "Association for Computational Linguistics", "ref_id": "b59", "title": "The importance of being recurrent for modeling hierarchical structure", "year": "2018" }, { "authors": "Javier Turek; Shailee Jain; Vy Vo; Mihai Capotȃ; Alexander Huth; Theodore Willke", "journal": "", "ref_id": "b60", "title": "Approximating stacked and bidirectional recurrent architectures with the delayed recurrent neural network", "year": "2020" }, { "authors": " Pmlr", "journal": "", "ref_id": "b61", "title": "", "year": "" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Ł Ukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b62", "title": "Attention is all you need", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b63", "title": "", "year": "" }, { "authors": "Weiran Wang; Ke Hu; Tara N Sainath", "journal": "IEEE", "ref_id": "b64", "title": "Deliberation of streaming rnn-transducer by nonautoregressive decoding", "year": "2022-05-27" }, { "authors": "Jason Weston; Sumit Chopra; Antoine Bordes", "journal": "", "ref_id": "b65", "title": "Memory networks", "year": "2015-05-07" }, { "authors": "Jason Weston; Emily Dinan; Alexander Miller", "journal": "Association for Computational Linguistics", "ref_id": "b66", "title": "Retrieve and refine: Improved sequence generation models for dialogue", "year": "2018" }, { "authors": "Yingce Xia; Fei Tian; Lijun Wu; Jianxin Lin; Tao Qin; Nenghai Yu; Tie-Yan Liu", "journal": "", "ref_id": "b67", "title": "Deliberation networks: Sequence generation beyond one-pass decoding", "year": "2017" }, { "authors": "Curran Associates; Inc ", "journal": "", "ref_id": "b68", "title": "", "year": "" }, { "authors": "Baigong Zheng; Renjie Zheng; Mingbo Ma; Liang Huang", "journal": "Association for Computational Linguistics", "ref_id": "b69", "title": "Simpler and faster learning of adaptive policies for simultaneous translation", "year": "2019" }, { "authors": "Renjie Zheng; Mingbo Ma; Baigong Zheng; Kaibo Liu; Liang Huang", "journal": "Association for Computational Linguistics", "ref_id": "b70", "title": "Opportunistic decoding with timely correction for simultaneous translation", "year": "2020" }, { "authors": "Lukáš Žilka; Filip ", "journal": "Cham. Springer International Publishing", "ref_id": "b71", "title": "Lectrack: Incremental dialog state tracking with long short-term memory networks", "year": "2015" }, { "authors": "", "journal": "", "ref_id": "b72", "title": "I-track I-album I-album B-album I-album O B-artist I-album I-album B-track I", "year": "" } ]
[ { "formula_coordinates": [ 4, 108.57, 327.07, 180.57, 27.9 ], "formula_id": "formula_0", "formula_text": "z = tanh(W ỹ ỹ + b z ) (2) ϕ = tanh(W in h + W out z + b ϕ ) (3)" }, { "formula_coordinates": [ 4, 100.14, 590.14, 184.75, 15.86 ], "formula_id": "formula_1", "formula_text": "U = W c γ p i + W h h t + W k kt-1 + b u (4" }, { "formula_coordinates": [ 4, 284.89, 594.1, 4.24, 9.46 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 4, 100.11, 609.34, 189.03, 14.19 ], "formula_id": "formula_3", "formula_text": "s t i = softmax(v tanh(U ))(5)" }, { "formula_coordinates": [ 4, 109.04, 737.77, 180.1, 33.71 ], "formula_id": "formula_4", "formula_text": "kt ct = N i=1 s t i • γ p i c i+max (0,t-N -1)(6)" }, { "formula_coordinates": [ 4, 353.89, 161.29, 170.52, 77.14 ], "formula_id": "formula_5", "formula_text": "  i t f t o t ĉt   =   σ σ σ tanh   W • [ kt , x t ] (7) c t = f t ct + i t ĉt (8) k t = o t tanh(c t )(9)" }, { "formula_coordinates": [ 4, 320.96, 285.63, 203.45, 46.33 ], "formula_id": "formula_6", "formula_text": "π θ (a t |a <t , x ≤t , y <t ) = σ(θ k t + b k ) (10) a t = REVISE, if σ(θ k t + b k ) ≥ τ WRITE, otherwise(11)" }, { "formula_coordinates": [ 5, 105.52, 444.74, 183.61, 12.42 ], "formula_id": "formula_7", "formula_text": "L = CE(y gold , y) + BCE(a LT , a) (12)" }, { "formula_coordinates": [ 15, 348.97, 573.73, 175.44, 26.29 ], "formula_id": "formula_8", "formula_text": "Att i (Q, K, V ) = φ(Q i ) S p φ(Q i ) Z p(13)" }, { "formula_coordinates": [ 15, 323.25, 603.09, 201.16, 34.29 ], "formula_id": "formula_9", "formula_text": "S p = p j=1 φ(K j )V j ; Z p = p j=1 φ(K j )(14)" }, { "formula_coordinates": [ 15, 363.73, 729.23, 160.68, 27.9 ], "formula_id": "formula_10", "formula_text": "S i = S i-1 + φ(K i )V i (15) Z i = Z i-1 + φ(K i )(16)" }, { "formula_coordinates": [ 16, 329.89, 447.46, 194.52, 88.38 ], "formula_id": "formula_11", "formula_text": "Att t n (Q, K, V ) = φ(Q n ) S n φ(Q n ) Z n (17) Att t n-1 (Q, K, V ) = φ(Q n-1 ) S n φ(Q n-1 ) Z n (18) Att t n-2 (Q, K, V ) = φ(Q n-2 ) S n φ(Q n-2 ) Z n(19)" }, { "formula_coordinates": [ 16, 324.48, 659.32, 195.39, 26.23 ], "formula_id": "formula_12", "formula_text": "Att t-1 n-1 (Q, K, V ) = φ(Q n-1 ) S n-1 φ(Q n-1 ) Z n-1 (20" }, { "formula_coordinates": [ 16, 519.87, 667.78, 4.54, 9.46 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 18, 70.87, 213.47, 453.54, 119.73 ], "formula_id": "formula_14", "formula_text": "Require: Incremental processor ψ, reviser η, caches Γ h , Γ z , Γ p , controller ξ, policy π θ , input X, input buffer X buf , output buffer Y buf 1: Initialise: h 0 ← 0, x 1 ⇐ X, k1 ← 0, c1 ← 0, t ← 1 2: while t ≤ |X| do 3: h t ← ψ(h t-1 , x t ), ỹt ← f ψ (h t ), y t ← softmax(ỹ t ) 4: if Γ p = ∅ then 5: for i ← 1 to min (t -1, N ) do 6: γ p i ⇐ Γ p , e t i ← f ξ (γ p i , h t , kt-1 ) 7:" }, { "formula_coordinates": [ 18, 72.5, 333.75, 324.27, 83.28 ], "formula_id": "formula_15", "formula_text": "s t ← softmax(e t ), kt ← i s t i γ p i , ct ← i s t i c i+max (0,t-N -1) 9: end if 10: k t , c t ← ξ( kt , ct , x t ) 11: a t ← π θ (k t ), X buf ⇐ x t 12: if |Γ h | = N then 13: del γ h 1" }, { "formula_coordinates": [ 18, 72.5, 458.57, 220.46, 77.86 ], "formula_id": "formula_16", "formula_text": "Y buf ⇐ y t , z ← f z (ỹ t ), ϕ ← f φ (h t , z) 18: if |Γ z | = N and |Γ p | = N then 19: del γ z 1 , del γ p 1 20: end if 21: Γ z ⇐ z, Γ p ⇐ ϕ 22:" }, { "formula_coordinates": [ 18, 72.5, 537.3, 319.04, 26.24 ], "formula_id": "formula_17", "formula_text": "ỹη ≤t ← f η (η(X buf )), Y buf ← softmax(ỹ η ≤t ), Γ z ← ∅, Γ p ← ∅ 24:" }, { "formula_coordinates": [ 18, 72.5, 564.4, 233.18, 39.79 ], "formula_id": "formula_18", "formula_text": "h j ⇐ Γ h , z ← f z (ỹ η j ), ϕ ← f φ (h j , z) 26: Γ z ⇐ z, Γ p ⇐ ϕ 27:" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0" ], "table_ref": [], "text": "In full-text search applications, the primary goal is to effectively retrieve and match relevant documents based on user queries. By focusing on finding the first form, or the lemma, of a word, the search process can be streamlined and optimized. The lemma serves as a normalized representation of a word's different inflected forms, allowing for a more accurate comparison between user queries and document content. This approach reduces the complexity and computational overhead associated with full morphological analysis, which includes extracting all possible forms of a word along with their grammatical properties. By prioritizing lemma retrieval, full-text search engines can achieve faster response times and more precise results, while minimizing the resources required for processing large volumes of text data.\nConsequently, building upon the foundation of pymorphy [1], the golemma library was developed to address the challenge of efficiently identifying the first form, or lemma, of words in the Russian language." }, { "figure_ref": [], "heading": "Challenges with Russian Language", "publication_ref": [], "table_ref": [], "text": "Lemmatization and stemming both reduce words to their base forms but operate differently. Stemming, a simple rule-based process, removes suffixes without considering context, often yielding invalid words. Lemmatization, conversely, uses a vocabulary and morphological analysis to derive the base form, or \"lemma,\" considering context and generating valid words.\nStemming is designed for English and other Western languages with simpler inflectional structures, whereas Russian's complex inflectional structure poses challenges for stemming algorithms. Russian words can have multiple inflections, stems, and can form compound words, making rule-based systems less effective.\nLemmatization is generally more effective for languages with complex inflectional structures, such as Russian. It employs dictionaries and morphological rules to determine a word's base form, considering grammatical context and accounting for multiple stems and compound words. Lemmatization returns valid dictionary words, making it useful for natural language processing tasks like text classification, information retrieval, and machine translation. Its flexibility in handling various grammatical forms and tenses makes it suitable for tasks like text generation and summarization.\nIn summary, lemmatization is a sophisticated, accurate approach suitable for handling Russian's complexity, yielding valid words for a range of NLP tasks." }, { "figure_ref": [], "heading": "Definition of Paradigm", "publication_ref": [], "table_ref": [], "text": "In the context of the pymorphy2 library, a paradigm refers to a collection of inflected forms of a word that possess the same grammatical properties and share a common lemma. A paradigm is characterized by a range of grammatical categories, such as number, tense, gender, and others, and may encompass multiple forms of a word for each category.\nFor instance, the paradigm for the Russian word \"бежать\" (to run) would comprise forms like \"бегу\" (I run), \"бежишь\" (you run), \"бежит\" (he/she/it runs), \"бежим\" (we run), \"бежите\" (you run), and \"бежат\" (they run), as well as all potential forms of the word in various tenses and aspects.\nThe pymorphy2 library utilizes the OpenCorpora project's morphological dictionary as its data source, which supplies comprehensive information on the grammatical properties and inflectional forms of Russian words.\nWhen employing pymorphy2, the library loads the morphological dictionary and establishes a set of paradigms for each word. These paradigms are then utilized to generate the inflected forms of a word, in addition to determining the lemma and grammatical properties of a given word form. Furthermore, the paradigms enable the generation of all possible forms of a word, which proves beneficial for tasks such as text generation, summarization, and question answering.\nIn golemma, the concept of a paradigm found in pymorphy has been reevaluated and simplified. We consider the following structure as a paradigm: The reason for this reevaluation and simplification of the concept of a paradigm in golemma, compared to pymorphy2, is to increase the efficiency and speed of retrieving the first form of a word. This is a critical aspect in fulltext search, where the main goal is not necessarily to generate all possible forms of a word or to determine its grammatical properties, but rather to quickly and accurately identify the base form, or lemma, of a given word.\ntype Paradigm struct { C u t P r e f i x i nt C u t S u f f i x i nt\nThe simplified paradigm structure in golemma reduces the computational overhead associated with handling morphological data and focuses on the essential elements needed for lemmatization. The use of 'CutPrefix', 'CutSuffix', 'AddPrefix', and 'AddSuffix' provides a straightforward way to transform a word form back to its lemma, which can be done with minimal processing. This leads to a more responsive and efficient full-text search process, especially when dealing with large volumes of text." }, { "figure_ref": [], "heading": "Retrieving Paradigms", "publication_ref": [], "table_ref": [], "text": "OpenCorpora is an invaluable resource for paradigm retrieval due to its comprehensive and accurate morphological dictionary for the Russian language. The project contains a vast collection of annotated linguistic data, including grammatical properties and inflectional forms for Russian words. By utilizing Open-Corpora, developers can access a reliable source of information to create efficient and precise algorithms for paradigm extraction. This, in turn, enhances the effectiveness of natural language processing tasks, such as lemmatization, which greatly benefits from accurate paradigm retrieval.\nThe SAX (Simple API for XML)1 parser provides numerous advantages. As an event-driven parser, it does not store the entire XML document in memory. Instead, it reads and processes the document sequentially, making it highly memory-efficient, particularly when handling large XML files like those found in OpenCorpora. Owing to its event-driven nature, the SAX parser is generally faster than other parsing methods, such as DOM (Document Object Model), which loads the entire XML document into memory prior to processing. This speed is especially beneficial when parsing large datasets like OpenCorpora, as it can save significant time and resources.\nThe <lemmata> section in OpenCorpora's dictionary XML2 contains information about the lemmas, or base forms, of words in the Russian language. Each entry in the <lemmata> section represents a lemma and is accompanied by its grammatical properties, such as part of speech, gender, case, and number. Additionally, the section provides details about the inflected forms of each lemma, which are essential for understanding the different ways a word can appear in a text.\nThe dictionary format we get after parsing is a Python dictionary where each key is a unique integer identifier u, and the corresponding value is a tuple containing two elements:\n• The first element of the tuple is the normal form n of the word.\n• The second element of the tuple is a set of inflected forms i k of the word.\nu → (n, {i 0 , i 1 , . . .})\nThis dictionary is a source for paradigm retrieval algorithm. In simple words, the paradigm function takes two inputs, a normal form and an inflected form of a word, and calculates their Longest Common Substring (LCSS). Then, it extracts the prefixes and suffixes of both forms by removing the LCSS. Finally, the function returns a tuple containing the lengths of the inflected form's prefix and suffix, as well as the normal form's prefix and suffix. This tuple essentially represents the paradigm that connects the normal form and the inflected form.\nThe goal is to build two Python dictionaries (p k refers to the identifier of a particular paradigm):\n• (cut_prefix, cut_suffix, add_prefix, add_suffix) → p k • murmur3(n) → {(i 0 , p 0 ), (i 1 , p 1 ), . . .}\nMurmurHash33 is a non-cryptographic hash function that is popular for its speed and uniform distribution of hash values which make it a good choice for full-text search applications.\nAs of May 2023, we've successfully extracted 2488 unique paradigms from a grand total of 391,842 lemmas in the OpenCorpora dataset" }, { "figure_ref": [], "heading": "Building Dictionary", "publication_ref": [], "table_ref": [], "text": "Building dictionary is done in two main steps:\n• Saving the Paradigms: The function first retrieves the paradigms from the retriever object, rearranges them as pairs of (value, key), sorts them based on the value and then writes them into the file. The paradigms are saved as a map with pairs using the MessagePack (msgpack) packer. • Preparing and Writing the Dictionary: The dictionary is structured as a map where the key is the hash of the word's normal form (calculated using MurmurHash3), and the value is a list of tuples, each containing a word form and its paradigm ID. This structure is reversed to a form where the key is the word form, and the value is a tuple containing two lists: one with the hashes and one with the paradigm IDs. Then, the function sorts the items based on the word form, and groups together entries with the same word form into a single entry. Finally, these items are written to the file as a map with pairs using the MessagePack packer." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "The file format, therefore, consists of two main sections: The first is a map of paradigms, and the second is a map of words, each associated with a list of hashes and a list of paradigm IDs.\nThe MessagePack binary format 4 is used for efficient storage and quick retrieval. The generated file serves as a vital tool for efficient lemmatization in full-text search, mapping different forms of words to their first form. Its binary structure, combined with the use of MurmurHash3, allows for rapid retrieval and accurate results, thus enhancing the performance of full-text searches.\nWe've developed a compact and efficient way to store the entirety of a language dictionary, optimizing it specifically for the retrieval of word's first forms. This has been achieved by combining advanced data structures, hashing techniques, and compression methods. The resulting system offers a significantly reduced storage footprint, along with improved retrieval speeds. This makes it an ideal solution for enhancing the performance of full-text searches, where rapid and accurate lemmatization is key." } ]
Advancing Full-Text Search Lemmatization Techniques with Paradigm Retrieval from OpenCorpora
[ { "figure_caption": "AddPr efix s tri ng AddSuffix s tri ng } The first form of a word is obtained by applying the paradigm to it. Applying a paradigm entails removing CutP ref ix characters from the left, CutSuf f ix characters from the right, and then concatenating AddP ref ix on the left and AddSuf f ix on the right.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "1 ,1, ть ) --------→ рубить.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" } ]
Dmitriy Kalugin-Balashov
[ { "authors": "Mikhail Korobov", "journal": "Springer International Publishing", "ref_id": "b0", "title": "Morphological Analyzer and Generator for Russian and Ukrainian Languages", "year": "2015" } ]
[ { "formula_coordinates": [ 2, 240.24, 550.63, 130.62, 33.86 ], "formula_id": "formula_0", "formula_text": "type Paradigm struct { C u t P r e f i x i nt C u t S u f f i x i nt" }, { "formula_coordinates": [ 4, 262.8, 211.29, 85.7, 10.65 ], "formula_id": "formula_1", "formula_text": "u → (n, {i 0 , i 1 , . . .})" }, { "formula_coordinates": [ 4, 148.68, 353.35, 241.83, 22.67 ], "formula_id": "formula_2", "formula_text": "• (cut_prefix, cut_suffix, add_prefix, add_suffix) → p k • murmur3(n) → {(i 0 , p 0 ), (i 1 , p 1 ), . . .}" } ]
2023-05-21
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b19", "b17", "b18", "b0", "b0", "b1" ], "table_ref": [], "text": "The field of computer vision has seen significant advancements in recent years, particularly in the area of generative AI. In the domain of image generation, Stable Diffusion has revolutionized content creation by providing open software to generate arbitrary high-fidelity RGB images from text prompts. This work builds on top of Stable Diffusion [20] v1.4 and proposes a Latent Diffusion Model for 3D (LDM3D). Unlike the original model, LDM3D is capable of generating both image and depth map data from a given text prompt as can be seen in Figure 1. It allows users to generate complete RGBD representations of text prompts, bringing them to life in vivid and immersive 360°v iews.\nOur LDM3D model was fine-tuned on a dataset of tuples containing an RGB image, depth map and caption. This dataset was constructed from a subset of the LAION-400M dataset, a large-scale image-caption dataset that contains over 400 million image-caption pairs. The depth maps used in fine-tuning were generated by the DPT-Large depth estimation model [18,19], which provides highly accurate relative depth estimates for each pixel in an image. The use of accurate depth maps was crucial in ensuring that we are able to generate 360°views that are realistic and immersive, allowing users to experience their text prompts in vivid detail.\nTo showcase the potential of LDM3D, we have developed DepthFusion, an application that uses the generated 2D RGB images and depth maps to compute a 360°projection using TouchDesigner [1]. TouchDesigner is a versatile platform that enables the creation of immersive and interactive multimedia experiences. Our application harnesses the power of TouchDesigner to create unique and engaging 360°views that bring text prompts to life in vivid detail. DepthFusion has the potential to revolutionize the way we experience digital content. Whether it's a description of a tranquil forest, a bustling cityscape, or a futuristic sci-fi Inference Figure 1. LDM3D overview. Illustrating the training pipeline: the 16-bit grayscale depth maps are packed into 3-channel RGB-like depth images, which are then concatenated with the RGB images along the channel dimension. This concatenated RGBD input is passed through the modified KL-AE and mapped to the latent space. Noise is added to the latent representation, which is then iteratively denoised by the U-Net model. The text prompt is encoded using a frozen CLIP-text encoder and mapped to various layers of the U-Net using crossattention. The denoised output from the latent space is fed into the KL-decoder and mapped back to pixel space as a 6-channel RGBD output. Finally, the output is separated into an RGB image and a 16-bit grayscale depth map. Blue frame: text-to-image inference pipeline. Initiating from a Gaussian distributed noise sample in the 64x64x4-dimensional latent space. Given a text prompt, this pipeline generates an RGB image and its corresponding depth map.\nworld, DepthFusion can generate immersive and engaging 360°views that allow users to experience their text prompts in a way that was previously impossible. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design.\nIn summary, our contributions are threefold. (1) We propose LDM3D, a novel diffusion model that outputs RGBD images (RGB images with corresponding depth maps) given a text prompt. (2) We develop DepthFusion, an application to create immersive 360°-view experiences based on RGBD images generated with LDM3D.\n(3) Through extensive experiments, we validate the quality of our generated RGBD images and 360°-view immersive videos." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b10", "b11", "b13", "b21", "b27", "b2", "b4", "b18", "b29", "b19", "b22", "b18", "b7" ], "table_ref": [], "text": "Monocular depth estimation is the task of estimating depth values for each pixel of a single given RGB image. Recent work has shown great performance in depth estimation using deep learning models based on convolutional neural networks [11,12,14,22,28,29]. Later, attentionbased Transformer models were adopted to overcome the issue of a limited receptive field in CNNs, allowing the model to consider global contexts when predicting depth values [3,5,19,30]. Most recently diffusion models have also been applied to depth estimation to leverage the revolutionary generation capabilities of such methods.\nDiffusion models have demonstrated amazing capabilities in generating highly detailed images based on an input prompt or condition [16,20,23]. The use of depth estimates has previously been used in diffusion models as an additional condition to perform depth-to-image generation [31]. Later, [24] and [6] showed that monocular depth estimation can also be modeled as a denoising diffusion process through the use of images as an input condition. In this work we propose a diffusion model that simultaneously generates an RGB image and its corresponding depth map given a text prompt as input. While our proposed model may be functionally comparable to an image generation and depth estimation model in cascade, there are several differences, challenges, and benefits of our proposed combined model. An adequate monocular depth estimation model requires large and diverse data [15,19], however, as there is no depth ground truth available for generated images it is hard for offthe-shelf depth estimation models to adapt to the outputs of the diffusion model. Through joint training, the generation of depth is much more infused with the image generation process allowing the diffusion model to generate more detailed and accurate depth values. Our proposed model also differs from the standard monocular depth estimation task as the reference images are now novel images that are also generated by the model. A similar task of generating multiple images simultaneously can be linked to video generation using diffusion models [8,9,26]. Video diffusion models mostly build on [9] which proposed a 3D U-Net to jointly model a fixed number of continuous frame images which are then used to compose a video. However, since we only require two outputs (depth and RGB) which do not necessarily require the same spatial and temporal dependencies as videos, we utilize a different approach in our model." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "This section describes the LDM3D model's methodology, training process, and distinct characteristics that facilitate concurrent RGB image and depth map creation, as well as immersive 360-degree view generation based on LDM3D output." }, { "figure_ref": [], "heading": "LDM-3D", "publication_ref": [ "b19", "b6", "b20", "b19" ], "table_ref": [], "text": "3.1.1 Model Architecture LDM3D is a 1.6 billion parameter KL-regularized diffusion model, adapted from Stable Diffusion [20] with minor modifications, allowing it to generate images and depth maps simultaneously from a text prompt, see Fig. 1.\nThe KL-autoencoder used in our model is a variational autoencoder (VAE) architecture based on [7], which incorporates a KL divergence loss term. To adapt this model for our specific needs, we modified the first and last Conv2d layers of the KL-autoencoder. These adjustments allowed the model to accommodate the modified input format, which consists of concatenated RGB images and depth maps.\nThe generative diffusion model utilizes a U-Net backbone [21] architecture, primarily composed of 2D convolutional layers. The diffusion model was trained on the learned, low-dimensional, KL-regularized latent space, similar to [20].\nEnabling more accurate reconstructions and efficient high-resolution synthesis, compared to transformer-based diffusion model trained in pixel space.\nFor text conditioning, a frozen CLIP-text encoder [17] is employed, and the encoded text prompts are mapped to various layers of the U-Net using cross-attention. This approach effectively generalizes to intricate natural language text prompts, generating high-quality images and depth maps in a single pass, only having 9,600 additional parameters compared to the reference Stable Diffusion model." }, { "figure_ref": [], "heading": "Preprocessing the data", "publication_ref": [], "table_ref": [], "text": "The model was fine-tuned on a subset of the LAION-400M [25] dataset, which contains image and caption pairs. The depth maps utilized in fine-tuning the LDM3D model were generated by the DPT-Large depth estimation model running inference at its native resolution of 384 × 384. Depth maps were saved in 16-bit integer format and were converted into 3-channel RGB-like arrays to more closely match the input requirements of the stable diffusion model which was pre-trained on RGB images. To achieve this conversion, the 16-bit depth data was unpacked into three separate 8-bit channels. It should be noted that one of these channels is zero for the 16-bit depth data, but this structure is designed to be compatible with a potential 24-bit depth map input. This reparametrization allowed us to encode depth information in an RGB-like image format while preserving complete depth range information.\nThe original RGB images and the generated RGB-like depth maps were then normalized to have values within the [0, 1] range. To create an input suitable for the autoencoder model training, the RGB images and RGB-like depth maps were concatenated along the channel dimension. This process resulted in an input image of size 512x512x6, where the first three channels correspond to the RGB image and the latter three channels represent the RGB-like depth map. The concatenated input allowed the LDM3D model to learn the joint representation of both RGB images and depth maps, enhancing its ability to generate coherent RGBD outputs." }, { "figure_ref": [], "heading": "Fine-tuning Procedure.", "publication_ref": [ "b19", "b19", "b9", "b6", "b19" ], "table_ref": [], "text": "The fine-tuning process comprises two stages, similar to the technique presented in [20]. In the first stage, we train an autoencoder to generate a lower-dimensional, perceptually equivalent data representation. Subsequently, we fine-tune the diffusion model using the frozen autoencoder, which simplifies training and increases efficiency. This method outperforms transformer-based approaches by effectively scaling to higher-dimensional data, resulting in more accurate reconstructions and efficient high-resolution image and depth synthesis without the complexities of balancing reconstruction and generative capabilities.\nAutoencoder fine-tuning. The KL-autoencoder was finetuned on a training set consisting of 8233 samples, an validation set containing 2059 samples. Each sample in these sets included a caption as well as a corresponding image and depth map pair, as previously described in the preprocessing section.\nFor the fine-tuning of our modified autoencoder, we used a KL-autoencoder architecture with a downsampling factor of 8 time the pixel space image resolution. This downsampling factor was found to be optimal in terms of fast training process and high-quality image synthesis [20].\nDuring the fine-tuning process, we used the Adam optimizer with a learning rate of 10 -5 and a batch size of 8. We trained the model for 83 epochs, and we sampled the outputs after each epoch to monitor the progress. The loss function for both the images and depth data consisted of a combination of perceptual loss [32] and patch-based adversarial-type loss [10], which were originally used in the pre-training of the KL-AE [7].\nL Autoencoder = min E,D max ψ L rec (x, D(E(x))) -L adv (D(E(x))) + log D ψ (x) + L reg (x; E, D)(1)\nHere D(E(x)) are the reconstructed images, L rec (x, D(E(x))) is the perceptual reconstruction loss, L adv (D(E(x))) is the adversarial loss, D ψ (x) is a patch based discriminator loss, and L reg (x; E, D) is the KL-regularisation loss.\nDiffusion model fine-tuning Following the autoencoder fine-tuning, we proceeded to the second stage, which involved fine-tuning the diffusion model. This was achieved using the frozen autoencoder's latent representations as input, with a latent input size of 64x64x4.\nFor this stage, we employed the Adam optimizer with a learning rate of 10 -5 and a batch size of 32 . We train the diffusion model for 178 epochs with the loss function:\nL LDM3D := Eε(x), ∼ N (0, 1), t || -θ (z t , t)|| 2 2 (2)\nwhere θ (z t , t) is the predicted noise by the denoising U-Net, and t is uniformly sampled.\nWe initiate the LDM3D fine-tuning using the weights from the Stable Diffusion v1.4 [20] model as a starting point. We monitor the progress throughout fine-tuning by sampling the generated images and depth maps, assessing their quality and ensuring the model's convergence.\nCompute Infrastructure All training runs reported in this work are conducted on an Intel AI supercomputing cluster comprising of Intel Xeon processors and Intel Habana Gaudi AI accelerators. The LDM3D model training run is scaled out to 16 accelerators (Gaudis) on the corpus of 9,600 tupples (text caption, RGB image, depth map). The KL-autoencoder used in our LDM3D model was trained on Nvidia A6000 GPUs." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [ "b12" ], "table_ref": [], "text": "In line with previous studies, we assess text-to-image generation performance using the MS-COCO [13] validation set. To measure the quality of the generated images, we employ Fréchet Inception Distance (FID), Inception Score (IS), and CLIP similarity metrics. The autoencoder's performance is evaluated using the relative FID score, a popular metric for comparing the quality of reconstructed images with their corresponding original input images. The evaluation was carried out on 27,265 samples, 512x512-sized from the LAION-400M dataset." }, { "figure_ref": [ "fig_4", "fig_4", "fig_4" ], "heading": "Immersive Experience Generation", "publication_ref": [ "b0", "b19", "b17", "b18" ], "table_ref": [], "text": "AI models for image generation have become prominent in the space of AI art, they are typically designed for 2D representations of diffused content. In order to project imagery onto a 3D immersive environment, modifications in mapping and resolution needed to be considered to achieve an acceptable result. Another previous limitation of correctly projected outputs occurs when perception is lost due to the monoscopic perspective of a single point of view. Modern viewing devices and techniques require disparity between two view points to achieve the experience of stereoscopic immersion. Recording devices typically capture footage from two cameras at a fixed distance so that a 3D output can be generated based on the disparity and camera parameters. In order to achieve the same from single images, however, an offset in pixel space must be calculated. With the LDM3D model, a depth map is extracted separately from RGB color space and can be used to differentiate a proper \"left\" and \"right\" perspective of the same image space in 3D.\nFirst, the initial image is generated and its corresponding depth map is stored, see Fig. 2a. Using TouchDesigner [1], the RGB color image is projected to the outside of an equirectangular spherical polar object in 3D space see Fig. 2b. The perspective is set at origin 0,0,0 inside of the spherical object as the center of viewing the immersive space. The vertex points of the sphere are defined as an equal distance in all directions from the point of origin. The depth map is then used as instructions to manipulate the distance from origin to the corresponding vertex point based on monotone color values. Values closer to 1.0 move the vertex points closer to the origin, while values of 0.0 are scaled to a further distance from the origin. Values of 0.5 result in no vertex manipulation. From a monoscopic view at 0,0,0, no alteration in image can be perceived since the \"rays\" extend linearly from the origin outward. However, with the dual perspective of stereoscopic viewpoints, the pixels of the mapped RGB image are distorted in a dynamic fashion to give the illusion of depth. This same effect can also be observed while moving the single viewpoint away from origin 0,0,0 as the vertex distances scale equally against their initial calculation. Since the RGB color space and depth map pixels occupy the same regions, objects that have perceived geometric shapes are given approximate depth via their own virtual geometric dimensions in the render engine within TouchDesigner. Fig. 2 explains the entire pipeline. This approach is not limited to the TouchDesigner platform and may also be replicated inside similar rendering engines and software suites that have the ability to utilize RGB and depth color space in their pipelines. .4 [20] and depth maps to DPT-Large [18,19], on 512 × 512 images from the COCO validation dataset. Captions from top to bottom:\"a close up of a sheet of pizza on a table\", \"A picture of some lemons on a table\", \"A little girl with a pink bow in her hair eating broccoli\", \"A man is on a path riding a horse\",\"A muffin in a black muffin wrap next to a fork\", \"a white polar bear drinking water from a water source next to some rocks\"." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "In the following, we show the high quality of the generated images and depth maps of our LDM3D model. We also show the impact on performance of the autoencoder when adding the depth modality." }, { "figure_ref": [ "fig_6", "fig_4" ], "heading": "Qualitative Evaluation", "publication_ref": [], "table_ref": [], "text": "A qualitative analysis of the generated images and depth maps reveals that our LDM3D model can effectively generate visually coherent outputs that correspond well to the provided text prompts. The generated images exhibit fine details and complex structures, while the depth maps accurately represent the spatial information of the scenes, see Fig. 3 . These results highlight the potential of our model for various applications, including 3D scene reconstruction and immersive content creation, see Fig. 2. A video with examples of the immersive 360-views that can be generated using our complete pipeline can be found at https://t.ly/TYA5A." }, { "figure_ref": [], "heading": "Quantitative Image Evaluation", "publication_ref": [ "b1" ], "table_ref": [], "text": "Our LDM3D model demonstrates impressive performance in generating high-quality images and depth maps from text prompts. When evaluated on the MS-COCO validation set, the model achieves competitive scores to the Stable diffusion baseline using FID and CLIP similarity metrics, see Tab. 1. There is a degradation in the inception score (IS), which might indicate that our model generates images that are close to the real images in terms of their feature distributions, as could be derived by the similar FID scores, but they might lack diversity or some aspects of image quality that IS captures. Nevertheless, IS is considered to be a less robust metric than FID because it struggles with capturing intra-class diversity [4], is highly sensitive to model parameters and implementations, whereas FID is better at assessing the similarity between distributions of real and generated images while being less sensitive to minor changes in network weights that don't impact image quality [2]. The high CLIP similarity score indicates that the model maintains a high level of detail and fidelity with respect to the text prompts. In addition, we investigate the relationship between key hyperparameters and the quality of the generated images. We plot the FID and IS scores against the classifier-free diffusion guidance scale factor (Fig. 4), the number of denoising steps (Fig. 5), and the training step (Fig. 7). Additionally, we plotted the CLIP similarity score against the classifier-free diffusion guidance scale factor in see Fig. 6." }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "Fig. 4 indicates that the optimal classifier-free diffusion guidance scale factor that produces the best balance between image quality and diversity is around s=5, higher than reported on Stable diffusion v1.4 (s=3). Fig. 6 indicated that the alignment of the generated images with the input text prompts is nearly unaffected as the scale factor changes for scale factors larger than 5. Fig. 5 indicates that the image quality increases with the number of denoising steps, the most significant improvement occurs when increasing the DDIM steps from 50 to 100. " }, { "figure_ref": [ "fig_5" ], "heading": "Quantitative Depth Evaluation", "publication_ref": [ "b18" ], "table_ref": [], "text": "Our LDM3D model jointly outputs images and their corresponding depth maps. Since there is no ground truth depth reference for these images, we define a reference model against which to compute depth metrics. For this, we select the ZoeDepth metric depth estimation model. LDM3D outputs depth in disparity space, as it was fine-tuned using depth maps produced by DPT-Large. We align these depth maps to reference ones produced by ZoeDepth. This alignment is done in disparity space in a global least-squares fitting manner similar to the approach in [19]. Points to be fitted to are determined via random sampling applied to the intersected validity maps of the estimated and target depth maps, where valid depth is simply defined to be nonnegative. The alignment procedure computes per-sample scale and shift factors that are applied to the LDM3D and DPT-Large depth maps to align the depths to ZoeDepth values. All depth maps are then inverted to bring them into metric depth space. The two depth metrics we compute are absolute relative error (AbsRel) and root mean squared error (RMSE). Metrics are aggregated over a 6k subset of images from the 30k set used for image evaluation. Tab. 2 shows that LDM3D achieves similar depth accuracy as DPT-Large, demonstrating the success of our finetuning approach. A corresponding visualization is shown in Fig. 8." }, { "figure_ref": [], "heading": "Autoencoder Performance", "publication_ref": [], "table_ref": [], "text": "We first evaluate the performance of our fine-tuned KL-AE using the relative FID score, see Tab. 3. Our findings show a minor but measurable decline in the quality of reconstructed images compared to the pre-trained KL-AE. This can be attributed to the increased data compression ratio when incorporating depth information alongside RGB images in the pixel space, but keeping the latent space dimensions unchanged. Note that the adjustments made to the AE are minimal, adding only 9,615 parameters to the pretrained AE. We expect that further modifications to the AE can further improve performance. In the current architecture, this decrease in quality is compensated by fine-tuning the diffusion U-Net. The resulting LDM3D model performs on par with vanilla Stable Diffusion as shown in the previous sections." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In conclusion, this research paper introduces LDM3D, a novel diffusion model that generates RGBD images from text prompts. To demonstrate the potential of LDM3D we also develop DepthFusion, an application that creates immersive and interactive 360-view experiences using the generated RGBD images in TouchDesigner. The results of this research have the potential to revolutionize the way we experience digital content, from entertainment and gaming to architecture and design. The contributions of this paper pave the way for further advancements in the field of multiview generative AI and computer vision. We look forward to seeing how this space will continue to evolve and hope that the presented work will be useful for the community." } ]
This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360°-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences.
LDM3D: Latent Diffusion Model for 3D
[ { "figure_caption": "(a)Step 1: Img-to-img inference pipeline for LDM3D. initiating from a panoramic image and corresponding depth map computed using DPT-Large[18,19]. The RGBD input is processed through the LDM3D image-to-image pipeline, generating a transformed image and depth map guided by the given text prompt.(b) Step 2: LDM3D generated image is projected on a sphere, using vertex manipulation based on diffused depth map, followed by meshing. (c) Step 3: Image generation from different viewpoints, and video assembly.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 .Figure 3 .23Figure 2. Immersive experience generation pipeline.", "figure_data": "", "figure_id": "fig_1", "figure_label": "23", "figure_type": "figure" }, { "figure_caption": "Figure 4 .Figure 5 .45Figure 4. FID / IS vs. Classifier-free diffusion guidance scale factor. Evaluation of text-conditional image synthesis on 2000 samples, 512 x 512-sized from MS-COCO [13] dataset, with 50 DDIM [27] steps.", "figure_data": "", "figure_id": "fig_2", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 .Figure 7 .67Figure6. CLIP similarity score vs. Classifier-free diffusion guidance scale factor. Averaged on 2000 samples, 512 x 512-sized generated from MS-COCO[13] dataset captions, with 50 DDIM[27] steps.", "figure_data": "", "figure_id": "fig_3", "figure_label": "67", "figure_type": "figure" }, { "figure_caption": "Table 2 .2Depth evaluation comparing LDM3D and DPT-Large with respect to ZoeDepth-N that serves as a reference model.", "figure_data": "", "figure_id": "fig_4", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Depth visualization to accompany Tab. 2.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Table 3 .3Comparison of KL-autoencoder fine-tuning approaches. The pre-trained KL-AE was evaluated on 31,471 images, and the fine-tuned KL-AE on 27,265 images, 512x512-sized from the LAION-400M [25] dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Text-to-Image synthesis. Evaluation of text-conditional image synthesis on the 512 x 512-sized MS-COCO[13] dataset with 50 DDIM[27] steps. Our model is on par with the Stable diffusion models with the same number of parameters (1.06B). IS and CLIP similarity scores are averaged over 30k captions from the MS-COCO dataset.", "figure_data": "FID↓IS↑CLIP↑SD v1.428.08 34.17±0.76 26.13 ± 2.81SD v1.527.39 34.02 ± 0.79 26.13 ± 2.79LDM3D (ours) 27.82 28.79 ± 0.49 26.61±2.92", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" } ]
Gabriela Ben; Melech Stan; Diana Wofk; Alex Redden; Will Saxton; Jean Yu Intel; Estelle Aflalo; Shao-Yen Tseng; Fabio Nonato; Matthias Müller; Vasudev Lal; Concat Rgbd
[ { "authors": " Touchdesigner", "journal": "", "ref_id": "b0", "title": "", "year": "2004" }, { "authors": "Shane Barratt; Rishi Sharma", "journal": "", "ref_id": "b1", "title": "A note on the inception score", "year": "2018" }, { "authors": "Farooq Shariq; Ibraheem Bhat; Peter Alhashim; Wonka", "journal": "", "ref_id": "b2", "title": "Adabins: Depth estimation using adaptive bins", "year": "2021-06" }, { "authors": "Ali Borji", "journal": "", "ref_id": "b3", "title": "Pros and cons of gan evaluation measures: New developments", "year": "2021" }, { "authors": "Zeyu Cheng; Yi Zhang; Chengkai Tang", "journal": "IEEE Sensors Journal", "ref_id": "b4", "title": "Swindepth: Using transformers and multi-scale fusion for monocular-based depth estimation", "year": "2021" }, { "authors": "Yiqun Duan; Zheng Zhu; Xianda Guo", "journal": "", "ref_id": "b5", "title": "Diffusiondepth: Diffusion denoising approach for monocular depth estimation", "year": "2023" }, { "authors": "Patrick Esser; Robin Rombach; Björn Ommer", "journal": "", "ref_id": "b6", "title": "Taming transformers for high-resolution image synthesis", "year": "2020" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Fleet", "journal": "", "ref_id": "b7", "title": "Imagen video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey A Gritsenko; William Chan; Mohammad Norouzi; David J ", "journal": "", "ref_id": "b8", "title": "Fleet. Video diffusion models", "year": "2022" }, { "authors": "Phillip Isola; Jun-Yan Zhu; Tinghui Zhou; Alexei A Efros", "journal": "", "ref_id": "b9", "title": "Image-to-image translation with conditional adversarial networks", "year": "2018" }, { "authors": "Yevhen Kuznietsov; Jorg Stuckler; Bastian Leibe", "journal": "", "ref_id": "b10", "title": "Semisupervised deep learning for monocular depth map prediction", "year": "2017-07" }, { "authors": "Iro Laina; Christian Rupprecht; Vasileios Belagiannis; Federico Tombari; Nassir Navab", "journal": "", "ref_id": "b11", "title": "Deeper depth prediction with fully convolutional residual networks", "year": "2016" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; Lubomir Bourdev; Ross Girshick; James Hays; Pietro Perona; Deva Ramanan; C Lawrence Zitnick; Piotr Dollár", "journal": "", "ref_id": "b12", "title": "Microsoft coco: Common objects in context", "year": "2015" }, { "authors": "Armin Masoumian; Saddam Hatem A Rashwan; Julián Abdulwahab; Salman Cristiano; Domenec Asif; Puig", "journal": "Neurocomputing", "ref_id": "b13", "title": "Gcndepth: Self-supervised monocular depth estimation based on graph convolutional network", "year": "2023" }, { "authors": "Yue Ming; Xuyang Meng; Chunxiao Fan; Hui Yu", "journal": "Neurocomputing", "ref_id": "b14", "title": "Deep learning for monocular depth estimation: A review", "year": "2021" }, { "authors": "Alexander Quinn; Nichol ; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "PMLR", "ref_id": "b15", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022-07" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b16", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "René Ranftl; Alexey Bochkovskiy; Vladlen Koltun", "journal": "ICCV", "ref_id": "b17", "title": "Vision transformers for dense prediction", "year": "2021" }, { "authors": "René Ranftl; Katrin Lasinger; David Hafner; Konrad Schindler; Vladlen Koltun", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b18", "title": "Towards robust monocular depth estimation: Mixing datasets for zero-shot cross-dataset transfer", "year": "2007" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b19", "title": "High-resolution image synthesis with latent diffusion models", "year": "2006" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b20", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Anirban Roy; Sinisa Todorovic", "journal": "", "ref_id": "b21", "title": "Monocular depth estimation using neural regression forest", "year": "2016-06" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo-Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b22", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Saurabh Saxena; Abhishek Kar; Mohammad Norouzi; David J Fleet", "journal": "", "ref_id": "b23", "title": "Monocular depth estimation using diffusion models", "year": "2023" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b24", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni; Devi Parikh; Sonal Gupta; Yaniv Taigman", "journal": "", "ref_id": "b25", "title": "Make-a-video: Text-to-video generation without text-video data", "year": "2023" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b26", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Dan Xu; Elisa Ricci; Wanli Ouyang; Xiaogang Wang; Nicu Sebe", "journal": "", "ref_id": "b27", "title": "Multi-scale continuous crfs as sequential deep networks for monocular depth estimation", "year": "2017-07" }, { "authors": "Dan Xu; Wei Wang; Hao Tang; Hong Liu; Nicu Sebe; Elisa Ricci", "journal": "", "ref_id": "b28", "title": "Structured attention guided convolutional neural fields for monocular depth estimation", "year": "2018-06" }, { "authors": "Guanglei Yang; Hao Tang; Mingli Ding; Nicu Sebe; Elisa Ricci", "journal": "", "ref_id": "b29", "title": "Transformer-based attention networks for continuous pixel-wise prediction", "year": "2021-10" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b30", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b31", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" } ]
[ { "formula_coordinates": [ 4, 81.39, 108.85, 204.97, 53.64 ], "formula_id": "formula_0", "formula_text": "L Autoencoder = min E,D max ψ L rec (x, D(E(x))) -L adv (D(E(x))) + log D ψ (x) + L reg (x; E, D)(1)" }, { "formula_coordinates": [ 4, 58.52, 350.45, 227.85, 12.69 ], "formula_id": "formula_1", "formula_text": "L LDM3D := Eε(x), ∼ N (0, 1), t || -θ (z t , t)|| 2 2 (2)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b16", "b4", "b5", "b20", "b30", "b31", "b39", "b40", "b41", "b21", "b29", "b8", "b18", "b26", "b0", "b9", "b29" ], "table_ref": [], "text": "Point cloud registration (PCR) is an important and fundamental problem in 3D computer vision and has a wide range of applications in localization [13], 3D object detection [17] and 3D reconstruction [25]. Given two 3D scans of the same object (or scene), the goal of PCR is to estimate a six-degree-of-freedom (6-DoF) pose transformation that accurately aligns the two input point clouds. Using pointto-point feature correspondences is a popular and robust solution to the PCR problem. However, due to the limitations of existing 3D keypoint detectors & descriptors, the limited overlap between point clouds and data noise, corre- spondences generated by feature matching usually contain outliers, resulting in great challenges to accurate 3D registration.\nThe problem of 3D registration by handling correspondences with outliers has been studied for decades. We classify them into geometric-only and deep-learned methods. For geometric-only methods [5,6,21,31,32,[39][40][41][42], random sample consensus (RANSAC) and its variants perform an iterative sampling strategy for registration. Although RANSAC-based methods are simple and efficient, their performance is highly vulnerable when the outlier rate increases, and it requires a large number of iterations to obtain acceptable results. Also, a series of global registration methods based on branch-and-bound (BnB) are proposed to search the 6D parameter space and obtain the optimal global solution. The main weakness of these methods is the high computational complexity, especially when the correspondence set is of a large magnitude and has an extremely high outlier rate. For deep-learned methods, some [1-4, 9, 10, 14, 16, 18, 19, 27, 36] focus on improving one module in the registration process, such as investigating more discriminate keypoint feature descriptors or more effective correspondence selection techniques, while the others [22,30,44] focus on registration in an end-to-end manner. However, deep-learned based methods require a large amount of data for training and usually lack generalization on different datasets. At present, it is still very challenging to achieve accurate registrations in the presence of heavy outliers and in cross-dataset conditions.\nIn this paper, we propose a geometric-only 3D registration method based on maximal cliques (MAC). The key insight is to loosen the previous maximum clique constraint, and mine more local consensus information in a graph to generate accurate pose hypotheses. We first model the initial correspondence set as a compatibility graph, where each node represents a single correspondence and each edge between two nodes indicates a pair of compatible correspondences. Second, we search for maximal cliques in the graph and then use node-guided clique filtering to match each graph node with the appropriate maximal clique containing it. Compared with the maximum clique, MAC is a looser constraint and is able to mine more local information in a graph. This helps us to achieve plenty of correct hypotheses from a graph. Finally, transformation hypotheses are computed for the selected cliques by the SVD algorithm. The best hypothesis is selected to perform registration using popular hypothesis evaluation metrics in the RANSAC family. To summarize, our main contributions are as follows:\n• We introduce a hypothesis generation method named MAC. Our MAC method is able to mine more local information in a graph, compared with the previous maximum clique constraint. We demonstrate that hypotheses generated by MAC are of high accuracy even in the presence of heavy outliers.\n• Based on MAC, we present a novel PCR method, which achieves state-of-the-art performance on U3M, 3DMatch, 3DLoMatch and KITTI datasets. Notably, our geometric-only MAC method outperforms several state-of-the-art deep learning methods [3,9,19,27]. MAC can also be inserted as a module into multiple deep-learned frameworks [1,10,18,30,44] to boost their performance. MAC combined with GeoTransformer achieves the state-of-the-art registration recall of 95.7% / 78.9% on 3DMatch / 3DLoMatch." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Geometric-only PCR Methods", "publication_ref": [ "b5", "b19", "b45", "b4", "b30", "b31", "b39", "b40", "b40", "b31", "b4", "b30", "b39", "b41", "b5", "b20" ], "table_ref": [], "text": "Various geometric-only methods [6,8,20,37,46] have been proposed recently. Typically, RANSAC and its variants [5,13,31,32,[39][40][41] remain the dominant approaches to the problem of estimating a 6-DoF pose from correspondences. RANSAC iteratively samples correspondences from the initial set, generating and evaluating geometric estimations for each subset until a satisfactory solution is obtained. Efficient and robust evaluation metrics are extremely important for using RANSAC to achieve accurate registration. To address the current problems of timeconsuming and noise-sensitive evaluation metrics, [41] analyzes the contribution of inliers and outliers during the computation and proposed several metrics that can effectively improve the registration performance of RANSAC. A large number of variants have also been proposed to achieve further improvement. For example, Rusu et al. [32] presented the simple consensus-based initial alignment (SAC-IA) method, which samples correspondences spread out on the point cloud and leverages the Huber penalty for evaluation. Graph cut RANSAC (GC-RANSAC) [5] uses the graph-cut algorithm before model re-fitting in the local optimization step. Compatibility-guided sample consensus (CG-SAC) [31] additionally considers the normal information of key points during the sampling process. Yang et al. [40] proposed the sample consensus by sampling compatibility triangles (SAC-COT) method, which generates estimations by ranking and sampling ternary loops from the compatibility graph. Although many previous efforts have been made, these methods suffer from low time efficiency and limited accuracy in cases with high outlier rates.\nA series of globally optimal methods based on BnB have been proposed recently. Yang et al. [42] proposed globally optimal ICP (GO-ICP), which rationalizes the planning of ICP update tasks at different stages, and its biggest advantage is that it minimizes the local optimum. Bustos and Chin [6] presented guaranteed outlier removal (GORE), which calculates the tight lower bound and tight upper bound for each correspondence and reduces the size of correspondence set by rejecting true outliers. Motivated by GORE, Li [21] proposed a polynomial time outlier removal method, which seeks the tight lower and upper bound by calculating the costs of correspondence matrix (CM) and augmented correspondence matrix (ACM). However, BnB techniques are sensitive to the cardinality of the input and are time-consuming for large-scale inputs." }, { "figure_ref": [], "heading": "Deep-learned PCR Methods", "publication_ref": [ "b3", "b0", "b9", "b9", "b3", "b0", "b8", "b26", "b13", "b29", "b29" ], "table_ref": [], "text": "In addition to geometric-only methods, recent works also adopt deep learning techniques to perform PCR. Some methods aim to detect more repeatable keypoints [4,18] and extract more descriptive features [1,10]. FCGF [10] computes the features in a single pass through a fully convolutional neural network without keypoint detection. D3Feat [4] uses a fully convolutional network to obtain local information of point clouds and a joint learning framework to achieve 3D local feature detection and description. Predator [18] applies an attention mechanism to extract salient points in overlapping regions of the point clouds, thus achieving robust registration in the presence of low overlap rates. Spinnet [1] extracts local features which are rotationally invariant and sufficiently informative to enable accurate registration. Some methods [3, 9, 14, 27] focus on efficiently distinguishing correspondences as inliers and outliers. Deep global registration (DGR) [9] and 3DReg-Net [27] classify a given correspondence by training endto-end neural networks and using operators such as sparse convolution and point-by-point MLP. PointDSC [3] explicitly explores spatial consistency for removing outlier correspondences and 3D point cloud registration. Fu et al. [14] proposed a registration framework that utilizes deep graph matching (RGM) that can find robust and accurate pointto-point correspondences. More recently, several methods [30,44] follow the detection-free methods and estimate the transformation in an end-to-end way. CoFiNet [44] extracts correspondences from coarse to fine without keypoint detection. GeoTransformer [30] learns geometric features for robust superpoint matching and is robust in low-overlap cases and invariant to rigid transformation.\nMAC(c1)=(c1,c3,c4,c5) MAC(c3)=(c1,c3,c4,c5) MAC(c2)=(c2,c6,c7) … MAC(c9)=(c1,c9,c10) MAC(c10)=(c1,c9,c10) Node-guided maximal cliques c7 c7 c6 c6 c2 c2 c5 c5 c4 c4 c3 c3 c1 c1 R3, t3 c10 c10 c9 c9 c1 c1\nWhile deep learning techniques have demonstrated a great potential for PCR, these methods require a large amount of training data and their generalization is not always promising. By contrast, MAC does not require any training data and achieves more advanced performance than several deep-learned methods. Moreover, MAC can be served as a drop-on module in deep learning frameworks to boost their performance." }, { "figure_ref": [], "heading": "MAC", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_1" ], "heading": "Problem Formulation", "publication_ref": [], "table_ref": [], "text": "For two point clouds P s and P t to be aligned, we first extract local features for them using geometric or learned descriptors. Let p s and p t denote the points in the P s and P t , respectively. An initial correspondence set C initial = {c} is formed by matching feature descriptors, where c = (p s , p t ). MAC estimates the 6-DoF pose transformation between P s and P t from C initial .\nOur method is technically very simple, and its pipeline is shown in Fig. 2." }, { "figure_ref": [], "heading": "Graph Construction", "publication_ref": [], "table_ref": [], "text": "The graph space can more accurately depict the affinity relationship between correspondences than the Euclidean space. Therefore, we model the initial correspondences as a compatibility graph, where correspondences are represented by nodes and edges link nodes that are geometrically compatible. Here, we consider two approaches to construct a compatibility graph.\n• First Order Graph. The first order graph (FOG) is constructed based on the rigid distance constraint between the correspondence pair (c i , c j ), which can be quantitatively measured as:\nS dist (c i , c j ) = p s i -p s j -p t i -p t j .(1)\nThe compatibility score between c i and c j is given as:\nS cmp (c i , c j ) = exp(- S dist (c i , c j ) 2 2d 2 cmp ),(2)\nwhere d cmp is a distance parameter. Notably, if S cmp (c i , c j ) is greater than a threshold t cmp , c i and c j form an edge e ij and S cmp (c i , c j ) is the weight of e ij , otherwise S cmp (c i , c j ) will be set to 0. Since the compatibility graph is undirected, the weight matrix W F OG is symmetric.\n• Second Order Graph. The previous study [8] proposes a second order compatibility measure, which relates to the number of commonly compatible correspondences in the global set. The second order graph (SOG) evolves from FOG. The weight matrix W SOG can be calculated as:\nW SOG = W F OG ⊙ (W F OG × W F OG ),(3)\nwhere ⊙ represents the element-wise product between two matrices.\nBoth graph construction methods can adapt to our frameworks. Compared with FOG, 1) SOG has stricter edge construction conditions and a higher degree of compatibility with adjacent nodes; 2) SOG is sparser, which facilitates a more rapid search of cliques. In Sec. 4.5, we experimentally compare FOG and SOG in our MAC framework." }, { "figure_ref": [], "heading": "Search Maximal Cliques", "publication_ref": [], "table_ref": [], "text": "Given an undirected graph\nG = (V, E), clique C = (V ′ , E ′ ), V ′ ⊆ V, E ′ ⊆ E is a subset of G,\nin which any two nodes are connected by edges. A maximal clique is a clique that cannot be extended by adding any nodes. In particular, the maximal clique with the most nodes is the maximum clique of a graph. Searching for Maximal cliques. To generate hypotheses, RANSAC-based methods repeatedly take random samples from the correspondence set. Nevertheless, they fail to fully mine the affinity relationships between correspondences. Theoretically, inliers would form cliques in the graph, because inliers are usually geometrically compatible with each other. Previous works [23,24,28,37] focus on searching for maximum cliques in the graph, however, the maximum clique is a very tight constraint that only focuses on the global consensus information in a graph. Instead, we loosen the constraint and leverage maximal cliques to mine more local graph information.\nBy using the igraph maximal cliques function in the igraph 1 C++ library, which makes use of a modified Bron-Kerbosch algorithm [12], the search of maximal cliques can be very efficient. The process's worst time complexity is O(d(n -d)3 (d/3) ), where d is the degeneracy of the graph. Note that d is typically small in our problem because the graph is usually sparse when dealing with point cloud correspondences. Node-guided Clique Selection. After executing the maximal clique searching procedure, we obtain the maximal clique set MAC initial . In practice, MAC initial usually contains tens of thousands of maximal cliques, which will make it very time-consuming if we consider all maximal cliques. We introduce a node-guided clique selection method in this section to reduce |MAC initial |. First, we calculate the weight for each clique in MAC initial . Given a clique C i = (V i , E i ), the weight w Ci is calculated as:\nw Ci = ej ∈Ei w ej ,(4)\nwhere w ej represents the weight of edge e j in W SOG . A node may be included by multiple maximal cliques and we 1 https://igraph.org only retain the one with the greatest weight for that node. Then, duplicated cliques are removed from the rest, obtaining MAC selected . The motivation behind this is to use information about the local geometric structure around graph nodes to find the best consistent set of corresponding nodes.\nIt is clear that the number of maximal cliques |MAC selected | will not exceed |V|. We could send these maximal cliques directly to the following stages for 3D registration. However, when |V| is quite large, the number of retained maximal cliques can still be very large. Here, we propose several techniques to further filter the maximal cliques.\n• Normal consistency. In the maximal cliques, we find that the normal consistency is satisfied between each correspondence. Given two correspondences c i = (p s i , p t i ), c j = (p s j , p t j ) and the normal vectors n s i , n s j , n t i , n t j at the four points, the angular difference α s ij = ∠(n s i , n s j ), α t ij = ∠(n t i , n t j ) between the normal vectors can be calculated then. The following inequality ought to hold if c i and c j are normal consistent:\nsinα s ij -sinα t ij < t α ,(5)\nwhere t α is a threshold for determining whether the angular differences are similar.\n• Clique ranking. We organize MAC selected in a descending order using the clique's weight w Ci . The top-K ones are supposed to be more likely to produce correct hypotheses. This makes it flexible to control the number of hypotheses.\nThese techniques' experimental analysis is presented in Sec. 4.5." }, { "figure_ref": [], "heading": "Hypothesis Generation and Evaluation", "publication_ref": [ "b8", "b26", "b29", "b40" ], "table_ref": [], "text": "Each maximal clique filtered from the previous step represents a consistent set of correspondences. By applying the SVD algorithm to each consistency set, we can obtain a set of 6-DoF pose hypotheses.\n• Instance-equal SVD. Transformation estimation of correspondences is often implemented with SVD. Instance-equal means that the weights of all correspondences are equal.\n• Weighted SVD. Assigning weights to correspondences is commonly adopted by recent PCR methods [8, 9,27,30]. Correspondence weights can be derived by solving the eigenvectors of a compatibility matrix constructed for a compatibility graph. Here, we take the primary eigenvalues of W SOG as correspondence weights.\nThe final goal of MAC is to estimate the optimal 6-DoF rigid transformation (composed of a rotation pose R * ∈ SO(3) and a translation pose t * ∈ R 3 ) that maximizes the objective function as follow:\n(R * , t * ) = arg max R,t N i=1 s(c i ),(6)\nwhere c i ∈ C initial , N = |C initial |, and s(c i ) represents the score of c i . We consider several RANSAC hypothesis evaluation metrics here [41], including mean average error (MAE), mean square error (MSE) and inlier count. Their behaviors will be experimentally compared in Sec. 4.5. The best hypothesis is taken to perform 3D registration then." }, { "figure_ref": [], "heading": "Experiments 4.1. Experimental Setup", "publication_ref": [ "b44", "b14", "b39", "b8", "b32", "b42", "b31", "b9" ], "table_ref": [], "text": "Datasets. We consider four datasets, i.e, the objectscale dataset U3M [26], the scene-scale indoor datasets 3DMatch [45] & 3DLoMatch [18], and the scene-scale outdoor dataset KITTI [15]. U3M has 496 point cloud pairs. 3DLoMatch is the subset of 3DMatch, where the overlap rate of the point cloud pairs ranges from 10% to 30%, which is very challenging. For KITTI, we follow [3, 8] and obtain 555 pairs of point clouds for testing. Evaluation Criteria. We follow [40] that employs the root mean square error (RMSE) metric to evaluate the 3D point cloud registration performance on the U3M object-scale dataset. In addition, we employ the rotation error (RE) and translation error (TE) to evaluate the registration results on the scene-scale dataset. By referring to the settings in [9], the registration is considered successful when the RE ≤ 15°, TE ≤ 30 cm on 3DMatch & 3DLoMatch datasets, and RE ≤ 5°, TE ≤ 60 cm on KITTI dataset. We define a dataset's registration accuracy as the ratio of success cases to the number of point cloud pairs to be registered. Implementation Details. Our method is implemented in C++ based on the point cloud library (PCL) [33] and igraph library. For U3M, we use the Harris3D (H3D) [34] keypoint detector and the signatures of histograms of orientation (SHOT) [35] descriptor for initial correspondence generation as in [43]. For 3DMatch and 3DLoMatch datasets, we use the fast point features histograms (FPFH) [32] descriptor and fully convolutional geometric features (FCGF) [10] descriptor to generate the initial correspondence set. " }, { "figure_ref": [], "heading": "Results on U3M Dataset", "publication_ref": [ "b39", "b37", "b31", "b45", "b41", "b10" ], "table_ref": [], "text": "We perform an extensive comparison in Fig. 3. Here, the following methods are tested, including SAC-COT [40], OSAC [38], SAC-IA [32], RANSAC [13], SC 2 -PCR [8], FGR [46], GO-ICP [42], and PPF [11], where the former four are RANSAC-based methods. The RMSE threshold is varied from 0.5 pr to 5 pr with a step of 0.5 pr.\nThe results indicate that MAC performs best and significantly outperforms all tested RANSAC-fashion estimators, such as SAC-COT, OSAC, SAC-IA, and RANSAC. The registration performance of MAC based on the MAE evaluation metric is the best on U3M. " }, { "figure_ref": [], "heading": "Results on 3DMatch & 3DLoMatch Datasets", "publication_ref": [ "b19", "b45", "b30", "b26", "b8", "b18", "b9", "b0", "b29" ], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "PCR methods comparison. Both geometric-only and deep-learned methods are considered for comparison, including SM [20], FGR [46], RANSAC [13], TEASER++ [37], CG-SAC [31], SC 2 -PCR [8], 3DRegNet [27], DGR [9], DHVR [19] and PointDSC [3]. Results are shown in Tables 1 and2.\nThe following conclusions can be made: 1) regardless of which descriptor is used, MAC outperforms all compared methods on both 3DMatch and 3DLoMatch datasets, indicating its strong ability to register indoor scene point clouds; 2) even compared with deep-learned methods, MAC still achieves better performance without any data training; 3) in addition to the registration recall (RR) metric, MAC achieves the best RE and TE metrics. This indicates that registrations by MAC are very accurate and MAC is able to align low overlapping data. Boosting deep-learned methods with MAC. Several kinds of state-of-the-art deep-learned methods are integrated with MAC for evaluation. The considered methods are FCGF [10], SpinNet [1], Predator [18], CoFiNet [44] and Geo-Transformer [30]. Each method is tested under a different number of samples, which refer to the number of sampled points or correspondences. Results are reported in Table 3.\nRemarkably, MAC dramatically improves the registration recall under all tested methods on both 3DMatch and 3DLoMatch datasets. Notably, the performance of Spin-Net, Predator and CoFiNet after boosting by MAC exceeds " }, { "figure_ref": [], "heading": "Results on KITTI Dataset", "publication_ref": [ "b8", "b30" ], "table_ref": [ "tab_3" ], "text": "In Table 4, the results of DGR [9], PointDSC [3], TEASER++ [37], RANSAC [13], CG-SAC [31], SC 2 -PCR [8] and MAC are reported for comparison.\nAs shown by the table, in terms of the registration recall performance, MAC presents the best and is tied for the best results with FPFH and FCGF descriptor settings, respectively. MAC also has a lower TE than the state-ofthe-art geometric-only method SC 2 -PCR. Note that outdoor point clouds are significantly sparse and non-uniformly distributed. The registration experiments on the object, indoor scene, and outdoor scene datasets consistently verify that MAC holds good generalization ability in different application contexts." }, { "figure_ref": [], "heading": "Analysis Experiments", "publication_ref": [ "b6" ], "table_ref": [ "tab_4", "tab_5", "tab_6", "tab_4", "tab_4", "tab_4", "tab_4", "tab_4", "tab_4", "tab_5", "tab_2" ], "text": "In this section, we perform ablation studies and analysis experiments on both 3DMatch and 3DLoMatch datasets. We progressively experiment with the techniques proposed in Sec. 3, and the results are shown in Table 5. The quality of generated hypotheses is analyzed in Table 6. The performance upper bound is studied in Table 7. Table 8 presents the time efficiency analysis of MAC. Performing feature matching selection. Before 3D registration, a popular way is to perform outlier rejection to reduce the correspondence set. Here we employ geometric consistency (GC) [7], which is independent of the feature space and associates the largest consistent cluster relating to the compatibility among correspondences.\nBy comparing Row 1 and 2 of Table 5, GC has a negative impact on MAC performance, potentially due to that some inliers are also removed in this process. This demonstrates that MAC can still perform well even if the initial correspondence set is directly utilized as input without any filtering. Graph construction choices. We test the performance of MAC by using different graph construction approaches. As shown in Row 1 and 3 of Table 5, the registration recall obtained by using SOG is 1.6% higher than using FOG when combined with FPFH, and 0.06% higher when combined with FCGF on 3DMatch. Also, the registration recall obtained by using SOG is 0.12% higher than using FOG when combined with FPFH, and 0.56% higher when combined with FCGF on 3DLoMatch. Therefore, SOG is more suitable for MAC. Detailed analyzing descriptions can be found in the supplementary. Maximum or maximal clique. To justify the advantages of maximal cliques, we change the search strategy of MAC to the maximum cliques and test the registration performance.\nAs shown in Row 1 and 9 in Table 5, applying maximal cliques surpasses maximum by 9.8% when combined with FPFH, and 5.55% higher when combined with FCGF on 3DMatch. Besides, the registration recall obtained by using maximal cliques is 8.03% higher than using the maximum cliques when combined with FPFH and 10.45% higher when combined with FCGF on 3DLoMatch. There are several reasons for this: 1) maximal cliques include the maximum cliques and additionally consider local graph constraints, so the search for maximal cliques can make use of both local and global information in the compatibility graph; 2) the maximum clique is a very tight constraint which requires maximizing the number of mutually compatible correspondences, but it does not guarantee the opti-mal result. Node-guided clique selection. We compare the performance with and without node-guided (NG) clique selection for maximal cliques search.\nComparing Row 1 and 4 in Table 5, using NG achieves a recall improvement of 0.37% when combined with FPFH, and 0.5% improvement when combined with FCGF on 3DMatch. Also, using NG achieves a recall improvement of 0.23% with FPFH and 0.73% improvement with FCGF on 3DLoMatch. It is worth noting that while NG improves recall, the mean RE and mean TE are also decreasing. For example, NG reduces the mean RE by 0.1°and the mean TE by 0.11 cm with FPFH on 3DLoMatch. NG effectively reduces the number of calculations in the subsequent steps and promises accurate hypotheses. Different approaches for clique filtering. We test the effectiveness of the two filtering methods, normal consistency and clique ranking.\n1) Normal consistency: comparing Row 1 and 8 in Table 5, NC slightly degrades MAC's performance. 2) Clique ranking: Row 10 to 14 demonstrate that the registration recall tends to increase as K increases, suggesting that larger K yields a subset of cliques that generate more correct hypotheses. Remarkably, setting K to 100 can already achieve outstanding performance. Employing instance-equal or weighted SVD. The comparisons of instance-equal and weighted SVD are shown in Rows 1 and 5 of Table 5 Weighted SVD is slightly inferior to instance-equal SVD. This suggests that samples in MACs are already very consistent, indicating no additional weighting strategies are required.\nVarying hypothesis evaluation metrics. Here we compare three evaluation metrics, including MAE, MSE and inlier count, for MAC hypothesis evaluation.\nAs shown in Row 1, 6 and 7, MAC with MAE achieves the best performance. In Table 5, MAE achieves a recall improvement of 0.24% when combined with FPFH, and 0.31% improvement when combined with FCGF on 3DMatch compared with commonly used inlier count metric. Also, MAE has a 1.74% improvement when combined with FPFH, and 0.05% when combined with FCGF on 3DLoMatch compared with inlier count. MAE is also very effective in reducing RE and TE. For instance, MAE reduces the mean RE by 0.35°and the mean TE by 0.49 cm with FPFH on 3DLoMatch. Comparison with RANSAC hypotheses. We evaluate the quality of the generated hypotheses by comparing the hypotheses from RANSAC and MAC with the ground truth transformation. The results are shown in Table 6.\nCompared to RANSAC, which randomly selects correspondences and generates hypotheses from the correspondence set without geometric constraints, MAC effectively generates more convincing hypotheses from maximal cliques in the compatibility graph, which fully exploits the consensus information in the graph. The performance upper bound of MAC. Given an ideal hypothesis evaluation metric, allowing a point cloud pair can be aligned as long as correct hypotheses can be generated. This can test the performance upper bound of MAC. We vary the judging threshold for the number of generated correct hypotheses and report the results in Impressively, MAC-1 achieves registration recalls of 98.46% / 91.24% on 3DMatch / 3DLoMatch. This indicates that even on low overlapping datasets, MAC is able to produce correct hypotheses for most point cloud pairs. In addition, we can deduce that MAC's performance can be further improved with better hypothesis evaluation metrics. Time consumption of MAC. We employ Predator [18] to generate correspondences with different magnitudes to test the time performance of MAC. The time consumption is reported in Table 8.\nThe following observations can be made. 1) In general, MAC can complete 3D registration in only tens of milliseconds when the number of correspondences is smaller than 1000. Even with an input with 2500 correspondences, the time consumption is about 0.29 seconds. Note that MAC is implemented on the CPU only. 2) As the number of correspondences increases from 250 to 2500, there is an increase in time cost for graph construction due to W SOG computation taking more time. 3) When the number of correspondences reaches 5000, there is a large rise in the time cost of MAC's registration. The significant increase in the input size makes the search for maximal cliques more timeconsuming. However, MAC is not sensitive to the cardinality of the input correspondence set, as verified in Table 3. Hence, using sparse inputs for MAC can produce outstanding performance while making registration efficient." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Conclusion", "publication_ref": [], "table_ref": [ "tab_6", "tab_0" ], "text": "In this paper, we presented MAC to solve PCR by using the maximal clique constraint to generate precise pose hypotheses from correspondences. Our method achieves state-of-the-art performance on all tested datasets and can adapt to deep-learned methods to boost their performance. Limitation. As shown in Table 7 and Table 1, MAC produces accurate hypotheses but may fail to find them. In the future, we plan to develop a more convincing hypothesis evaluation technique utilizing semantic information. As shown in Fig. 4: 1) SOG considers the commonly compatible matches in the global set of the matched pairs rather than only the geometric consistency, making it more consistent and more robust in the case of high outlier rates; 2) SOG is sparser than FOG, and therefore beneficial in making the search of cliques more rapid.\nThe weights of the edge e ij = (c i , c j ) in the FOG are transformed as follows to obtain the corresponding secondorder weights:\nw SOG (e ij ) = w F OG (e ij )• e ik ∈E e jk ∈E w F OG (e ik ) • w F OG (e jk ) .(7)\nIf no remaining nodes form edges with both c i and c j , w SOG (e ij ) will be 0, which demonstrates that e ij will be removed from SOG then. In Fig. 4(b), the four edges e 12 , e 56 , e 78 and e 89 are removed, and the whole graph is divided into subgraphs that contain several cliques naturally." }, { "figure_ref": [], "heading": "B. Additional Experiments", "publication_ref": [ "b28", "b44", "b31", "b9", "b0" ], "table_ref": [ "tab_10", "tab_8" ], "text": "The information of all tested datasets is presented in Table 9. Results on ETH. Additionally, we also test our method on the outdoor dataset ETH [29], which contains more complex geometries compared with 3DMatch [45]. FPFH [32], FCGF [10], and Spinnet [1] are employed to generate correspondences, from which registration will then be performed by RANSAC-50K and MAC. The number of sampled points or correspondences is set to 5000. Registration is considered successful when the RE ≤ 15°and TE ≤ 30 cm. The quality of generated correspondence and registration results are reported in Table 10 The results suggest that when a defect in a descriptor leads to a very low inlier rate for generating the correspondence set, MAC is still effective in finding the accurate consistent subset from it, thus greatly boosting the registration recall. The registration recall obtained by using MAC is 24.2% higher than RANSAC when combined with FPFH, and 18.51% higher when combined with FCGF. " }, { "figure_ref": [ "fig_4", "fig_5", "fig_7", "fig_8" ], "heading": "C. Visualizations", "publication_ref": [], "table_ref": [], "text": "We show more registration results in Figs. 5678. " }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Acknowledgments. This work is supported in part by the National Natural Science Foundation of China (NFSC) (No.U19B2037 and 62002295), Shaanxi Provincial Key R&D Program (No.2021KWZ-03), and the Fundamental Research Funds for the Central Universities (No.D5000220352)." } ]
As a fundamental problem in computer vision, 3D point cloud registration (PCR) aims to seek the optimal pose to align a point cloud pair. In this paper, we present a 3D registration method with maximal cliques (MAC). The key insight is to loosen the previous maximum clique constraint, and mine more local consensus information in a graph for accurate pose hypotheses generation: 1) A compatibility graph is constructed to render the affinity relationship between initial correspondences. 2) We search for maximal cliques in the graph, each of which represents a consensus set. We perform node-guided clique selection then, where each node corresponds to the maximal clique with the greatest graph weight. 3) Transformation hypotheses are computed for the selected cliques by the SVD algorithm and the best hypothesis is used to perform registration. Extensive experiments on U3M, 3DMatch, 3DLoMatch and KITTI demonstrate that MAC effectively increases registration accuracy, outperforms various state-of-the-art methods and boosts the performance of deep-learned methods. MAC combined with deep-learned methods achieves stateof-the-art registration recall of 95.7% / 78.9% on 3DMatch / 3DLoMatch.
3D Registration with Maximal Cliques
[ { "figure_caption": "Figure 1 .1Figure 1. Comparison of maximal and maximum cliques on a low overlapping point cloud pair. Maximal cliques (MAC) effectively choose the optimal 6-DoF transformation hypothesis with low rotation error (RE) and translation error (TE) for two point clouds with a low inlier ratio, while the maximum clique fails in this case.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 .2Figure 2. Pipeline of MAC. 1. Construct a graph for the initial correspondence set. 2. Select a set of maximal cliques from the graph as the consistent sets. 3. Generate and evaluate the hypotheses according to the consistent sets. 4. Select the best hypothesis to perform 3D registration.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. An example that illustrates the relationship between FOG and SOG. (a) FOG and its weight matrix. (b) SOG and its weight matrix.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Registration process-visualizations of MAC on 3DMatch.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Qualitative comparison on 3DLoMatch. Red and green represent failed and successful registration, respectively.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 .7Figure 7. Qualitative comparison on KITTI.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 .8Figure 8. Qualitative comparison on ETH.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Registration results on 3DMatch dataset.", "figure_data": "The", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Registration results on 3DLoMatch dataset.", "figure_data": "FPFHFCGFRR(%) RE(°) TE(cm) RR(%) RE(°) TE(cm)i) TraditionalRANSAC-1M [13]0.67 10.2715.069.777.0114.87RANSAC-4M [13]0.45 10.3920.0310.446.9115.14TEASER++ [37]35.154.3810.9646.764.1212.89SC 2 -PCR [8]38.574.0310.3158.733.8010.44ii) Deep learnedDGR [9]19.885.0713.5343.804.1710.82PointDSC [3]20.384.0410.2556.203.8710.48MAC40.883.669.4559.853.509.75# Samples3DMatch RR(%) 5000 2500 1000 500 2503DLoMatch RR(%) 5000 2500 1000 500250FCGF [10]85.1 84.7 83.3 81.6 71.440.141.738.235.426.8SpinNet [1]88.6 86.6 85.5 83.5 70.259.854.948.339.826.8Predator [18]89.0 89.9 90.6 88.5 86.659.861.262.460.858.1CoFiNet [44]89.3 88.9 88.4 87.4 87.067.566.264.263.161.0GeoTransformer [30]92.0 91.8 91.8 91.4 91.275.074.874.274.173.5FCGF+MAC91.3 92.2 91.6 90.4 85.6 6.2↑ 7.5↑ 8.3↑ 8.8↑ 14.2↑ 17.1↑ 14.3↑ 14.4↑ 7.0↑ 57.2 56.0 52.6 42.432.1 5.3↑SpinNet+MAC95.3 95.1 93.3 91.4 81.2 6.7↑ 8.5↑ 7.8↑ 7.9↑ 11.0↑ 13.0↑ 15.0↑ 10.9↑ 15.0↑ 5.3↑ 72.8 69.9 59.2 54.8 32.1Predator+MAC94.6 94.4 94.0 93.5 92.3 5.6↑ 4.5↑ 3.4↑ 5.0↑ 5.7↑70.9 11.1↑ 9.2↑ 70.469.8 7.4↑67.2 6.4↑64.1 6.0↑CoFiNet+MAC94.1 94.4 94.5 93.8 92.7 4.8↑ 5.5↑ 6.1↑ 6.4↑ 5.7↑71.6 4.1↑71.5 5.3↑70.6 6.4↑69.2 6.1↑68.1 7.1↑GeoTransformer+MAC95.7 95.7 95.2 95.3 94.6 3.7↑ 3.9↑ 3.4↑ 3.9↑ 3.4↑78.9 3.9↑78.7 3.9↑78.2 4.0↑77.7 3.6↑76.6 3.1↑", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Performance boosting for deep-learned methods when combined with MAC.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Registration results on KITTI dataset.", "figure_data": "FPFHFCGFRR(%) RE(°) TE(cm) RR(%) RE(°) TE(cm)i) TraditionalFGR [46]5.230.8643.8489.540.4625.72TEASER++ [37]91.171.0317.9894.960.3813.69RANSAC [13]74.411.5530.2080.360.7326.79CG-SAC [31]74.230.7314.0283.240.5622.96SC 2 -PCR [8]99.280.398.6897.840.3320.58ii) Deep learnedDGR [9]77.121.6433.1096.900.3421.70PointDSC [3]98.920.388.3597.840.3320.32MAC99.460.408.4697.840.3419.34that of GeoTransformer. MAC working with GeoTrans-former achieves state-of-the-art registration recall of 95.7%/ 78.9% on 3DMatch / 3DLoMatch. The results suggestthat: 1) MAC can greatly boost existing deep-learned meth-ods; 2) MAC is not sensitive to the number of samples.", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Analysis experiments on 3DMatch / 3DLoMatch. FOG: First order compatibility graph. SOG: Second order compatibility graph.", "figure_data": "FOG SOG GC MC NG NC CR SVD W-SVD MAE MSE # inlierRR(%)RE(°)TE(cm)1)✓✓✓✓83.86 / 39.14 2.17 / 4.01 6.51 / 9.942)✓✓✓✓✓77.02 / 26.61 2.10 / 3.83 6.19 / 9.493)✓✓✓✓82.26 / 39.02 2.12 / 3.98 6.43 / 9.894)✓✓✓83.49 / 38.91 2.22 / 4.11 6.65 / 10.055)✓✓✓✓83.67 / 38.85 2.15 / 4.03 6.53 / 9.82FPFH6)✓✓✓✓84.10 / 40.88 1.96 / 3.66 6.18 / 9.457)✓✓✓✓82.93 / 39.98 1.95 / 3.66 6.12 / 9.488)✓✓✓✓✓82.44 / 38.46 2.16 / 3.97 6.41 / 9.859)✓✓✓✓✓74.06 / 31.11 2.08 / 3.89 6.17 / 9.8210) Top100✓✓✓✓✓82.01 / 37.79 2.13 / 4.02 6.42 / 9.8211) Top200✓✓✓✓✓83.18 / 38.85 2.16 / 4.08 6.55 / 9.9112) Top500✓✓✓✓✓83.06 / 38.85 2.14 / 4.03 6.47 / 9.8113) Top1000✓✓✓✓✓83.30 / 38.91 2.16 / 4.05 6.53 / 9.8414) Top2000✓✓✓✓✓83.36 / 38.79 2.14 / 4.02 6.52 / 9.781)✓✓✓✓93.41 / 59.80 2.04 / 3.78 6.33 / 10.162)✓✓✓✓✓91.68 / 49.97 1.99 / 3.64 6.23 / 9.903)✓✓✓✓93.35 / 59.24 2.04 / 3.67 6.28 / 9.994)✓✓✓92.91 / 59.07 2.06 / 3.88 6.33 / 10.205)✓✓✓✓93.16 / 59.46 2.04 / 3.76 6.26 / 10.00FCGF6)✓✓✓✓93.72 / 59.85 1.89 / 3.50 6.03 / 9.757)✓✓✓✓93.59 / 59.01 1.86 / 3.49 6.00 / 9.618)✓✓✓✓✓93.28 / 59.63 2.02 / 3.73 6.24 / 9.989)✓✓✓✓✓87.86 / 49.35 2.00 / 3.61 6.09 / 9.6010) Top100✓✓✓✓✓92.42 / 57.44 2.00 / 3.75 6.21 / 10.0011) Top200✓✓✓✓✓93.22 / 57.83 2.01 / 3.75 6.29 / 10.0612) Top500✓✓✓✓✓93.22 / 58.90 2.02 / 3.78 6.33 / 10.0213) Top1000✓✓✓✓✓93.35 / 59.40 2.05 / 3.78 6.32 / 10.1814) Top2000✓✓✓✓✓93.35 / 59.52 2.04 / 3.78 6.33 / 10.19", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ". Comparison of the number of correct hypotheses generated by MAC and RANSAC on 3DMatch and 3DLoMatch.", "figure_data": "3DMatch3DLoMatch# hypothesesRANSACMACRANSACMACFCGF FPFH FCGFFPFH FCGF FPFH FCGF FPFH10010.450.7661.9450.671.250.0530.4712.2220020.761.50119.20 89.272.520.0955.5717.5950051.743.68269.06 162.416.210.21109.32 23.321000103.657.39456.18 217.32 12.430.41156.11 26.022000208.24 14.90 669.32 254.13 24.800.81202.12 29.313DMatch3DLoMatchRR(%)RR(%)MAC-198.4691.24MAC-597.1083.32MAC-1096.4377.93MAC-2094.7070.47MAC-5091.1356.37MAC-origin93.7259.85", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Registration recall on 3DMatch with FCGF setting based on judging MAC's hypotheses. MAC-n: a point cloud pair is considered alignable if at least n hypotheses are correct.", "figure_data": "", "figure_id": "tab_6", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "", "figure_data": "# correspondencesGraph ConstructionSearch Maximal CliquesNode-guided Clique SelectionPose EstimationTotal2501.03 (14.55%)5.24 (74.01%)0.58 (8.19%)0.23 (3.25%)7.085004.07 (17.54%)15.67 (67.51%)3.12 (13.44%)0.35 (1.51%)23.21100016.90 (29.85%)36.60 (64.65%)1.88 (3.32%)1.23 (2.18%)56.612500153.92 (53.29%) 104.03 (36.02%)4.97 (1.72%)25.93 (8.97%)288.855000887.03 (27.16%) 1579.61 (48.37%) 65.40 (2.00%) 733.38 (22.47%) 3265.42Table 8. Average consumed time (ms) per point cloud pair onthe 3DMatch dataset. Predator is used for generating correspon-dences.", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "and Table 11, respectively. Inlier ratio (%) of generated correspondence on ETH dataset.", "figure_data": "Gazebo Summer Winter Autumn Summer WoodAvg.FPFH [32]0.420.240.210.260.29FCGF [10]2.341.251.351.681.62Spinnet [1]16.6713.7312.2014.6714.40Gazebo Summer Winter Autumn Summer WoodAvg.FPFH [32]16.8510.0310.4310.4011.92FCGF [10]54.3528.0352.1751.2042.78Spinnet [1]98.3783.05100.0099.2092.57FPFH+MAC46.74 29.89↑27.68 17.65↑33.04 22.61↑43.20 32.80↑36.12 24.20↑FCGF+MAC75.54 21.19↑42.91 14.88↑71.30 19.13↑73.60 22.40↑61.29 18.51↑Spinnet+MAC98.91 0.54↑87.54 4.49↑100.00 -100.00 0.80↑94.67 2.10↑", "figure_id": "tab_8", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Registration recall (%) boosting for various descriptors combined with MAC on ETH dataset.", "figure_data": "", "figure_id": "tab_9", "figure_label": "11", "figure_type": "table" }, { "figure_caption": "Information of all tested datasets. MAC working with Spinnet achieves a registration recall of 94.67% on ETH. Time and memory analysis. Efficiency and memory consumption results of several well-performed methods are shown in Tables12 and 13, respectively. Regarding efficiency experiments, all methods have been tested for ten rounds, and the mean and standard deviation results are reported. All methods were executed in the CPU. The results indicate that MAC is quite lightweight and efficient when the input correspondence number is less than 2.5k.", "figure_data": "DatasetData typeNuisancesApplication scenario# Matching pairsU3M [26]ObjectLimited overlap, self-occlusionRegistration4963DMatch [45]Indoor sceneOcclusion, real noiseRegistration16233DLoMatch [18]Indoor sceneLimited overlap, occlusion, real noiseRegistration1781KITTI [15]Outdoor sceneClutter, occlusion, real noiseDetection, registration, segmentation555ETH [29]Outdoor scene Limited overlap, clutter, occlusion, real noiseFeature description, registration713", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Comparisons on average time consumption (ms).", "figure_data": "# Corr.250500100025005000PointDSC3531.46 3538.26 3582.57 3634.22 3736.10TEASER++ 1631.92 1634.77 2029.22 2266.84 2484.83SC 2 -PCR448.01453.18508.40621.27690.22MAC15.5917.4323.4952.79150.86", "figure_id": "tab_11", "figure_label": "12", "figure_type": "table" }, { "figure_caption": "Comparisons on average memory consumption (MB).", "figure_data": "", "figure_id": "tab_12", "figure_label": "13", "figure_type": "table" } ]
Xiyu Zhang; Jiaqi Yang; Shikun Zhang; Yanning Zhang
[ { "authors": "Sheng Ao; Qingyong Hu; Bo Yang; Andrew Markham; Yulan Guo", "journal": "", "ref_id": "b0", "title": "Spinnet: Learning a general surface descriptor for 3d point cloud registration", "year": "2021" }, { "authors": "Yasuhiro Aoki; Hunter Goforth; Rangaprasad Arun Srivatsan; Simon Lucey", "journal": "", "ref_id": "b1", "title": "Pointnetlk: Robust & efficient point cloud registration using pointnet", "year": "2019" }, { "authors": "Xuyang Bai; Zixin Luo; Lei Zhou; Hongkai Chen; Lei Li; Zeyu Hu; Hongbo Fu; Chiew-Lan Tai", "journal": "IEEE", "ref_id": "b2", "title": "Pointdsc: Robust point cloud registration using deep spatial consistency", "year": "2021" }, { "authors": "Xuyang Bai; Zixin Luo; Lei Zhou; Hongbo Fu; Long Quan; Chiew-Lan Tai", "journal": "", "ref_id": "b3", "title": "D3feat: Joint learning of dense detection and description of 3d local features", "year": "2020" }, { "authors": "Daniel Barath; Jiří Matas", "journal": "", "ref_id": "b4", "title": "Graph-cut ransac", "year": "2018" }, { "authors": "Alvaro Parra; Bustos ; Tat-Jun Chin", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b5", "title": "Guaranteed outlier removal for point cloud registration with correspondences", "year": "2017" }, { "authors": "Hui Chen; Bir Bhanu", "journal": "Pattern Recognition Letters", "ref_id": "b6", "title": "3d free-form object recognition in range images using local surface patches", "year": "2007" }, { "authors": "Zhi Chen; Kun Sun; Fan Yang; Wenbing Tao", "journal": "", "ref_id": "b7", "title": "Sc2-pcr: A second order spatial compatibility for efficient and robust point cloud registration", "year": "2022" }, { "authors": "Christopher Choy; Wei Dong; Vladlen Koltun", "journal": "IEEE", "ref_id": "b8", "title": "Deep global registration", "year": "2020" }, { "authors": "Christopher Choy; Jaesik Park; Vladlen Koltun", "journal": "", "ref_id": "b9", "title": "Fully convolutional geometric features", "year": "2019" }, { "authors": "Bertram Drost; Markus Ulrich; Nassir Navab; Slobodan Ilic", "journal": "IEEE", "ref_id": "b10", "title": "Model globally, match locally: Efficient and robust 3d object recognition", "year": "2010" }, { "authors": "David Eppstein; Maarten Löffler; Darren Strash", "journal": "Springer", "ref_id": "b11", "title": "Listing all maximal cliques in sparse graphs in near-optimal time", "year": "2010" }, { "authors": "A Martin; Robert C Fischler; Bolles", "journal": "Communications of the ACM", "ref_id": "b12", "title": "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography", "year": "1981" }, { "authors": "Kexue Fu; Shaolei Liu; Xiaoyuan Luo; Manning Wang", "journal": "", "ref_id": "b13", "title": "Robust point cloud registration framework based on deep graph matching", "year": "2021" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "IEEE", "ref_id": "b14", "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "year": "2012" }, { "authors": "Zan Gojcic; Caifa Zhou; Jan D Wegner; Andreas Wieser", "journal": "", "ref_id": "b15", "title": "The perfect match: 3d point cloud matching with smoothed densities", "year": "2019" }, { "authors": "Yulan Guo; Mohammed Bennamoun; Ferdous Sohel; Min Lu; Jianwei Wan", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b16", "title": "3d object recognition in cluttered scenes with local surface features: A survey", "year": "2014" }, { "authors": "Shengyu Huang; Zan Gojcic; Mikhail Usvyatsov; Andreas Wieser; Konrad Schindler", "journal": "", "ref_id": "b17", "title": "Predator: Registration of 3d point clouds with low overlap", "year": "2021" }, { "authors": "Junha Lee; Seungwook Kim; Minsu Cho; Jaesik Park", "journal": "", "ref_id": "b18", "title": "Deep hough voting for robust global registration", "year": "2021" }, { "authors": "Marius Leordeanu; Martial Hebert", "journal": "", "ref_id": "b19", "title": "A spectral technique for correspondence problems using pairwise constraints", "year": "2005" }, { "authors": "Jiayuan Li", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b20", "title": "A practical o (n2) outlier removal method for point cloud registration", "year": "2021" }, { "authors": "Yang Li; Tatsuya Harada", "journal": "", "ref_id": "b21", "title": "Lepard: Learning partial point cloud matching in rigid and deformable scenes", "year": "2022" }, { "authors": "Muyuan Lin; Varun Murali; Sertac Karaman", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b22", "title": "A planted clique perspective on hypothesis pruning", "year": "2022" }, { "authors": "Yu-Kai Lin; Wen-Chieh Lin; Chieh-Chih Wang", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b23", "title": "Kclosest points and maximum clique pruning for efficient and effective 3-d laser scan matching", "year": "2022" }, { "authors": "Mohammed Ajmal S Mian; Robyn A Bennamoun; Owens", "journal": "International Journal of Shape Modeling", "ref_id": "b24", "title": "Automatic correspondence for 3d modeling: an extensive review", "year": "2005" }, { "authors": "Mohammed Ajmal S Mian; Robyn A Bennamoun; Owens", "journal": "International Journal of Computer Vision", "ref_id": "b25", "title": "A novel representation and feature matching algorithm for automatic pairwise registration of range images", "year": "2006" }, { "authors": "Srikumar Dias Pais; Ramalingam; Madhav Venu; Jacinto C Govindu; Rama Nascimento; Pedro Chellappa; Miraldo", "journal": "IEEE", "ref_id": "b26", "title": "3dregnet: A deep neural network for 3d point registration", "year": "2020" }, { "authors": "Alvaro Parra; Tat-Jun Chin; Frank Neumann; Tobias Friedrich; Maximilian Katzmann", "journal": "", "ref_id": "b27", "title": "A practical maximum clique algorithm for matching with pairwise constraints", "year": "2019" }, { "authors": "Ming Franc ¸ois Pomerleau; Francis Liu; Roland Colas; Siegwart", "journal": "The International Journal of Robotics Research", "ref_id": "b28", "title": "Challenging data sets for point cloud registration algorithms", "year": "2012" }, { "authors": "Zheng Qin; Hao Yu; Changjian Wang; Yulan Guo; Yuxing Peng; Kai Xu", "journal": "", "ref_id": "b29", "title": "Geometric transformer for fast and robust point cloud registration", "year": "2022" }, { "authors": "Siwen Quan; Jiaqi Yang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b30", "title": "Compatibility-guided sampling consensus for 3-d point cloud registration", "year": "2020" }, { "authors": "Bogdan Radu; Nico Rusu; Michael Blodow; Beetz", "journal": "IEEE", "ref_id": "b31", "title": "Fast point feature histograms (fpfh) for 3d registration", "year": "2009" }, { "authors": "Bogdan Radu; Steve Rusu; Cousins", "journal": "IEEE", "ref_id": "b32", "title": "3d is here: Point cloud library (pcl)", "year": "2011" }, { "authors": "Ivan Sipiran; Benjamin Bustos", "journal": "The Visual Computer", "ref_id": "b33", "title": "Harris 3d: a robust extension of the harris operator for interest point detection on 3d meshes", "year": "2011" }, { "authors": "Federico Tombari; Samuele Salti; Luigi Di; Stefano ", "journal": "Springer", "ref_id": "b34", "title": "Unique signatures of histograms for local surface description", "year": "2010" }, { "authors": "Haiping Wang; Yuan Liu; Zhen Dong; Wenping Wang", "journal": "", "ref_id": "b35", "title": "You only hypothesize once: Point cloud registration with rotation-equivariant descriptors", "year": "2022" }, { "authors": "Heng Yang; Jingnan Shi; Luca Carlone", "journal": "IEEE Transactions on Robotics", "ref_id": "b36", "title": "Teaser: Fast and certifiable point cloud registration", "year": "2020" }, { "authors": "Jiaqi Yang; Zhiguo Cao; Qian Zhang", "journal": "Information Sciences", "ref_id": "b37", "title": "A fast and robust local descriptor for 3d point cloud registration", "year": "2016" }, { "authors": "Jiaqi Yang; Jiahao Chen; Siwen Quan; Wei Wang; Yanning Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b38", "title": "Correspondence selection with loose-tight geometric voting for 3d point cloud registration", "year": "2022" }, { "authors": "Jiaqi Yang; Zhiqiang Huang; Siwen Quan; Zhaoshuai Qi; Yanning Zhang", "journal": "IEEE Transactions on Geoscience and Remote Sensing", "ref_id": "b39", "title": "Sac-cot: Sample consensus by sampling compatibility triangles in graphs for 3-d point cloud registration", "year": "2021" }, { "authors": "Jiaqi Yang; Zhiqiang Huang; Siwen Quan; Qian Zhang; Yanning Zhang; Zhiguo Cao", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b40", "title": "Toward efficient and robust metrics for ransac hypotheses and 3d rigid registration", "year": "2021" }, { "authors": "Jiaolong Yang; Hongdong Li; Dylan Campbell; Yunde Jia", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b41", "title": "Go-icp: A globally optimal solution to 3d icp pointset registration", "year": "2015" }, { "authors": "Jiaqi Yang; Yang Xiao; Zhiguo Cao; Weidong Yang", "journal": "Pattern Recognition Letters", "ref_id": "b42", "title": "Ranking 3d feature correspondences via consistency voting", "year": "2019" }, { "authors": "Hao Yu; Fu Li; Mahdi Saleh; Benjamin Busam; Slobodan Ilic", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b43", "title": "Cofinet: Reliable coarse-to-fine correspondences for robust pointcloud registration", "year": "2021" }, { "authors": "Andy Zeng; Shuran Song; Matthias Nießner; Matthew Fisher; Jianxiong Xiao; Thomas Funkhouser", "journal": "", "ref_id": "b44", "title": "3dmatch: Learning local geometric descriptors from rgb-d reconstructions", "year": "2017" }, { "authors": "Qian-Yi Zhou; Jaesik Park; Vladlen Koltun", "journal": "Springer", "ref_id": "b45", "title": "Fast global registration", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 298.32, 108.69, 135.87, 59.35 ], "formula_id": "formula_0", "formula_text": "MAC(c1)=(c1,c3,c4,c5) MAC(c3)=(c1,c3,c4,c5) MAC(c2)=(c2,c6,c7) … MAC(c9)=(c1,c9,c10) MAC(c10)=(c1,c9,c10) Node-guided maximal cliques c7 c7 c6 c6 c2 c2 c5 c5 c4 c4 c3 c3 c1 c1 R3, t3 c10 c10 c9 c9 c1 c1" }, { "formula_coordinates": [ 3, 343.68, 467.9, 201.43, 12.69 ], "formula_id": "formula_1", "formula_text": "S dist (c i , c j ) = p s i -p s j -p t i -p t j .(1)" }, { "formula_coordinates": [ 3, 359.42, 513.97, 185.7, 27.16 ], "formula_id": "formula_2", "formula_text": "S cmp (c i , c j ) = exp(- S dist (c i , c j ) 2 2d 2 cmp ),(2)" }, { "formula_coordinates": [ 3, 345.6, 704.17, 199.51, 9.68 ], "formula_id": "formula_3", "formula_text": "W SOG = W F OG ⊙ (W F OG × W F OG ),(3)" }, { "formula_coordinates": [ 4, 50.11, 207.17, 236.25, 20.94 ], "formula_id": "formula_4", "formula_text": "G = (V, E), clique C = (V ′ , E ′ ), V ′ ⊆ V, E ′ ⊆ E is a subset of G," }, { "formula_coordinates": [ 4, 132.61, 639.38, 153.76, 20.08 ], "formula_id": "formula_5", "formula_text": "w Ci = ej ∈Ei w ej ,(4)" }, { "formula_coordinates": [ 4, 392.58, 333.13, 152.54, 12.69 ], "formula_id": "formula_6", "formula_text": "sinα s ij -sinα t ij < t α ,(5)" }, { "formula_coordinates": [ 5, 106.65, 132.6, 179.72, 30.32 ], "formula_id": "formula_7", "formula_text": "(R * , t * ) = arg max R,t N i=1 s(c i ),(6)" }, { "formula_coordinates": [ 11, 50.11, 512.01, 244.97, 38.55 ], "formula_id": "formula_8", "formula_text": "w SOG (e ij ) = w F OG (e ij )• e ik ∈E e jk ∈E w F OG (e ik ) • w F OG (e jk ) .(7)" } ]
2023-10-30
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b1", "b14", "b15", "b17", "b24", "b66", "b69", "b71", "b78", "b92", "b16", "b73", "b74", "b71", "b1", "b11", "b60", "b59", "b45", "b48" ], "table_ref": [], "text": "The field of image generation has seen tremendous progress with the advent of diffusion models [2,15,16,18,25,67,70,72,79,93] and the availability of large-scale image-text paired datasets [17,74,75]. However, existing diffusion models still face challenges in generating visually pleasing text on images, and there is currently no specialized large-scale dataset for this purpose. The ability of AI models to generate accurate and coherent text on images is crucial, given the widespread use of text images in various forms (e.g., posters, book covers, memes, etc.) and the difficulty in creating high-quality text images, which typically require professional skills and numerous times of designers.\nTraditional solutions to creating text images involve using image processing tools like Photoshop to add text onto images directly. However, these often result in unnatural artifacts due to the background's complex texture or lighting variations. Recent efforts have used diffusion models to overcome the limitations of traditional methods and enhance text rendering quality. For instance, Imagen [72], eDiff-I [2], and DeepFloyd [12] observe diffusion models generate text better with T5 series text encoders [61] than the CLIP text encoder [60]. Liu et al. employ character-aware text encoders to improve text rendering [46]. Despite some success, these models only focus on text encoders, lacking control over the generation process. A concurrent work, GlyphDraw [49], improves the controllability of models by conditioning on the location and structures of Chinese characters. However, GlyphDraw does not support multiple text bounding-box generation, which is not applicable to many text images such as posters and book covers.\nIn this paper, we propose TextDiffuser, a flexible and controllable framework based on diffusion models. The framework consists of two stages. In the first stage, we use a Layout Transformer to locate the coordinates of each keyword in text prompts and obtain character-level segmentation masks.\nIn the second stage, we fine-tune the latent diffusion model by leveraging the generated segmentation masks as conditions for the diffusion process and text prompts. We introduce a character-aware loss in the latent space to further improve the quality of generated text regions. Figure 1 illustrates the application of TextDiffuser in generating accurate and coherent text images using text prompts alone or text template images. Additionally, TextDiffuser is capable of performing text inpainting 2to reconstruct incomplete images with text. To train our model, we use OCR tools and design filtering strategies to obtain 10 million high-quality image-text pairs with OCR annotations (dubbed as MARIO-10M), each with recognition, detection, and character-level segmentation annotations.\nExtensive experiments and user studies demonstrate the superiority of the proposed TextDiffuser over existing methods on the constructed benchmark MARIO-Eval. The code, model and dataset will be publicly available to promote future research." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b17", "b24", "b66", "b71", "b78", "b62", "b69", "b1", "b7", "b12", "b51", "b47", "b25", "b79", "b63", "b99", "b42", "b57", "b71", "b1", "b11", "b60", "b45", "b48", "b80", "b71", "b45", "b48", "b2", "b5", "b4", "b54", "b44", "b64", "b97", "b102", "b104", "b56", "b82", "b95", "b70", "b47", "b66", "b10", "b98", "b51", "b0", "b91", "b86", "b6", "b88", "b72", "b100", "b52", "b75", "b27", "b30", "b18", "b89", "b90", "b76", "b40", "b94", "b13", "b105", "b50", "b41", "b83", "b89", "b90", "b106", "b67", "b8", "b84", "b49", "b103", "b87", "b77", "b93", "b58", "b37", "b55", "b21", "b19", "b39", "b32", "b29", "b20", "b35", "b53" ], "table_ref": [], "text": "Text Rendering. Image generation has made significant progress with the advent of diffusion models [18,25,67,72,79,63,70,2,8,13,52,48,26,80], achieving state-of-the-art results compared with previous GAN-based approaches [64,100,43,58]. Despite rapid development, current methods still struggle with rendering accurate and coherent text. To mitigate this, Imagen [72], eDiff-I [2], and DeepFolyd [12] utilize a large-scale language model (large T5 [61]) to enhance the text-spelling knowledge. In [46], the authors noticed that existing text encoders are blind to token length and trained a character-aware variant to alleviate this problem. A concurrent work, GlyphDraw [49], focuses on generating high-quality images with Chinese texts with the guidance of text location and glyph images. Unlike this work, we utilize Transformer [81] to obtain the layouts of keywords, enabling the generation of texts in multiple lines. Besides, we use character-level segmentation masks as prior, which can be easily controlled (e.g., by providing a template image) to meet user needs.\nSeveral papers have put forward benchmarks containing a few cases regarding text rendering for evaluation. For example, Imagen [72] introduces DrawBench containing 200 prompts, in which 21 prompts are related to visual text rendering (e.g., A storefront with 'Hello World' written on it). According to [46], the authors proposed DrawText comprising creative 175 prompts (e.g., letter 'c' made from cactus, high-quality photo). GlyphDraw [49] designs 218 prompts in Chinese and English (e.g., Logo for a chain of grocery stores with the name 'Grocery'). Considering that existing benchmarks only contain a limited number of cases, we attempt to collect more prompts and combine them with existing prompts to establish a larger benchmark MARIO-Eval to facilitate comprehensive comparisons for future work.\nImage Inpainting. Image inpainting is the task of reconstructing missing areas in images naturally and coherently. Early research focused on leveraging low-level image structure and texture to address this task [3,6,5]. Later, deep learning architectures such as auto-encoder [55,45], GAN [65,98], VAE [103,105], and auto-regressive Transformers [57,83,96] were applied to tackle this problem. Recently, diffusion models have been used to generate high-quality and diverse results for unconditional image inpainting [71,48,67,11,99], text-conditional image inpainting [52,1] and image-conditional image inpainting [92]. Our work falls under the category of text-conditional image inpainting using diffusion models. In contrast to prior works that focused on completing images with natural backgrounds or objects, our method focuses on completing images with text-related rendering, also named text inpainting, by additional conditioning on a character-level segmentation mask.\nOptical Character Recognition. Optical Character Recognition (OCR) is an important task that has been studied in academia for a long period [87,7]. It has undergone a remarkable development in the last decade, contributing to many applications like autonomous driving [89,73], car license plate recognition [101,53], GPT models [76,28], etc. Various datasets [31,19,90,91] and downstream tasks are included within this field, such as text image recognition [77,41,95,14], detection [106,51,42,84], segmentation [90,91,107,68], super-resolution [9,85,50,104], as well as some generation tasks, including text image editing [88,78,94,59,38], document layout generation [56,22,20,40,33], font generation [30,21,36,54], etc. Among them, the font generation task is most relevant to our task. Font generation aims to create high-quality, aesthetically pleasing fonts based on given character images. In contrast, our task is more challenging as it requires the generated text to be legible, visually appealing, and coherent with the background in various scenarios." }, { "figure_ref": [ "fig_1" ], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "As illustrated in Figure 2, TextDiffuser consists of two stages: Layout Generation and Image Generation. We will detail the two stages and introduce the inference process next." }, { "figure_ref": [], "heading": "Stage1: Layout Generation", "publication_ref": [ "b19", "b66", "b59", "b80" ], "table_ref": [], "text": "In this stage, the objective is to utilize bounding boxes to determine the layout of keywords (enclosed with quotes specified by user prompts). Inspired by Layout Transformer [20], we utilize the Transformer architecture to obtain the layout of keywords. Formally, we denote the tokenized prompt as P = (p 0 , p 1 , ..., p L-1 ), where L means the maximum length of tokens. Following LDM [67], we use CLIP [60] and two linear layers to encode the sequence as CLIP(P) ∈ R L×d , where d is the dimension of latent space. To distinguish the keywords against others, we design a keyword embedding Key(P) ∈ R L×d with two entries (i.e., keywords and non-keywords). Furthermore, we encode the width of keywords with an embedding layer Width(P) ∈ R L×d . Together with the learnable positional embedding Pos(P) ∈ R L×d introduced in [81], we construct the whole embedding as follows:\nEmbedding(P) = CLIP(P) + Pos(P) + Key(P) + Width(P).\n(\n)1\nThe embedding is further processed with Transformer-based l-layer encoder Φ E and decoder Φ D to get the bounding boxes B ∈ R K×4 of K key words autoregressively:\nB = Φ D (Φ E (Embedding(P))) = (b 0 , b 1 , ..., b K-1 ).(2)\nSpecifically, we use positional embedding as the query for the Transformer decoder Φ D , ensuring that the n-th query corresponds to the n-th keyword in the prompt. The model is optimized with l1 loss, also denoted as |B GT -B| where B GT is the ground truth. Further, we can utilize some Python packages like Pillow to render the texts and meanwhile obtain the character-level segmentation mask C with |A| channels, where |A| denote the size of alphabet A. To this end, we obtain the layouts of keywords and the image generation process is introduced next. " }, { "figure_ref": [], "heading": "Stage2: Image Generation", "publication_ref": [ "b34", "b24", "b68" ], "table_ref": [], "text": "In this stage, we aim to generate the image guided by the segmentation masks C produced in the first stage. We use VAE [35] to encode the original image with shape H × W into 4-D latent space features F ∈ R 4×H ′ ×W ′ . Then we sample a time step T ∼ Uniform(0, T max ) and sample a Gaussian noise ϵ ∈ R 4×H ′ ×W ′ to corrupt the original feature, yielding F = √ ᾱT F + √ 1 -ᾱT ϵ where ᾱT is the coefficient of the diffusion process introduced in [25]. Also, we downsample the characterlevel segmentation mask C with three convolution layers, yielding 8-D Ĉ ∈ R 8×H ′ ×W ′ . We also introduce two additional features, called 1-D feature mask\nM ∈ R 1×H ′ ×W ′ and 4-D masked feature FM ∈ R 4×H ′ ×W ′ .\nIn the process of whole-image generation, M is set to cover all regions of the feature and FM is the feature of a fully masked image. In the process of part-image generation (also called text inpainting), the feature mask M represents the region where the user wants to generate, while the masked feature FM indicates the region that the user wants to preserve. To simultaneously train two branches, we use a masking strategy where a sample is fully masked with a probability of σ and partially masked with a probability of 1 -σ. We concatenate F, Ĉ, M, FM in the feature channel as a 17-D input and use denoising loss between the sampled noise ϵ and the predicted noise ϵ θ :\nl denoising = ||ϵ -ϵ θ ( F, Ĉ, M, FM , P, T )|| 2 2 .(3)\nFurthermore, we propose a character-aware loss to help the model focus more on text regions. In detail, we pre-train a U-Net [69] that can map latent features to character-level segmentation masks.\nDuring training, we fix its parameters and only use it to provide guidance by using a cross-entropy loss l char with weight λ char (See more details in Appendix A). Overall, the model is optimized with\nl = l denoising + λ char * l char .(4)\nFinally, the output features are fed into the VAE decoder to obtain the images. " }, { "figure_ref": [], "heading": "Inference Stage", "publication_ref": [], "table_ref": [], "text": "TextDiffuser provides a high degree of controllability and flexibility during inference in the following ways: (1) Generate images from user prompts. Notably, the user can modify the generated layout or edit the text to meet their personalized requirements;\n(2) The user can directly start from the second stage by providing a template image (e.g., a scene image, handwritten image, or printed image), and a segmentation model is pre-trained to obtain the character-level segmentation masks (Appendix B);\n(3) Users can modify the text regions of a given image using text inpainting. Moreover, this operation can be performed multiple times. These experimental results will be presented in the next section." }, { "figure_ref": [], "heading": "MARIO Dataset and Benchmark", "publication_ref": [], "table_ref": [], "text": "As there is no large-scale dataset designed explicitly for text rendering, to mitigate this issue, we collect 10 million image-text pairs with OCR annotations to construct the MARIO-10M Dataset.\nWe further collect the MARIO-Eval Benchmark from the subset of the MARIO-10M test set and other existing sources to serve as a comprehensive tool for evaluating text rendering quality." }, { "figure_ref": [ "fig_2" ], "heading": "MARIO-10M Dataset", "publication_ref": [ "b41", "b3", "b68", "b74" ], "table_ref": [], "text": "The MARIO-10M is a collection of about 10 million high-quality and diverse image-text pairs from various data sources such as natural images, posters, and book covers. Figure 3 illustrates some examples from the dataset. We design automatic schemes and strict filtering rules to construct annotations and clean noisy data (more details in Appendix D and Appendix E). The dataset contains comprehensive OCR annotations for each image, including text detection, recognition, and character-level segmentation annotations. Specifically, we use DB [42] for detection, PARSeq [4] for recognition, and manually train a U-Net [69] for segmentation. We analyze the performance of OCR tools in Appendix F. The total size of MARIO-10M is 10,061,720, from which we randomly chose 10,000,000 samples as the training set and 61,720 as the testing set. MARIO-10M is collected from three data sources:\nMARIO-LAION derives from the large-scale datasets LAION-400M [75]. After filtering, we obtained 9,194,613 high-quality text images with corresponding captions. This dataset comprises a broad range of text images, including advertisements, notes, posters, covers, memes, logos, etc.\nMARIO-TMDB derives from The Movie Database (TMDB), which is a community-built database for movies and TV shows with high-quality posters. We filter 343,423 English posters using the TMDB API out of 759,859 collected samples. Since each image has no off-the-shelf captions, we use prompt templates to construct the captions according to movie titles.\nMARIO-OpenLibrary derives from Open Library, which is an open, editable library catalog that creates a web page for each published book. We first collect 6,352,989 original-size Open Library covers in bulk. Then, we obtained 523,684 higher-quality images after filtering. Like MARIO-TMDB, we manually construct captions using titles due to the lack of off-the-shelf captions." }, { "figure_ref": [], "heading": "MARIO-Eval Benchmark", "publication_ref": [ "b71", "b45", "b48", "b23", "b28", "b59", "b22" ], "table_ref": [], "text": "The MARIO-Eval benchmark serves as a comprehensive tool for evaluating text rendering quality collected from the subset of the MARIO-10M test set and other sources. It comprises 5,414 prompts in total, including 21 prompts from DrawBenchText [72], 175 prompts from DrawTextCreative [46], 218 prompts from ChineseDrawText [49] and 5,000 image-text pairs from a subset of the MARIO-10M test set. The 5,000 image-text pairs are divided into three sets of 4,000, 500, and 500 pairs, and are named LAIONEval4000, TMDBEval500, and OpenLibraryEval500 based on their respective data sources. We offer examples in Appendix G to provide a clearer understanding of MARIO-Eval.\nEvaluation Criteria: We evaluate text rendering quality with MARIO-Eval from four aspects: (1) Fréchet Inception Distance (FID) [24] compares the distribution of generated images with the distribution of real images. (2) CLIPScore calculates the cosine similarity between the image and text representations from CLIP [29,60,23]. (3) OCR Evaluation utilizes existing OCR tools to detect and recognize text regions in the generated images. Accuracy, Precision, Recall, and F-measure are metrics to evaluate whether keywords appear in the generated images. (4) Human Evaluation is conducted by inviting human evaluators to rate the text rendering quality of generated images using questionnaires. More explanations are shown in Appendix H." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b59", "b59", "b81", "b46", "b9", "b38", "b24", "b26", "b66" ], "table_ref": [], "text": "For the first stage, we utilize the pre-trained CLIP [60] to obtain the embedding of given prompts. The number of Transformer layers l is set to 2, and the dimension of latent space d is set to 512. The maximum length of tokens L is set to 77 following CLIP [60]. We leverage a commonly used font \"Arial.ttf\" and set the font size to 24 to obtain the width embedding and also use this font for rendering. The alphabet A comprises 95 characters, including 26 uppercase letters, 26 lowercase letters, 10 digits, 32 punctuation marks, and a space character. After tokenization, only the first subtoken is marked as the keyword when several subtokens exist for a word.\nFor the second stage, we implement the diffusion process using Hugging Face Diffusers [82] and load the checkpoint \"runwayml/stable-diffusion-v1-5\". Notably, we only need to modify the input dimension of the input convolution layer (from 4 to 17), allowing our model to have a similar scale of parameters and computational time as the original model. In detail, the height H and W of input and output images are 512. For the diffusion process, the input is with spatial dimension H ′ = 64 and W ′ = 64. We set the batch size to 768 and trained the model for two epochs, taking four days using 8 Tesla V100 GPUs with 32GB memory. We use the AdamW optimizer [47] and set the learning rate to 1e-5. Additionally, we utilize gradient checkpoint [10] and xformers [39] for computational efficiency. During training, we follow [25] to set the maximum time step T max to 1,000, and the caption is dropped with a probability of 10% for classifier-free guidance [27]. When training the part-image generation branch, the detected text box is masked with a likelihood of 50%. We use 50 sampling steps during inference and classifier-free guidance with a scale of 7.5 following [67]." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [ "b71", "b71" ], "table_ref": [ "tab_0", "tab_1", "tab_2" ], "text": "Number of Transformer layers and the effectiveness of width embedding. We conduct ablation studies on the number of Transformer layers and whether to use width embedding in the Layout Transformer. The results are shown in Table 1. All ablated models are trained on the training set of MARIO-10M and evaluated on its test set. Results show that adding width embedding improves performance, boosting IoU by 2.1%, 2.9%, and 0.3% when the number of Transformer layers l is set to 1, 2, and 4, respectively. The optimal IoU is achieved using two Transformer layers and the width embedding is included. See more visualization results in Appendix I.\nCharacter-level segmentation masks provide explicit guidance for generating characters. The character-level segmentation masks provide explicit guidance on the position and content of characters during the generation process of TextDiffuser. To validate the effectiveness of using character-level segmentation masks, we train ablated models without using the masks and show results in Appendix The weight of character-aware loss. The experimental results are demonstrated in Table 2, where we conduct experiments with λ char ranging from [0, 0.001, 0.01, 0.1, 1]. We utilize DrawBenchText [72] for evaluation and use Microsoft Read API to detect and recognize the texts in generated images. We use Accuracy (Acc) as the metric to justify whether the detected words exactly match the keywords. We observe that the optimal performance is achieved when λ char is set to 0.01, where the score is increased by 9.8% compared with the baseline (λ char = 0).\nThe training ratio of whole/part-image generation branches. We explore the training ratio σ ranging from [0, 0.25, 0.5, 0.75, 1] and show results in Table 3. When σ is set to 1, it indicates that only the whole-image branch is trained and vice versa. We evaluate the model using DrawBenchText [72] for the whole-image generation branch. For the part-image generation branch, we randomly select 1,000 samples from the test set of MARIO-10M and randomly mask some of the detected text boxes. We utilize Microsoft Read API to detect and recognize the reconstructed text boxes in generated images while using the F-measure of text detection results and spotting results as metrics (denoted as Det-F and Spot-F, respectively). The results show that when the training ratio is set to 50%, the model performs better on average (0.716)." }, { "figure_ref": [ "fig_3", "fig_4", "fig_5", "fig_6" ], "heading": "Experimental Results", "publication_ref": [ "b66", "b101", "b11", "b11", "b101", "b71", "b1", "b48", "b62", "b45", "b48", "b87" ], "table_ref": [], "text": "Quantitative Results. For the whole-image generation task, we compare our method with Stable Diffusion (SD) [67], ControlNet [102], and DeepFloyd [12] in quantitative experiments with the publicly released codes and models detailed in Appendix K. DeepFloyd [12] uses two super-resolution modules to generate higher resolution 1024×1024 images compared with 512×512 images generated by other methods. We use the Canny map of printed text images generated with our first stage model as conditions for ControlNet [102]. Please note that we are not able to compare with Imagen [72], eDiff-i [2], and GlyphDraw [49] margin (e.g., 76.10% and 60.62% better than Stable Diffusion and DeepFloyd regarding F-measure), highlighting the significance of explicit guidance. As for the part-image generation task, we cannot evaluate our method since no methods are specifically designed for this task to our knowledge.\nQualitative Results. For the whole-image generation task, we further compare with closed-source DALL•E [63], Stable Diffusion XL (SD-XL), and Midjourney by showing qualitative examples generated with their official API services detailed in Appendix K. Figure 4 shows some images generated from prompts or printed text images by different methods. Notably, our method generates more readable texts, which are also coherent with generated backgrounds. On the contrary, although the images generated by SD-XL and Midjourney are visually appealing, some generated text does not contain the desired text or contains illegible characters with incorrect strokes. The results also show that despite the strong supervision signals provided to ControlNet, it still struggles to generate images with accurate text consistent with the background. We also initiate a comparison with the Character-Aware Model [46] and the concurrent work GlyphDraw [49] using samples from their papers as their open-source code, checkpoints or APIs are not available. Figure 5 shows that TextDiffuser performs better than these methods. For instance, the Character-Aware Model suffers from misspelling issues (e.g., 'm' in 'Chimpanzees') due to its lack of explicit control, and GlyphDraw struggles with rendering images containing multiple text lines. For the part-image generation task, we visualize some results in Figure 6. In contrast to text editing tasks [88], we give the model sufficient flexibility to generate texts with reasonable styles. For instance, the image in the second row and first column contains the word \"country\" in green, while the model generates the word \"country\" in yellow. This is reasonable since it follows the style of the nearest word \"range\". Besides, our method can render realistic text coherent with the background, even in complex cases such as clothing. More qualitative results are shown in Appendix L. Time and Parameter Efficiency For the time efficiency, the first stage of Layout Generation leverages an auto-regressive Transformer whose prediction time correlates with the number of keywords. Specifically, we conduct experiments to evaluate the time overhead for different numbers of keywords, including 1 (1.07±0.03s), 2 (1.12±0.09s), 4 (1.23±0.13s), 8 (1.57±0.12s), 16 (1.83±0.12s), and 32 (1.95±0.28s). Meanwhile, the second stage of image generation is independent of the number of queries (7.12±0.77s). For the parameter efficiency, TextDiffuser builds upon Stable Diffusion 1.5 (859M parameters), adding a Layout Transformer in the first stage (+25M parameters) and modifying the second stage (+0.75M parameters), augmenting it by only about 3% in terms of parameters. (b) For part-image generation, our method receives high scores from human evaluators in these two aspects. Text Color Controllability In Figure 7, we showcase TextDiffuser's capability in controlling the color of generated texts through language descriptions. The visualization results show that TextDiffuser can successfully control the color of rendered text, further enhancing its controllability." }, { "figure_ref": [], "heading": "Discussion and Conclusion", "publication_ref": [ "b66", "b48", "b1", "b85", "b65" ], "table_ref": [], "text": "Discussion. We show that TextDiffuser maintains the capability and generality to create general images without text rendering in Appendix N. Besides, we compare our method with a text editing model in Appendix O, showing that TextDiffuser generates images with better diversity. We also present the potential of TextDiffuser on the text removal task in Appendix P. As for the limitations and failure cases, TextDiffuser uses the VAE networks to encode images into low-dimensional latent spaces for computational efficiency following latent diffusion models [67,49,2], which has a limitation in reconstructing images with small characters as shown in Appendix Q. We also observed failure cases when generating images from long text and showed them in Appendix Q. As for the broader impact, TextDiffuser can be applied to many designing tasks, such as creating posters and book covers. Additionally, the text inpainting task can be used for secondary creation in many applications, such as Midjourney. However, there may be some ethical concerns, such as the misuse of text inpainting for forging documents. Therefore, techniques for detecting text-related tampering [86] need to be applied to enhance security. In conclusion, we propose a two-stage diffusion model called TextDiffuser to generate images with visual-pleasing texts coherent with backgrounds. Using segmentation masks as guidance, the proposed TextDiffuser shows high flexibility and controllability in the generation process. We propose MARIO-10M containing 10 million image-text pairs with OCR annotations. Extensive experiments and user studies validate that our method performs better than existing methods on the proposed benchmark MARIO-Eval. For future work, we aim to address the limitation of generating small characters by using OCR priors following OCR-VQGAN [66] and enhance TextDiffuser's capabilities to generate images with text in multiple languages. Disclaimer Please note that the model presented in this paper is intended for academic and research purposes ONLY. Any use of the model for generating inappropriate content is strictly prohibited and is not endorsed by this paper. The responsibility for any misuse or improper use of the model lies solely with the users who generated such content, and this paper shall not be held liable for any such use." }, { "figure_ref": [ "fig_9" ], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [ "tab_5" ], "text": "This research was supported by the Research Grant Council of the Hong Kong Special Administrative Region under grant number 16203122. More samples are shown in Figure 11. The number of texts per image in MARIO-10M is shown in Table 5. Also, the MARIO-10M dataset reveals that about 90% of the text regions maintain a horizontal orientation with rotation angles smaller than 5 degrees without perspective changes. Hence, our layout generation model is designed to predict horizontal bounding boxes by detecting the coordinates of their left-top and bottom-right points. Adapting our model to predict more realistic scene text is feasible by detecting enhanced coordinates, such as eight coordinates for four points." }, { "figure_ref": [], "heading": "C More Details in MARIO-10M", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "D MARIO-10M Caption Templates", "publication_ref": [], "table_ref": [], "text": "Since TMDB movie/TV posters and Open Library book covers have no off-the-shelf captions, we construct them based on their titles with the following templates. {XXX} is a placeholder for title.\nFor MARIO-TMDB: " }, { "figure_ref": [], "heading": "E MARIO-10M Filtering Rules", "publication_ref": [ "b0", "b7", "b41", "b68", "b18", "b3" ], "table_ref": [], "text": "We clean data with five strict filtering rules to obtain high-quality data with text:\n• Height and width are larger than 256. Low-resolution samples often contain illegible texts, negatively impacting the training process. • Will not trigger NSFW. For the MARIO-LAION subset, we filter out those samples triggering the \"not sure for work\" flag to mitigate ethical concerns. • The number of detected text boxes should be within [1,8]. We detect texts with DB [42].\nSamples with too many texts typically have small areas for each text, which makes them difficult to recognize. Therefore, we remove these samples from the dataset. • Text areas are more than 10% of the whole image. According to Appendix B, we train a UNet [69] using SynthText [19] to obtain character-level segmentation masks of each sample. This criterion ensures that the text regions will not be too small. • At least one detected text appears in the caption. Noticing that the original dataset contains many noisy samples, we add this constraint to increase the relevance between images and captions. We utilize PARSeq [4] for text recognition." }, { "figure_ref": [ "fig_10", "fig_17" ], "heading": "F Analysis of OCR Performance on MARIO-10M", "publication_ref": [ "b41", "b33" ], "table_ref": [ "tab_6" ], "text": "As we rely on OCR tools to annotate MARIO-10M, it is necessary to evaluate the performance of these tools. Specifically, we manually annotate 100 samples for text recognition, detection, and character-level segmentation masks, then compare them with the annotations given by OCR tools.\nThe results are shown in Table 6. We notice that the performance of existing methods is lower than their results on text detection and spotting benchmarks. Taking DB [42] as an example, it can achieve text detection 91.8% precision on ICDAR 2015 dataset [34] while only achieving 76% on MARIO-10M. This is because there are many challenging cases in MARIO-10M, such as blurry and small text. Besides, a domain gap may exist since DB is trained on scene text detection datasets, while MARIO-10M comprises text images in various scenarios. Future work may explore more advanced recognition, detection, and segmentation models to mitigate the noise in OCR annotations. We demonstrate some OCR results in Figure 12. • a graffiti art of the text 'free the pink' on a wall • A professionally designed logo for a bakery called 'Just What I Kneaded'.\n• scholarly elephant reading a newspaper with the headline 'elephants take over the world' ChineseDrawText:\n• A street sign on the street reads 'Heaven rewards those who work hard'\n• There is a book on the table with the title 'Girl in the Garden' • Kitten holding a sign that reads 'I want fish' • A robot writes 'Machine Learning' on a podium • In a hospital, a sign that says 'Do Not Disturb' Figure 13: We demonstrate five samples for LAIONEval4000 (top), TMDBEval500 (middle), and OpenLibrary500 (bottom)." }, { "figure_ref": [ "fig_11" ], "heading": "J Experiment without Explicit Guidance of Segmentation Masks", "publication_ref": [], "table_ref": [], "text": "As shown in Figure 15, we try to explore the generation without explicit guidance. For example, according to the first row, we set the value of character pixels to 1 and non-character pixels to 0 (i.e., remove the content and only provide the position guidance). We observe that the model can generate some words similar to keywords but contain some grammatical errors (e.g., a missing \"l\" in \"Hello\"). Further, according to the second row, we train TextDiffuser without segmentation masks (i.e., remove both position and content guidance). In this case, the experiment is equivalent to directly fine-tuning a pre-trained latent diffusion model on the MARIO-10M dataset. The results show that the text rendering quality worsens, demonstrating explicit guidance's significance. " }, { "figure_ref": [ "fig_14", "fig_14" ], "heading": "M More Details about User Study", "publication_ref": [], "table_ref": [], "text": "User study on the whole-image generation task. The questionnaire consists of 15 cases, each of which includes two multiple-choice questions:\n• Which of the following images has the best text rendering quality?\n• Which of the following images best matches the text description?\nIn particular, the first question focuses on the text rendering quality. Taking Figure 20 as an example 7 , we expect the model to render the word \"EcoGrow\" accurately (i.e., without any missing or additional characters). The second question, on the other hand, focuses on whether the overall image matches the given prompt. For example, in Figure 20 (G), although the generated text is correct, it fails to meet the requirement in the prompt that the letter looks like a plant. We instruct users to select the best option. In cases where multiple good options are difficult to distinguish, they could choose multiple options. If users are unsatisfied with the options, they could decide not to select any. User study on the part-image generation task. We aim to let users vote on the quality of text inpainting (from 4 to 1, the higher, the better). We also designed two questions:\n• How is the text rendering quality?\n• Does the drawn text harmonize with the unmasked region?\nSpecifically, the first question concentrates on the accuracy of the text. The second question focuses on whether the generated part is harmonious with the unmasked part (i.e., whether the background and texture are consistent). " }, { "figure_ref": [ "fig_16" ], "heading": "N Generating Images without Text", "publication_ref": [], "table_ref": [ "tab_7" ], "text": "To show the generality of TextDiffuser, we experiment with generating images that do not contain texts and show results in Figure 22. Although TextDiffuser is fine-tuned with MARIO-10M, it still maintains a good generation ability for generating general images. Therefore, users have more options when using TextDiffuser, demonstrating its flexibility. We also provide quantitative evaluations to demonstrate TextDiffuser's generality in generating non-text general images. We compare TextDiffuser with our baseline Stable Diffusion 1.5 as they have the same backbone. For a quantitative evaluation, the FID scores of 5,000 images generated by prompts randomly sampled from MSCOCO are as in Table 8. The results indicate that TextDiffuser can maintain the ability to generate natural images even after fine-tuning the domain-specific dataset. " }, { "figure_ref": [ "fig_2" ], "heading": "O Comparisons between TextDiffuser and a Text Editing Model", "publication_ref": [ "b87" ], "table_ref": [], "text": "We visualize some results in Figure 23 compared with a text editing model SRNet [88]. Please note that the introduced text inpainting task differs from the text editing task in three aspects: " }, { "figure_ref": [], "heading": "Appendix", "publication_ref": [ "b96" ], "table_ref": [], "text": "A Architecture of U-Net and Design of Character-Aware Loss As shown in Figure 9, the U-Net contains four downsampling operations and four upsampling operations. The input will be downsampled to a maximum of 1/16. To provide the character-aware loss, the input feature F is 4-D with spatial size 64 × 64, while the output is 96-D (the length of alphabet A plus a null symbol indicating the non-character pixel) also with spatial size 64 × 64. Subsequently, a cross-entropy loss is calculated between the output feature (need to convert the predicted noise into predicted features) and the resized 64 × 64 character-level segmentation mask C ′ . The U-Net is pre-trained using the training set of MARIO-10M for one epoch. We utilize the Adadelta optimizer [97] and set the learning rate to 1. When training the diffusion model, the U-Net is frozen and only used to provide character-aware guidance. " }, { "figure_ref": [], "heading": "B Character-Level Segmentation Model", "publication_ref": [ "b18", "b96" ], "table_ref": [], "text": "We train the character-level segmentation model based on U-Net, whose architecture is similar to the architecture shown in Figure 9. We set the input size to 256 × 256, ensuring that most characters are readable at this resolution. We train the segmentation model using synthesized scene text images [19], printed text images, and handwritten text images 3 , totaling about 4M samples. We employ data augmentation strategies (e.g., blurring, rotation, and color enhancement) to make the segmentation model more robust. The segmentation model is trained for ten epochs using the Adadelta optimizer [97] with a learning rate of 1. Figure 10 shows some samples in the training dataset. " }, { "figure_ref": [], "heading": "G Samples in MARIO-Eval", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Subset", "publication_ref": [ "b71", "b45", "b48" ], "table_ref": [], "text": "Size Off-the-shelf Captions GT Images LAIONEval4000 4,000 TMDBEval500 500 OpenLibraryEval500 500 DrawBenchText [72] 21 DrawTextCreative [46] 175 ChineseDrawText [49] 218\nAs illustrated in Table 7, MARIO-Eval contains 5,414 prompts with six subsets. The ground truth images of some samples are shown in Figure 13, and captions for each category are shown below:\nLAIONEval4000:\n• 'Royal Green' Wristband Set " }, { "figure_ref": [], "heading": "DrawTextCreative:", "publication_ref": [], "table_ref": [], "text": "• a grumpy sunflower with a 'no solar panels' sign • A photo of a rabbit sipping coffee and reading a book. The book title 'The Adventures of Peter Rabbit' is visible." }, { "figure_ref": [], "heading": "H Implementation Details of Evaluation Criteria", "publication_ref": [], "table_ref": [], "text": "To evaluate the performance of TextDiffuser quantitatively, we utilize three criteria, including FID, CLIPScore, and OCR Evaluation. We detail the calculation of each criterion below.\nFID. We calculate the FID score using the pytorch-fid repository. Please note that the proposed MARIO-Eval benchmark's three subsets (DrawTextCreative, DrawBenchText, and ChineseDrawText) do not contain ground truth images. Therefore, we utilize 5,000 images in the other three subsets (LAIONEval4000, TMDBEval500, OpenLibraryEval500) as the ground truth images. We calculate the FID score using the 5,414 generated images and the 5,000 ground truth images.\nCLIP Score. We calculate the CLIP score using the clipscore repository. However, as with the FID score, we cannot calculate the CLIP score for the DrawTextCreative, DrawBenchText, and ChineseDrawText subsets due to the lack of ground truth images. Therefore, we only calculate the score on LAIONEval4000, TMDBEval500, and OpenLibraryEval500 subsets and report the average CLIP score.\nOCR Evaluation. For the MARIO-Eval benchmark, we use quotation marks to indicate the keywords that need to be painted on the image. Taking the caption [A cat holds a paper saying 'Hello World'] as an example, the keywords are 'Hello' and 'World'. We then use Microsoft Read API to detect and recognize text in the image. We evaluate OCR performance using accuracy, precision, recall, and F-measure. If the detected text matches the keywords exactly, it is considered correct. Precision represents the proportion of detected text that matches the keywords, while recall represents the proportion of keywords that appear in the image. We report the mean values of precision and recall, and calculate the F-measure using the following formula:" }, { "figure_ref": [], "heading": "I Visualization of Layouts Generated by Layout Transformer", "publication_ref": [], "table_ref": [], "text": "We visualize some generated layouts in Figure 14, showing that the Transformer can produce reasonable layouts.\nA robot writing 'Ethics 101' in chalk on a blackboard.\nA storefront with 'Google Research Pizza Cafe' written on it.\nAn antique bottle labeled 'Energy Tonic' A giant shoe, with the caption 'shoe for hokey pokey' A poster titled 'Quails of North America', showing different kinds of quails.\nA storefront with 'Deep Learning' written on it. A storefront with 'Hello World' written on it.\nA sign that says 'Google Brain Toronto'.\nA sign that says 'NeurIPS'. " }, { "figure_ref": [], "heading": "K Baseline Methods Experimental Settings", "publication_ref": [ "b61", "b59", "b81" ], "table_ref": [], "text": "We introduced all baseline methods and their experimental settings when we used to compare them with the TextDiffuser as follows.\nDALL•E [62] utilizes a text encoder to map a given prompt into a corresponding representation space. A prior model is then employed to map the text encoding to an image encoding. Finally, an image decoder generates an image based on the image encoding. Since there is no available code and model, we obtain the results using the provided API 4 .\nStable Diffusion (SD) utilizes CLIP [60] text encoder to obtain the embedding of user prompts, pre-trained VAE to encode original images and conducts the diffusion process in the latent space for computation efficiency. We use the public pre-trained model \"runwayml/stable-diffusion-v1-5\" based on Hugging Face diffusers [82]. The number of sampling steps is 50, and the scale of classifier-free guidance is 7.5." }, { "figure_ref": [], "heading": "Stable Diffusion XL (SD-XL", "publication_ref": [ "b101", "b81", "b11", "b11", "b59", "b81" ], "table_ref": [], "text": ") is an upgraded version of SD, featuring more parameters and utilizing a more powerful language model. Consequently, it can be expected to better understand prompts compared to SD. As the source code and model are not publicly available, we obtained the generation results through a web API 5 .\nMidjourney 6 is a commercial project that runs on Discord, allowing users to interact with a bot via the command-line interface. We generated images using the default parameters of Midjourney. For example, we can generate an image using the following command: /imagine an image of 'hello world' in Midjourney.\nControlNet [102] aims to control diffusion models by adding conditions using zero-convolution layers. We use the public pre-trained model \"lllyasviel/sd-controlnet-canny\" released by ControltNet authors and the implementation from Hugging Face diffusers [82]. For fair comparisons, we use the printed text images generated by our first-stage model to generate Canny maps as the condition of ControlNet. We use default parameters for inference, where the low and high thresholds of canny map generation are set to 100 and 200, respectively. The number of inference steps is 20, and the scale of classifier-free guidance is 7.5.\nDeepFloyd [12] designs three cascaded pixel-based diffusion modules to generate images of increasing resolution: 64x64, 256x256, and 1024x1024. All stage modules use frozen text encoders based on T5 Transformer [12]. Compared with CLIP [60], the T5 Transformer is a powerful language model that enables more effective text understanding. We use the public pretrained models released by DeepFloyd authors and the implementation from Hugging Face diffusers [82]. We use default models and parameters for inference, where the three pretrained cascaded models are \"DeepFloyd/IF-I-XL-v1.0\", \"DeepFloyd/IF-II-L-v1.0\", and \"stabilityai/stable-diffusion-x4-upscaler\". A storefront with 'Google Research Pizza Cafe' written on it." }, { "figure_ref": [], "heading": "L Visualization of More Generation Results by Our TextDiffuser", "publication_ref": [], "table_ref": [], "text": "A storefront with 'Deep Learning' written on it.\nA storefront with 'Google Brain Toronto' written on it.\nA sign that says 'Hello World'.\nA sign that says 'Diffusion'.\nA sign that says 'Google Research Pizza Cafe'.\nA storefront with 'NeurIPS' written on it. " }, { "figure_ref": [], "heading": "P Experimental Results of Text Removal", "publication_ref": [ "b43" ], "table_ref": [], "text": "We demonstrate some results of text removal in Figure 24, and the cases are obtained from the paper of EraseNet [44]. We can easily transform the text inpainting task into a text removal task by providing a mask and setting all regions to non-character in the segmentation mask. Experimental results demonstrate that our method can achieve results similar to the ground truth. " }, { "figure_ref": [], "heading": "Q Limitations and Failure Cases", "publication_ref": [ "b66", "b48", "b1" ], "table_ref": [], "text": "We observe failure cases when generating images with small characters and from long text.\nGenerating images with small characters. TextDiffuser uses the VAE networks to encode images into low-dimensional latent spaces for computational efficiency following latent diffusion models [67,49,2]. However, the compression process can result in losing details when generating images with small characters. As illustrated in Figure 25, we observe that the VAE fails to reconstruct small characters, where reconstructed strokes are unclear and reduce the legibility of the text. According to the generated images, the small characters appear to have vague or disjointed strokes (e.g., the character 'l' in 'World' and character 'r' in 'Morning'), which could impact the readability. As shown in Figure 26, we notice that using a more powerful backbone, such as Stable Diffusion 2.1, can mitigate this issue. When the image resolution is enhanced from 512×512 to 768×768 using Stable Diffusion 2.1 (instead of 1.5), the latent space resolution also increases from 64×64 to 96×96, enhancing the character-level representation. As the cost, the inference latency rises from 8.5s to 12.0s with a batch size of 1. Therefore, how to render small characters while maintaining the same time cost is worth further study.\nGenerating images from long text. We observed failure cases when generating images from long text with many keywords, where the generated words in the layouts are disordered and overlapped," } ]
Diffusion models have gained increasing attention for their impressive generation abilities but currently struggle with rendering accurate and coherent text. To address this issue, we introduce TextDiffuser, focusing on generating images with visually appealing text that is coherent with backgrounds. TextDiffuser consists of two stages: first, a Transformer model generates the layout of keywords extracted from text prompts, and then diffusion models generate images conditioned on the text prompt and the generated layout. Additionally, we contribute the first large-scale text images dataset with OCR annotations, MARIO-10M, containing 10 million image-text pairs with text recognition, detection, and character-level segmentation annotations. We further collect the MARIO-Eval benchmark to serve as a comprehensive tool for evaluating text rendering quality. Through experiments and user studies, we show that TextDiffuser is flexible and controllable to create high-quality text images using text prompts alone or together with text template images, and conduct text inpainting to reconstruct incomplete images with text. The code, model, and dataset will be available at https://aka.ms/textdiffuser.
TextDiffuser: Diffusion Models as Text Painters
[ { "figure_caption": "Figure 1 :1Figure 1: TextDiffuser generates accurate and coherent text images from text prompts or together with template images, as well as conducting text inpainting to reconstruct incomplete images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: TextDiffuser consists of two stages. In the first Layout Generation stage, a Transformerbased encoder-decoder model generates character-level segmentation masks that indicate the layout of keywords in images from text prompts. In the second Image Generation stage, a diffusion model generates images conditioned on noisy features, segmentation masks, feature masks, and masked features (from left to right) along with text prompts. The feature masks can cover the entire or part of the image, corresponding to whole-image and part-image generation. The diffusion model learns to denoise features progressively with a denoising and character-aware loss. Please note that the diffusion model operates in the latent space, but we use the image pixels for better visualization.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustrations of three subsets of MARIO-10M. See more details in Appendix C.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Visualizations of whole-image generation compared with existing methods. The first three cases are generated from prompts and the last three cases are from given printed template images.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Comparison with Character-Aware Model[46] and the concurrent GlyphDraw[49].", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Visualizations of part-image generation (text inpainting) from given images.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: Demonstration of using language descriptions to control text color.", "figure_data": "", "figure_id": "fig_6", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "For whole-image generation, our method clearly outperforms others in both aspects of text rendering quality and image-text matching.", "figure_data": "", "figure_id": "fig_7", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: User studies for whole-image generation and part-image generation tasks.", "figure_data": "", "figure_id": "fig_8", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: More samples in MARIO-10M.", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure 12: Visualization of some OCR annotations in MARIO-10M.", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Figure 15 :15Figure 15: Visualization of generation without explicit guidance.", "figure_data": "", "figure_id": "fig_11", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Figure 18 :18Figure 18: Visualization of more generation results by our TextDiffuser for the text-to-image with template task.", "figure_data": "", "figure_id": "fig_12", "figure_label": "18", "figure_type": "figure" }, { "figure_caption": "Figure 19 :19Figure 19: Visualization of more generation results for text inpainting. The images above the dash lines are from the test set of MARIO-10M, while the images below the dash lines are collected from the Midjourney community.", "figure_data": "", "figure_id": "fig_13", "figure_label": "19", "figure_type": "figure" }, { "figure_caption": "Figure 20 :20Figure 20: One case in the user study for the whole-image generation task.", "figure_data": "", "figure_id": "fig_14", "figure_label": "20", "figure_type": "figure" }, { "figure_caption": "Figure 21 :21Figure 21: One case in the user study for the part-image generation task.", "figure_data": "", "figure_id": "fig_15", "figure_label": "21", "figure_type": "figure" }, { "figure_caption": "Figure 22 :22Figure 22: Visualizations of general images generated by Stable Diffusion 1.5 and TextDiffuser.", "figure_data": "", "figure_id": "fig_16", "figure_label": "22", "figure_type": "figure" }, { "figure_caption": "( 1 )1The text editing task usually relies on the synthesized text image dataset for training (synthesizing two images with the different text given a background image and font as pairs). In contrast, the text inpainting task follows the mask-and-recover training scheme and can be trained with any text images. (2) Text editing emphasizes the preservation of the original fonts, while text inpainting allows for greater freedom. For example, we conduct four samplings for each case, and the generated results exhibit diversity and present reasonable font styles. (3) Text editing tasks cannot add text, highlighting the significance of the introduced text inpainting task.", "figure_data": "", "figure_id": "fig_17", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 23 :Figure 25 :2325Figure 23: Comparison with text editing model. Four cases are obtained from the paper of SRNet.", "figure_data": "", "figure_id": "fig_18", "figure_label": "2325", "figure_type": "figure" }, { "figure_caption": "Figure 26 :26Figure 26: Pre-trained on high-resolution Stable Diffusion 2.1 enhances the legibility of small text.", "figure_data": "", "figure_id": "fig_19", "figure_label": "26", "figure_type": "figure" }, { "figure_caption": "Figure 27 :27Figure 27: The issue of dealing with a large number of keywords.", "figure_data": "", "figure_id": "fig_20", "figure_label": "27", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Ablation about Layout Transformer.", "figure_data": "#Layer Width(P) IoU↑1-✓0.268 0.2892-✓0.269 0.2984-✓0.294 0.297", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Ablation on weight of character-aware loss.", "figure_data": "λ char Acc↑0 0.3960.001 0.4860.01 0.4940.1 0.4201 0.400", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation on twobranch training ratio σ.", "figure_data": "ratio Acc↑ / Det-F↑ / Spot-F↑00.344 / 0.870 / 0.6630.250.562 / 0.899 / 0.6360.50.552 / 0.881 / 0.7150.750.524 / 0.921 / 0.69510.494 / 0.380 / 0.218", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "The performance of text-to-image compared with existing methods. TextDiffuser performs the best regarding CLIPScore and OCR evaluation while achieving comparable performance on FID.", "figure_data": "MetricsStableDiffusion [67] ControlNet [102] DeepFloyd [12] TextDiffuserFID↓51.29551.48534.90238.758CLIPScore↑0.30150.34240.32670.3436OCR(Accuracy)↑0.00030.23900.02620.5609OCR(Precision)↑0.01730.52110.14500.7846OCR(Recall)↑0.02800.67070.22450.7802OCR(F-measure)↑0.02140.58650.17620.7824", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Number of texts per image in MARIO-10M.", "figure_data": "#Words12345678#Images 592,153 1,148,481 1,508,185 1,610,056 1,549,852 1,430,750 1,229,714 930,809#Ratio5.9%11.5%15.1%16.1%15.5%14.3%12.3%9.3%", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "OCR performance on MARIO-10M. IOU (binary) means we treat each pixel as two classes: characters and non-characters. The evaluation of recognition is included in the spotting task.", "figure_data": "DetectionSpottingSegmentationPrecision Recall F-measure Precision Recall F-measure IOU (binary) IOU0.760.790.780.730.750.740.700.59PrideChittyabouthocolPeppalig2015GREENEHIGHMonkeYWOOL2003epopLONORESPOLISHDancesportRILEYCAPTURELIESRELEASESTAGRAVEIEANIENEDIARYFREE!", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "FID scores on MSCOCO compared with Stable Diffusion.", "figure_data": "Sampling Steps Stable Diffusion TextDiffuser5026.4727.7210027.0227.04", "figure_id": "tab_7", "figure_label": "8", "figure_type": "table" } ]
Jingye Chen; Yupan Huang; Tengchao Lv; Lei Cui; Qifeng Chen; Furu Wei; Hkust
[ { "authors": "Omri Avrahami; Dani Lischinski; Ohad Fried", "journal": "", "ref_id": "b0", "title": "Blended diffusion for text-driven editing of natural images", "year": "2022" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro", "journal": "", "ref_id": "b1", "title": "ediffi: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Coloma Ballester; Marcelo Bertalmio; Vicent Caselles; Guillermo Sapiro; Joan Verdera", "journal": "IEEE transactions on image processing", "ref_id": "b2", "title": "Filling-in by joint interpolation of vector fields and gray levels", "year": "2001" }, { "authors": "Darwin Bautista; Rowel Atienza", "journal": "", "ref_id": "b3", "title": "Scene text recognition with permuted autoregressive sequence models", "year": "2022" }, { "authors": "Marcelo Bertalmio; Guillermo Sapiro; Vincent Caselles; Coloma Ballester", "journal": "", "ref_id": "b4", "title": "Image inpainting", "year": "2000" }, { "authors": "Marcelo Bertalmio; Luminita Vese; Guillermo Sapiro; Stanley Osher", "journal": "IEEE transactions on image processing", "ref_id": "b5", "title": "Simultaneous structure and texture image inpainting", "year": "2003" }, { "authors": "L Glenn; Mehdi Cash; Hatamian", "journal": "Computer vision", "ref_id": "b6", "title": "Optical character recognition by the method of moments", "year": "1987" }, { "authors": "Huiwen Chang; Han Zhang; Jarred Barber; Jose Maschinot; Lu Lezama; Ming-Hsuan Jiang; Kevin Yang; Murphy; Michael William T Freeman; Rubinstein", "journal": "", "ref_id": "b7", "title": "Muse: Text-to-image generation via masked generative transformers", "year": "2023" }, { "authors": "Jingye Chen; Bin Li; Xiangyang Xue", "journal": "", "ref_id": "b8", "title": "Scene text telescope: Text-focused scene image super-resolution", "year": "2021" }, { "authors": "Tianqi Chen; Bing Xu; Chiyuan Zhang; Carlos Guestrin", "journal": "", "ref_id": "b9", "title": "Training deep nets with sublinear memory cost", "year": "2016" }, { "authors": "Hyungjin Chung; Byeongsu Sim; Jong-Chul Ye", "journal": "", "ref_id": "b10", "title": "Come-closer-diffuse-faster: Accelerating conditional diffusion models for inverse problems through stochastic contraction", "year": "2021" }, { "authors": " Deepfloyd", "journal": "", "ref_id": "b11", "title": "", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b12", "title": "Diffusion models beat gans on image synthesis", "year": "2021" }, { "authors": "Shancheng Fang; Hongtao Xie; Yuxin Wang; Zhendong Mao; Yongdong Zhang", "journal": "", "ref_id": "b13", "title": "Read like humans: Autonomous, bidirectional and iterative language modeling for scene text recognition", "year": "2021" }, { "authors": "Zhida Feng; Zhenyu Zhang; Xintong Yu; Yewei Fang; Lanxin Li; Xuyi Chen; Yuxiang Lu; Jiaxiang Liu; Weichong Yin; Shikun Feng", "journal": "", "ref_id": "b14", "title": "Ernie-vilg 2.0: Improving text-to-image diffusion model with knowledge-enhanced mixture-of-denoising-experts", "year": "2023" }, { "authors": "Rinon Gal; Yuval Alaluf; Yuval Atzmon; Or Patashnik; H Amit; Gal Bermano; Daniel Chechik; Cohen-Or", "journal": "", "ref_id": "b15", "title": "An image is worth one word: Personalizing text-to-image generation using textual inversion", "year": "2022" }, { "authors": "Jiaxi Gu; Xiaojun Meng; Guansong Lu; Lu Hou; Minzhe Niu; Hang Xu; Xiaodan Liang; Wei Zhang; Xin Jiang; Chunjing Xu", "journal": "", "ref_id": "b16", "title": "Wukong: 100 million large-scale chinese cross-modal pre-training dataset and a foundation framework", "year": "2022" }, { "authors": "Shuyang Gu; Dong Chen; Jianmin Bao; Fang Wen; Bo Zhang; Dongdong Chen; Lu Yuan; Baining Guo", "journal": "", "ref_id": "b17", "title": "Vector quantized diffusion model for text-to-image synthesis", "year": "2022" }, { "authors": "Ankush Gupta; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b18", "title": "Synthetic data for text localisation in natural images", "year": "2016" }, { "authors": "Kamal Gupta; Alessandro Achille; Justin Lazarow; Larry Davis; Vijay Mahadevan; Abhinav Shrivastava", "journal": "", "ref_id": "b19", "title": "Layout generation and completion with self-attention", "year": "2021" }, { "authors": "Haibin He; Xinyuan Chen; Chaoyue Wang; Juhua Liu; Bo Du; Dacheng Tao; Yu Qiao", "journal": "", "ref_id": "b20", "title": "Diff-font: Diffusion model for robust one-shot font generation", "year": "2023" }, { "authors": "Yijuan Liu He; John Lu; Dinei Corring; Cha Florencio; Zhang", "journal": "", "ref_id": "b21", "title": "Diffusion-based document layout generation", "year": "2023" }, { "authors": "Jack Hessel; Ari Holtzman; Maxwell Forbes; Ronan Le Bras; Yejin Choi", "journal": "", "ref_id": "b22", "title": "Clipscore: A reference-free evaluation metric for image captioning", "year": "2021" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "", "ref_id": "b23", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "", "ref_id": "b24", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Chitwan Saharia; William Chan; David J Fleet; Mohammad Norouzi; Tim Salimans", "journal": "Journal of Machine Learning Research", "ref_id": "b25", "title": "Cascaded diffusion models for high fidelity image generation", "year": "2022" }, { "authors": "Jonathan Ho; Tim Salimans", "journal": "", "ref_id": "b26", "title": "Classifier-free diffusion guidance", "year": "2021" }, { "authors": "Shaohan Huang; Li Dong; Wenhui Wang; Yaru Hao; Saksham Singhal; Shuming Ma; Tengchao Lv; Lei Cui; Owais Khan Mohammed; Qiang Liu", "journal": "", "ref_id": "b27", "title": "Language is not all you need: Aligning perception with language models", "year": "2023" }, { "authors": "Yupan Huang; Bei Liu; Yutong Lu", "journal": "", "ref_id": "b28", "title": "Unifying multimodal transformer for bi-directional image and text generation", "year": "2021" }, { "authors": "Shir Iluz; Yael Vinker; Amir Hertz; Daniel Berio; Daniel Cohen-Or; Ariel Shamir", "journal": "", "ref_id": "b29", "title": "Wordas-image for semantic typography", "year": "2023" }, { "authors": "Max Jaderberg; Karen Simonyan; Andrea Vedaldi; Andrew Zisserman", "journal": "", "ref_id": "b30", "title": "Synthetic data and artificial neural networks for natural scene text recognition", "year": "2014" }, { "authors": "Jiabao Ji; Guanhua Zhang; Zhaowen Wang; Bairu Hou; Zhifei Zhang; Brian Price; Shiyu Chang", "journal": "", "ref_id": "b31", "title": "Improving diffusion models for scene text editing with dual encoders", "year": "2023" }, { "authors": "Abdu Akash; Thibaut Jyothi; Jiawei Durand; Leonid He; Greg Sigal; Mori", "journal": "", "ref_id": "b32", "title": "Layoutvae: Stochastic scene layout generation from a label set", "year": "2019" }, { "authors": "Dimosthenis Karatzas; Lluis Gomez-Bigorda; Anguelos Nicolaou; Suman Ghosh; Andrew Bagdanov; Masakazu Iwamura; Jiri Matas; Lukas Neumann; Vijay Ramaseshan Chandrasekhar; Shijian Lu", "journal": "", "ref_id": "b33", "title": "Icdar 2015 competition on robust reading", "year": "2015" }, { "authors": "P Diederik; Max Kingma; Welling", "journal": "", "ref_id": "b34", "title": "Auto-encoding variational bayes", "year": "2014" }, { "authors": "Yuxin Kong; Canjie Luo; Weihong Ma; Qiyuan Zhu; Shenggao Zhu; Nicholas Yuan; Lianwen Jin", "journal": "", "ref_id": "b35", "title": "Look closer to supervise better: One-shot font generation via component-based discriminator", "year": "2022" }, { "authors": "Praveen Krishnan; Rama Kovvuri; Guan Pang; Boris Vassilev; Tal Hassner", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b36", "title": "Textstylebrush: Transfer of text aesthetics from a single example", "year": "2023" }, { "authors": "Junyeop Lee; Yoonsik Kim; Seonghyeon Kim; Moonbin Yim; Seung Shin; Gayoung Lee; Sungrae Park", "journal": "", "ref_id": "b37", "title": "Rewritenet: Reliable scene text editing with implicit decomposition of text contents and styles", "year": "2022" }, { "authors": "Benjamin Lefaudeux; Francisco Massa; Diana Liskovich; Wenhan Xiong; Vittorio Caggiano; Sean Naren; Min Xu; Jieru Hu; Marta Tintore; Susan Zhang; Patrick Labatut; Daniel Haziza", "journal": "", "ref_id": "b38", "title": "xformers: A modular and hackable transformer modelling library", "year": "2022" }, { "authors": "Jianan Li; Jimei Yang; Aaron Hertzmann; Jianming Zhang; Tingfa Xu", "journal": "", "ref_id": "b39", "title": "Layoutgan: Generating graphic layouts with wireframe discriminators", "year": "2019" }, { "authors": "Minghao Li; Tengchao Lv; Jingye Chen; Lei Cui; Yijuan Lu; Dinei Florencio; Cha Zhang; Zhoujun Li; Furu Wei", "journal": "", "ref_id": "b40", "title": "Trocr: Transformer-based optical character recognition with pre-trained models", "year": "2023" }, { "authors": "Minghui Liao; Zhisheng Zou; Zhaoyi Wan; Cong Yao; Xiang Bai", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b41", "title": "Real-time scene text detection with differentiable binarization and adaptive scale fusion", "year": "2022" }, { "authors": "Wentong Liao; Kai Hu; Michael Ying Yang; Bodo Rosenhahn", "journal": "", "ref_id": "b42", "title": "Text to image generation with semantic-spatial aware gan", "year": "2022" }, { "authors": "Chongyu Liu; Yuliang Liu; Lianwen Jin; Shuaitao Zhang; Canjie Luo; Yongpan Wang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b43", "title": "Erasenet: End-to-end text removal in the wild", "year": "2020" }, { "authors": "Hongyu Liu; Bin Jiang; Yibing Song; Wei Huang; Chao Yang", "journal": "", "ref_id": "b44", "title": "Rethinking image inpainting via a mutual encoder-decoder with feature equalizations", "year": "2020" }, { "authors": "Rosanne Liu; Dan Garrette; Chitwan Saharia; William Chan; Adam Roberts; Sharan Narang; Irina Blok; Mohammad Mical; Noah Norouzi; Constant", "journal": "", "ref_id": "b45", "title": "Character-aware models improve visual text rendering", "year": "2022" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b46", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Andreas Lugmayr; Martin Danelljan; Andres Romero; Fisher Yu; Radu Timofte; Luc Van Gool", "journal": "", "ref_id": "b47", "title": "RePaint: Inpainting using Denoising Diffusion Probabilistic Models", "year": "2022" }, { "authors": "Jian Ma; Mingjun Zhao; Chen Chen; Ruichen Wang; Di Niu; Haonan Lu; Xiaodong Lin", "journal": "", "ref_id": "b48", "title": "Glyphdraw: Learning to draw chinese characters in image synthesis models coherently", "year": "2023" }, { "authors": "Jianqi Ma; Zhetong Liang; Lei Zhang", "journal": "", "ref_id": "b49", "title": "A text attention network for spatial deformation robust scene text image super-resolution", "year": "2022" }, { "authors": "Jianqi Ma; Weiyuan Shao; Hao Ye; Li Wang; Hong Wang; Yingbin Zheng; Xiangyang Xue", "journal": "IEEE transactions on multimedia (TMM)", "ref_id": "b50", "title": "Arbitrary-oriented scene text detection via rotation proposals", "year": "2018" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mc-Grew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b51", "title": "Glide: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2022" }, { "authors": "S Safaa; Jumana A Omran; Jarallah", "journal": "", "ref_id": "b52", "title": "Iraqi car license plate recognition using ocr", "year": "2017" }, { "authors": "Song Park; Sanghyuk Chun; Junbum Cha; Bado Lee; Hyunjung Shim", "journal": "", "ref_id": "b53", "title": "Multiple heads are better than one: Few-shot font generation with multiple localized experts", "year": "2021" }, { "authors": "Deepak Pathak; Philipp Krahenbuhl; Jeff Donahue; Trevor Darrell; Alexei A Efros", "journal": "", "ref_id": "b54", "title": "Context encoders: Feature learning by inpainting", "year": "2016" }, { "authors": "Akshay Gadi Patil; Omri Ben-Eliezer; Or Perel; Hadar Averbuch-Elor", "journal": "", "ref_id": "b55", "title": "Read: Recursive autoencoders for document layout generation", "year": "2020" }, { "authors": "Jialun Peng; Dong Liu; Songcen Xu; Houqiang Li", "journal": "", "ref_id": "b56", "title": "Generating diverse structure for image inpainting with hierarchical vq-vae", "year": "2021" }, { "authors": "Tingting Qiao; Jing Zhang; Duanqing Xu; Dacheng Tao", "journal": "", "ref_id": "b57", "title": "Mirrorgan: Learning text-toimage generation by redescription", "year": "2019" }, { "authors": "Yadong Qu; Qingfeng Tan; Hongtao Xie; Jianjun Xu; Yuxin Wang; Yongdong Zhang", "journal": "", "ref_id": "b58", "title": "Exploring stroke-level modifications for scene text editing", "year": "2023" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "", "ref_id": "b59", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "The Journal of Machine Learning Research", "ref_id": "b60", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b61", "title": "Hierarchical text-conditional image generation with clip latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b62", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Scott Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee", "journal": "", "ref_id": "b63", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "Xiaoming Yurui Ren; Ruonan Yu; Thomas H Zhang; Shan Li; Ge Liu; Li", "journal": "", "ref_id": "b64", "title": "Structureflow: Image inpainting via structure-aware appearance flow", "year": "2019" }, { "authors": "Juan A Rodriguez; David Vazquez; Issam Laradji; Marco Pedersoli; Pau Rodriguez", "journal": "", "ref_id": "b65", "title": "Ocr-vqgan: Taming text-within-image generation", "year": "2023" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b66", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Xuejian Rong; Chucai Yi; Yingli Tian", "journal": "IEEE Transactions on Image Processing", "ref_id": "b67", "title": "Unambiguous scene text segmentation with referring expression comprehension", "year": "2019" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "", "ref_id": "b68", "title": "U-net: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b69", "title": "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Huiwen Chang; Chris A Lee; Jonathan Ho; Tim Salimans; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b70", "title": "Palette: Image-to-image diffusion models", "year": "2022" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Kamyar Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans", "journal": "", "ref_id": "b71", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Markus Schreiber; Fabian Poggenhans; Christoph Stiller", "journal": "", "ref_id": "b72", "title": "Detecting symbols on road surface for mapping and localization using ocr", "year": "2014" }, { "authors": "Christoph Schuhmann; Romain Beaumont; Richard Vencu; Cade Gordon; Ross Wightman; Mehdi Cherti; Theo Coombes; Aarush Katta; Clayton Mullis; Mitchell Wortsman", "journal": "", "ref_id": "b73", "title": "Laion-5b: An open large-scale dataset for training next generation image-text models", "year": "2022" }, { "authors": "Christoph Schuhmann; Richard Vencu; Romain Beaumont; Robert Kaczmarczyk; Clayton Mullis; Aarush Katta; Theo Coombes; Jenia Jitsev; Aran Komatsuzaki", "journal": "", "ref_id": "b74", "title": "Laion-400m: Open dataset of clip-filtered 400 million image-text pairs", "year": "2021" }, { "authors": "Yongliang Shen; Kaitao Song; Xu Tan; Dongsheng Li; Weiming Lu; Yueting Zhuang", "journal": "", "ref_id": "b75", "title": "Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface", "year": "2023" }, { "authors": "Baoguang Shi; Xiang Bai; Cong Yao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b76", "title": "An end-to-end trainable neural network for imagebased sequence recognition and its application to scene text recognition", "year": "2016" }, { "authors": "Wataru Shimoda; Daichi Haraguchi; Seiichi Uchida; Kota Yamaguchi", "journal": "", "ref_id": "b77", "title": "De-rendering stylized texts", "year": "2021" }, { "authors": "Jascha Sohl-Dickstein; Eric Weiss; Niru Maheswaranathan; Surya Ganguli", "journal": "", "ref_id": "b78", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "Jiaming Song; Chenlin Meng; Stefano Ermon", "journal": "", "ref_id": "b79", "title": "Denoising diffusion implicit models", "year": "2021" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "NeurIPS", "ref_id": "b80", "title": "Attention is all you need", "year": "2017" }, { "authors": "Suraj Patrick Von Platen; Anton Patil; Pedro Lozhkov; Nathan Cuenca; Kashif Lambert; Mishig Rasul; Thomas Davaadorj; Wolf", "journal": "", "ref_id": "b81", "title": "Diffusers: State-of-the-art diffusion models", "year": "2022" }, { "authors": "Ziyu Wan; Jingbo Zhang; Dongdong Chen; Jing Liao", "journal": "", "ref_id": "b82", "title": "High-fidelity pluralistic image completion with transformers", "year": "2021" }, { "authors": "Wenhai Wang; Enze Xie; Xiang Li; Wenbo Hou; Tong Lu; Gang Yu; Shuai Shao", "journal": "", "ref_id": "b83", "title": "Shape robust text detection with progressive scale expansion network", "year": "2019" }, { "authors": "Wenjia Wang; Enze Xie; Xuebo Liu; Wenhai Wang; Ding Liang; Chunhua Shen; Xiang Bai", "journal": "", "ref_id": "b84", "title": "Scene text image super-resolution in the wild", "year": "2020" }, { "authors": "Yuxin Wang; Hongtao Xie; Mengting Xing; Jing Wang; Shenggao Zhu; Yongdong Zhang", "journal": "", "ref_id": "b85", "title": "Detecting tampered scene text in the wild", "year": "2022" }, { "authors": "M James; Gene D White; Rohrer", "journal": "IBM Journal of research and development", "ref_id": "b86", "title": "Image thresholding for optical character recognition and other applications requiring character image extraction", "year": "1983" }, { "authors": "Liang Wu; Chengquan Zhang; Jiaming Liu; Junyu Han; Jingtuo Liu; Errui Ding; Xiang Bai", "journal": "", "ref_id": "b87", "title": "Editing text in the wild", "year": "2019" }, { "authors": "Zizhang Wu; Xinyuan Chen; Jizheng Wang; Xiaoquan Wang; Yuanzhu Gan; Muqing Fang; Tianhao Xu", "journal": "Applied Intelligence", "ref_id": "b88", "title": "Ocr-rtps: an ocr-based real-time positioning system for the valet parking", "year": "2023" }, { "authors": "Xingqian Xu; Zhifei Zhang; Zhaowen Wang; Brian Price; Zhonghao Wang; Humphrey Shi", "journal": "", "ref_id": "b89", "title": "Rethinking text segmentation: A novel dataset and a text-specific refinement approach", "year": "2021" }, { "authors": "Xixi Xu; Zhongang Qi; Jianqi Ma; Honglun Zhang; Ying Shan; Xiaohu Qie", "journal": "", "ref_id": "b90", "title": "Bts: A bi-lingual benchmark for text segmentation in the wild", "year": "2022" }, { "authors": "Binxin Yang; Shuyang Gu; Bo Zhang; Ting Zhang; Xuejin Chen; Xiaoyan Sun; Dong Chen; Fang Wen", "journal": "", "ref_id": "b91", "title": "Paint by example: Exemplar-based image editing with diffusion models", "year": "2022" }, { "authors": "Ling Yang; Zhilong Zhang; Yang Song; Shenda Hong; Runsheng Xu; Yue Zhao; Yingxia Shao; Wentao Zhang; Bin Cui; Ming-Hsuan Yang", "journal": "", "ref_id": "b92", "title": "Diffusion models: A comprehensive survey of methods and applications", "year": "2022" }, { "authors": "Boxi Yu; Yong Xu; Yan Huang; Shuai Yang; Jiaying Liu", "journal": "Neurocomputing", "ref_id": "b93", "title": "Mask-guided gan for robust text editing in the scene", "year": "2021" }, { "authors": "Deli Yu; Xuan Li; Chengquan Zhang; Tao Liu; Junyu Han; Jingtuo Liu; Errui Ding", "journal": "", "ref_id": "b94", "title": "Towards accurate scene text recognition with semantic reasoning networks", "year": "2020" }, { "authors": "Yingchen Yu; Fangneng Zhan; Rongliang Wu; Jianxiong Pan; Kaiwen Cui; Shijian Lu; Feiying Ma; Xuansong Xie; Chunyan Miao", "journal": "ACM MM", "ref_id": "b95", "title": "Diverse image inpainting with bidirectional and autoregressive transformers", "year": "2022" }, { "authors": "D Matthew; Zeiler", "journal": "", "ref_id": "b96", "title": "Adadelta: an adaptive learning rate method", "year": "2012" }, { "authors": "Yanhong Zeng; Jianlong Fu; Hongyang Chao; Baining Guo", "journal": "", "ref_id": "b97", "title": "Learning pyramid-context encoder network for high-quality image inpainting", "year": "2019" }, { "authors": "Guanhua Zhang; Jiabao Ji; Yang Zhang; Mo Yu; Tommi Jaakkola; Shiyu Chang", "journal": "", "ref_id": "b98", "title": "Towards coherent image inpainting using denoising diffusion implicit models", "year": "2023" }, { "authors": "Han Zhang; Tao Xu; Hongsheng Li; Shaoting Zhang; Xiaogang Wang; Xiaolei Huang; Dimitris N Metaxas", "journal": "", "ref_id": "b99", "title": "Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks", "year": "2017" }, { "authors": "Linjiang Zhang; Peng Wang; Hui Li; Zhen Li; Chunhua Shen; Yanning Zhang", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b100", "title": "A robust attentional framework for license plate recognition in the wild", "year": "2020" }, { "authors": "Lvmin Zhang; Maneesh Agrawala", "journal": "", "ref_id": "b101", "title": "Adding conditional control to text-to-image diffusion models", "year": "2023" }, { "authors": "Lei Zhao; Qihang Mo; Sihuan Lin; Zhizhong Wang; Zhiwen Zuo; Haibo Chen; Wei Xing; Dongming Lu", "journal": "", "ref_id": "b102", "title": "Uctgan: Diverse image inpainting based on unsupervised cross-space translation", "year": "2020" }, { "authors": "Minyi Zhao; Miao Wang; Fan Bai; Bingjia Li; Jie Wang; Shuigeng Zhou", "journal": "", "ref_id": "b103", "title": "C3-stisr: Scene text image super-resolution with triple clues", "year": "2022" }, { "authors": "Chuanxia Zheng; Tat-Jen Cham; Jianfei Cai", "journal": "", "ref_id": "b104", "title": "Pluralistic image completion", "year": "2019" }, { "authors": "Xinyu Zhou; Cong Yao; He Wen; Yuzhi Wang; Shuchang Zhou; Weiran He; Jiajun Liang", "journal": "", "ref_id": "b105", "title": "East: an efficient and accurate scene text detector", "year": "2017" }, { "authors": "Xinyan Zu; Haiyang Yu; Bin Li; Xiangyang Xue", "journal": "", "ref_id": "b106", "title": "Weakly-supervised text instance segmentation", "year": "2023" } ]
[ { "formula_coordinates": [ 3, 496.92, 586.7, 7.74, 8.64 ], "formula_id": "formula_0", "formula_text": ")1" }, { "formula_coordinates": [ 3, 196.99, 636.03, 307.67, 9.68 ], "formula_id": "formula_1", "formula_text": "B = Φ D (Φ E (Embedding(P))) = (b 0 , b 1 , ..., b K-1 ).(2)" }, { "formula_coordinates": [ 4, 108.55, 520.72, 395.45, 24.44 ], "formula_id": "formula_2", "formula_text": "M ∈ R 1×H ′ ×W ′ and 4-D masked feature FM ∈ R 4×H ′ ×W ′ ." }, { "formula_coordinates": [ 4, 216.46, 622.46, 288.21, 13.2 ], "formula_id": "formula_3", "formula_text": "l denoising = ||ϵ -ϵ θ ( F, Ĉ, M, FM , P, T )|| 2 2 .(3)" }, { "formula_coordinates": [ 4, 244.89, 692.38, 259.78, 9.65 ], "formula_id": "formula_4", "formula_text": "l = l denoising + λ char * l char .(4)" } ]
2023-08-24
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12" ], "table_ref": [], "text": "M ODERN deep neural networks are a powerful new tech- nology, exhibiting impressive performance in extensive scenarios ranging from perceptual information understanding to hard scientific problem deciphering. A typical class of network models is Convolutional Neural Networks (CNN), which are popular in semantic applications for visual data such as digital images.\nThe effectiveness of CNN models is, in general, due to the highly discriminative and data-adaptive representation, technically enabled by the composition of numerous nonlinear transformations with learnable parameters [1]. Besides their effectiveness, the emerging Deep Learning as a Service (DLaaS) paradigm allows developers to apply or deploy proven CNN models in a simple and efficient manner [2].\nThe above two factors have led to the widespread emergence of deep learning-based artificial intelligence systems in everyday life, even expanding to many security and trust sensitive scenarios [3], e.g., self-driving cars, surveillance, drones and robotics, voice command recognition, and Face ID on mobile phones.\nDespite the advantages with respect to discriminability, the robustness of deep CNN models has raised general concerns, especially in the computer vision community [4]. It has been shown that adversarial perturbations on the input example can cause significant fluctuations on such deep representations, even though such perturbations are quasi-imperceptible for humans [5].\nSince the first work by Szegedy et al. [6], various attack methods have been proposed for well crafting such adversarial perturbations. In general, design goals cover high fooling rate [7], low perceptual loss [8], efficient generation [9], high transferability with respect to models [10], high universality with respect to examples [11], less need for model knowledge [12], easy physical implementation [13], etc. Hence, these recent advances allow an adversary to perform effective evasion attacks at a low cost, which interfere in the deep representation and thus fundamentally destabilize the artificial system. In addition, the popularity of DLaaS-based development further increases this security threat. Specifically, a successful attack on few off-the-shelf deep models in DLaaS platform will affect a wide range of users and their systems, especially where safety-critical scenarios may be involved.\nIntegrating the above facts, it is generally agreed that adversarial perturbations have become a real threat that the security community has to face." }, { "figure_ref": [], "heading": "A. State of the Arts", "publication_ref": [ "b13", "b14" ], "table_ref": [], "text": "For protecting deep neural networks and their application systems, some defense strategies towards adversarial attacks are designed over diverse research hypotheses and implementation paths.\nThe straightforward strategy is to include adversarial examples into the training, called adversarial training, which promotes the model learning and adapting to adversarial patterns [14]. Such data-level defenses are intuitively and empirically effective, but the retraining increases the implementation cost and the resulting robustness cannot be adaptively/interpretably generalized to unseen adversarial patterns [15]. An example under analysis" }, { "figure_ref": [], "heading": "Inference Phase", "publication_ref": [ "b15", "b17", "b18", "b20", "b21", "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b29" ], "table_ref": [], "text": "Adversarial example detector with key and leaned parameters Adversarial example detected, service denied Adversarial example not detected, service offered (data randomization optional) Securing DLaaS from adversarial example Fig. 2. Illustration for the inference phase of the proposed adversarial example detector. When the detector is trained (i.e., with the key and learned SVM parameters), it can be deployed in various real-world scenarios, where a DLaaS scenario is chosen as an example. For an image under analysis, our detector predicts whether it contains adversarial perturbations. With such prediction, the DLaaS is able to deny the service when the adversarial perturbation is revealed.\nAnother popular strategy seeks architecture-level designs as defenses, e.g., regularization structure [16]- [18] and certified robustness [19]- [21], embedding adversarial robustness priors into networks. Ideally, such built-in designs have the potential to achieve adaptive/interpretable robustness against adversarial examples. Yet, in practice, tricky trade-offs between clean and robust accuracy are almost inevitable [22], and the additional cost for resetting DLaaS is also significant.\nIn contrast to above direct attempts on robustness enhancement, the detection-only approach [23] serve as a preprocessing for deep models to filter out potential adversarial examples, without modifying the model itself. Therefore, in adversarial perturbation detector, it is allowed to reduce both clean-accuracy loss and additional implementation costs with respect to adversarial training and architecture-level design, being particularly suitable for DLaaS scenarios.\nTechnically, such forensic detectors strongly relies on proper representation of the adversarial patterns while not being disturbed by the image contents, i.e., the discriminative decomposition of natural-artificial data [24], [25].\nHere, the priori knowledge of adversarial perturbation is potentially useful to achieve such discriminative decomposition. In the classical learning of adversarial perturbation, the visual loss is usually limited by a regularization term; in turn, forcing the perturbation to appear mainly in high frequency bands where the human visual system is less sensitive. Therefore, in typical scenarios it is reasonable to assume that the adversarial examples have common frequency distribution pattern that differs from the natural examples, especially in high frequency bands.\nFrom this assumption, researchers have tried to extract highfrequency discriminative features by various transformations, such as denoising filters [26], Spatial Rich Model (SRM) [27], Principal Component Analysis (PCA) [28], Discrete Cosine Transform (DCT) [29], Discrete Sine Transform (DST) [30], and Discrete Wavelet Transform (DWT) [30], with varying degrees of success." }, { "figure_ref": [], "heading": "B. Motivations", "publication_ref": [ "b30", "b31" ], "table_ref": [], "text": "This line of adversarial perturbation detectors still falls short in accuracy and security, especially for real-world scenarios.\n• For the accuracy, they are able to provide good detection performance when the scale of either the image patterns or the perturbation patterns is limited (e.g., MNIST with one attack), while the accuracy degrades significantly when both are large [31]. This phenomenon implies that existing representation methods fail to capture adversarial patterns comprehensively.\n• As for the security, successful defense-aware (secondary) attack, i.e., evading the detector as well as fooling the model, is realistic under the Kerckhoffs's principle [32] -the adversary is fully aware of the detector. Technically, with the transparent definition of the decomposition, such perturbation can be learned by penalizing the specific features that the detector focuses on." }, { "figure_ref": [ "fig_0" ], "heading": "C. Contributions", "publication_ref": [ "b32", "b29", "b30", "b29", "b30", "b33" ], "table_ref": [], "text": "Motivated by above facts, we attempt to present an accurate and secure adversarial example detector, technically enabled by a spatial-frequency discriminative decomposition with secret keys, as shown in Figs. 1 and2. To the best of our knowledge, this is a very early work on fundamental improving both accuracy and security of detector from the basic decomposition stage.\n• Regarding the accuracy, we attribute the failure at larger data scales to the contradiction of spatial and frequency discriminability in the decomposition, surprisingly revisiting a classical problem of signal processing [33].\nRecently, Agarwal et al. [30], [31] preliminarily explored this fundamental problem, where a decision-level fusion of the DST (biased towards frequency) and DWT (biased towards spatial) was designed to mitigate such contradiction. In this paper, we introduce Krawtchouk polynomials as basis functions for the discriminative decomposition, providing a mid-scale representation different from the global trigonometric basis in DST and the local wavelet basis in DWT. Note that such representation with rich spatial-frequency information can provide more clues of adversarial patterns for the prediction, being a more flexible detector than the decision-level fusion [30], [31].\n• Regarding the security, we attribute the successful defense-aware (secondary) attack to the the transparency of detector-interested features, as a foundational threat in many existing methods. In this paper, we propose a detransparency strategy based on random feature selection [34]. More specifically, a pseudorandom number generator determines the spatial and frequency parameters of the decomposition, and the user controls such generator by setting the secret keys (i.e., the seed values). After such randomization, the defense-aware attack becomes difficult (or impossible) as the confusion of the boundary between to-be-attacked and to-be-evaded features, even for the adversary with the knowledge of the detector algorithm other than the keys." }, { "figure_ref": [], "heading": "II. RELATED WORKS", "publication_ref": [], "table_ref": [], "text": "In this section, we review topics closely related to this paper, covering the generation, defense, and explanation of adversarial perturbations." }, { "figure_ref": [], "heading": "A. Generation of Adversarial Perturbations", "publication_ref": [ "b4", "b4", "b5", "b9", "b10", "b11", "b8", "b34", "b34", "b7", "b35", "b36", "b37", "b12", "b5", "b8", "b8", "b6", "b7" ], "table_ref": [], "text": "The generation is to find a perturbation that has a small enough perceptual loss but can cause a large enough fluctuation in the output of the model [5]. In this regard, the researchers mainly focus on the objective definitions, perceptual metrics, and optimization algorithms.\n1) Objective Definitions: Regarding the objective definition, it is a mathematical description for the goal of the generation. Therefore, the general definition is the combination of perceptual loss term (for visually reasonable) and model loss term (for high fooling rate) [5]. In the community, different scenarios and needs have led to a variety specific forms of this general definition. Here, in addition to the typical model loss for single model with single image [6], researchers are also seeking more efficient formalization of model loss for high transferability with respect to models [10], high universality with respect to examples [11], and less need for model knowledge [12].\nAs a more difficult term in the general definition, the perceptual loss term implies in fact the capturing of subjective visual perception through an objective metric. Next, we discuss it specifically.\n2) Perceptual Metrics: Regarding the perceptual metric, popular works are generally defined based on norms, with clear physical meanings. For example, the ℓ ∞ -norm constrains the largest magnitude among each element in the perturbation [9]; the ℓ 2 -norm constrains the Euclidean length of the perturbation [35]; the ℓ 1 -norm constrains the Manhattan length of the perturbation [35]; the ℓ 0 -norm constrains the number of perturbed pixels in the image [8]. From a visual perspective, the adversarial perturbations generated from norm-based perceptual metrics are generally non-semantic and appear similar to random noise.\nMore recently, several works explore a visually meaningful way for the generation of adversarial perturbations. Here, examples include visual watermarking [36], rain traces [37], out-of-focus blurring [38], and physically easy-to-implement patches [13]. Such attempts provide new insights into the generation of adversarial perturbations, further expanding the general definition of perceptual loss.\n3) Optimization Algorithms: Regarding the optimization algorithm, it aims to discover the parameters in their space (generally very large) that enable the objective to hold. Therefore, the accuracy [6] and efficiency [9] of the solving are the core properties of interest to researchers. Here, popular optimization algorithms include single-step optimization [9], iterative optimization (especially gradient-based optimization) [7], and heuristic optimization [8], with varying accuracy and efficiency." }, { "figure_ref": [], "heading": "B. Defense against Adversarial Perturbations", "publication_ref": [ "b22", "b13", "b14", "b38", "b39", "b40", "b21", "b22", "b25", "b26", "b27", "b28", "b29", "b29", "b1", "b41", "b15", "b16", "b17", "b18", "b19", "b20", "b42" ], "table_ref": [], "text": "The defense is to protect the model from adversarial attacks, or to improve the performance of the model under adversarial attacks, with reasonable cost (in efficiency and accuracy) [23]. In this regard, the researchers mainly focus on the data-level defenses and the architecture-level defenses.\n1) Data-level Defenses: Regarding the data-level defense, the most straightforward idea is adversarial training [14],\nwhere adversarial examples is included in the training. Empirically, this approach can significantly improve the robustness to the kinds of adversarial attacks seen in training. However, such robustness is not guaranteed for unseen attacks, and the model retraining often leads to a multiplicative increase in the training size [15].\nAnother idea is data recovery [39], where the perturbation patterns are reduced and the resulting recovered image is fed into a regular deep model. Here, by separating clean and perturbed data in the image, one can either directly remove the perturbed data (i.e., data compression) [40] or introduce randomness in the perturbed data to destroy their patterns (i.e., data randomization) [41]. Since discriminatively separating perturbed data and accurately reconstructing clean images are still open problems, this class of methods obviously faces a tricky accuracy-robustness tradeoff [22].\nA similar idea to data recovery is the adversarial perturbation detector [23], it evaluates the likelihood of the presence of a perturbation in an image. Technically, such detector also relies on the discriminative decomposition of natural-artificial data, e.g., denoising filters [26], SRM [27], PCA [28], DCT [29], DST [30], and DWT [30]. Note that the accuracyrobustness tradeoff is largely avoided here, since such detector only excludes potential adversarial images without involving the lossy recovery for all input images. Hence, it is particularly suitable DLaaS scenarios [2], with promising overall performance on accuracy, robustness, and implementation cost.\n2) Architecture-level Defenses: Regarding the architecturelevel defense, deep models are redesigned to satisfy various mathematical constraints with respect to robustness priors, especially from the perspective of function continuity (e.g., Lipschitz continuity).\nTypical efforts in this regard include the learnable denoising modules [42], the regularization for convolutional layers (e.g., with smooth kernels) [16], the regularization for activation functions (e.g., with greater nonlinear properties) [17], and the regularization for loss functions (e.g., a new loss for penalizing model fluctuations) [18]. More recent efforts provide stronger results, i.e., certified robustness, from cryptographic [19], statistical [20], and geometric [21] perspectives.\nIdeally, such architecture-level efforts have the potential to achieve a built-in robustness against any adversarial examples with high confidence. In fact, however, these theory-driven designs face the same accuracy-robustness tradeoff. Owing to the lack of faithful and self-consistent explanation theory [43] for the behavior of deep models and the existence of adversarial examples." }, { "figure_ref": [], "heading": "C. Explanation for Adversarial Perturbations", "publication_ref": [ "b43", "b8", "b44", "b5", "b8", "b44", "b45", "b46" ], "table_ref": [], "text": "The explanation is to build a self-consistent theory that faithfully explains and predicts adversarial phenomena [44]. In this regard, it is still controversial and no consensus theory has been developed in the community. A well-known case is the debate between Goodfellow et al. [9] and Tanay et al. [45] on the linear explanation.\nCurrently, researchers have proposed various explanations from training data (e.g., low sampling probability assumption) [6], model structure (e.g., linear assumption) [9], manifold geometry (e.g., boundary tilting assumption) [45], and data features (e.g., high frequency and non-robust feature assumption) [46], [47].\nIn general, such theories generally agree only with their local observations, and it is possible to find counterexamples that invalidate them. Therefore, there still appears a long way to go for the faithful and self-consistent explanation theory." }, { "figure_ref": [], "heading": "III. GENERAL FORMULATION", "publication_ref": [], "table_ref": [], "text": "In this section, we formulate the basic aspects involved in this paper, i.e., model, attack, and defense." }, { "figure_ref": [], "heading": "A. Model Formulation", "publication_ref": [], "table_ref": [], "text": "We focus on deep convolutional neural network models for image classification tasks. Such classification models can be formulated as a mapping M : X → Y, where X ⊂ [0, 1] W ×H×C is the image space with image size W × H × C and normalized pixel intensity [0, 1]; Y = {1, 2, ..., N } is the label space with the category size N . For a clean data point (x, y) ∈ X × Y, the classification model M is considered to be correct if and only if M(x) = y." }, { "figure_ref": [], "heading": "B. Attack Formulation", "publication_ref": [], "table_ref": [], "text": "Attack objective. We focus on evasion attacks against above image classification models. For an image x with the true label y, the goal of the attacker is to find an adversarial perturbation δ such that:\nM(x + δ) ̸ = y,(1)\ni.e., fooling the prediction, typically under a norm-based constraint for the imperceptibility of perturbation:\n||δ|| < ε.(2)\nThe resulting perturbed input for the model, i.e., the adversarial example, is denoted as x ′ = x + δ ∈ X . Note that the above attack objective is a formulation for the defense-unaware scenario, in line with the threat assumption of most related works. As for the defense-aware (secondary) attack, the goal of the attacker include the fooling of the adversarial perturbation detector: D(x ′ ) = D(x), in addition to the misclassification and imperceptibility above. Here, we denote the above defense-unaware and defense-aware attacks as A(x) = δ. More detailed formulation on attacker and defender will be provided later.\nKnowledge and capability. For the defense-unaware scenario, the attacker has perfect knowledge of the image classification model M (i.e., full access to its mechanism and parameters), but has zero knowledge of the detector D (or not aware of its presence). For the defense-aware scenario, the attacker likewise has perfect knowledge of the model M and has limited knowledge of the detector D -the attacker is aware the given model M is being secured with a detector, knows its mechanism, but does not have access its secret keys. For both scenarios, the attacker has the capability to arbitrarily modify pixels within a given image x." }, { "figure_ref": [], "heading": "C. Defense Formulation", "publication_ref": [], "table_ref": [], "text": "Defense objective. The goal of our defense is to design an adversarial perturbation detector D such that: D(x) = 0 (i.e., predicted as clean example) and D(x ′ ) = 1 (i.e., predicted as adversarial example) for any clean image x ∈ X and corresponding adversarial example x ′ by any pertinent A. In other words, since x and x ′ differ only in δ, the above binary classification task is practically equivalent to a hypothesis testing for the presence of adversarial patterns (with respect to δ) under the strong interference from image content (with respect to x)." }, { "figure_ref": [], "heading": "IV. TOWARDS ACCURATE AND SECURE DETECTOR: FORMULATION", "publication_ref": [], "table_ref": [], "text": "In this section, we provide new formal analyses of the concerned accuracy and security issues, as the theoretical basis for the proposed detector." }, { "figure_ref": [], "heading": "A. Discriminability Analysis", "publication_ref": [ "b25", "b26", "b28", "b29" ], "table_ref": [], "text": "The design of efficient detector D relies heavily on a discriminative decomposition F with respect to x and δ in x ′ . Such decomposition can be formulated as a mapping F : X → C, where X is the image space and C is the space of decomposition coefficients.\nTo achieve a high discriminability, two constraints are typically imposed on the explicit forms of F. Proposition 1. (Distributive Property of Addition). The F should be distributed over addition to fulfill:\nF(x ′ ) = F(x+ δ) = F(x) + F(δ), i.e.\n, the decomposition of the adversarial example is equivalent to the sum of the decomposition of the clean example and the perturbation.\nSuch distributive property of addition facilitates the separation of natural-artificial data, and the terms like filtering [26], convolution [27], and inner product [29], [30] from successful detectors satisfy this property." }, { "figure_ref": [], "heading": "Proposition 2. (Statistical Regularity", "publication_ref": [], "table_ref": [], "text": "). The F(x) is expected to exhibit a consistent statistical pattern in C for any clean image x ∈ X , and the F(δ) is also expected to exhibit another consistent statistical pattern in C for any δ by pertinent A; meanwhile, such two statistical patterns should be significantly different.\nHere, we provide only a general description for the regularity. However, more specific assumptions about statistical regularity are needed in order to design F explicitly.\nNext, we introduce two conjectures on the frequency and spatial distribution patterns of natural images and adversarial perturbations, as the main assumptions of our work. Note that such intuitive conjectures have general recognition in the research community, despite the current lack of serious proofs." }, { "figure_ref": [], "heading": "Conjecture 1. (Frequency Pattern). Natural images and adversarial perturbations have different frequency distributions.", "publication_ref": [ "b47", "b45", "b46", "b48", "b49", "b50", "b29", "b29" ], "table_ref": [], "text": "For natural images, their local smoothness and nonlocal selfsimilarity nature leads to the dominance of low-frequency components [48]. For adversarial perturbations, the imperceptibility constraint drives the generated perturbations to contain mainly high-frequency components, due to the relative insensitivity of the human visual system to highfrequency information [46], [47].\nConjecture 2. (Spatial Pattern). Natural images and adversarial perturbations have different spatial distributions. For natural images, their high-frequency components are mainly distributed in shape edges and texture regions [49]. For adversarial perturbations, their high-frequency components are likely to extend beyond such shape edges and texture regions, since there is no term in the typical generation to directly control the spatial distribution of the perturbations [50], [51].\nRemark. With Conjectures 1 and 2, we note that both frequency and spatial properties are of interest in the design of F. More specifically, discriminative features should be formed in such frequency and spatial ranges where the distribution patterns of x and δ differ, e.g., a naturally smooth region but with artificial high-frequency perturbations. Therefore, F is expected to analyze above frequency and spatial differences with sufficient resolution.\nHowever, it is also well known that there is a trade-off between frequency and spatial resolutions of the orthogonal transform. In the related works, global transforms such as DST [30] bias towards frequency resolution, at the cost of spatial resolution. As the opposite, local transforms such as DWT [30] bias towards spatial resolution, at the cost of frequency resolution. Therefore, neither can provide rich spatial-frequency information.\nMotivated by above facts, we introduce a mid-scale representation based on Krawtchouk polynomials, which provides a good trade-off between spatial and frequency resolutions than global/local transforms." }, { "figure_ref": [], "heading": "B. Security Analysis", "publication_ref": [ "b25", "b29" ], "table_ref": [], "text": "Suppose the adversarial perturbations generated by (1) and (2) have a strong response (with main energy) on a subset of coefficients: C A ⊂ C, and the detector-interested (with higher weights) subset of coefficients is denoted as C D ⊂ C. Therefore, the effectiveness of detector D is in fact built on the intersection C A ∩ C D , where we denote the corresponding coefficient subset for a perturbation δ as F C A ∩C D (δ). Proposition 3. (Defense-aware Attack: General Objective). With the above assumptions and notations, the objective of the defense-aware attack can be modeled on the correlation ρ:\nρ(F C A ∩C D (δ old ), F C A ∩C D (δ new )) < η,(3)\nfor an image x, with also objectives ( 1) and ( 2), where δ old is the generated perturbation in the defense-unaware scenario, i.e., by only (1) and (2), and δ new is the perturbation being generated.\nProposition 4. (Defense-aware Attack: A Special Case). In practice, objective (3) can be converted into another easily implemented objective -directly shifting the main energy of the perturbation δ new out of C A ∩ C D :\n||F C A ∩C D (δ new )|| < λ,(4)\nsuch an objective allows defense-aware perturbations to form on the relative complement C\\(C A ∩C D ), hence destroying the consistent pattern of F(δ old ) and F(δ new ) on the adversarial and detector-interested\nC A ∩ C D .\nRemark. Note that above new objectives in the defenseaware scenario, i.e., (3) or ( 4), rely in fact on the sufficient knowledge of detector D and decomposition F. Under the Kerckhoffs's principle, we assume that the adversary has such knowledge (see also Section III-B), which is practical due to the transparency in the design of D and the definition of F for many related works [26] ∼ [30]. With such knowledge, the adversary is able to identify the critical C A ∩ C D and hence successfully evade the detector D as well as fool the model M by the objectives ( 1) and ( 2) with ( 3) or (4).\nMotivated by above facts, we introduce a de-transparency strategy on F by random feature selection with keys, which confusion of the boundary between to-be-attacked features C\\(C A ∩ C D ) and to-be-evaded features C A ∩ C D ." }, { "figure_ref": [], "heading": "V. TOWARDS ACCURATE AND SECURE DETECTOR: METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "In this section, we specify the proposed detector against adversarial perturbation. We will first give an overview drawing a high-level intuition for readers, and then the main techniques within the methodology are presented separately." }, { "figure_ref": [ "fig_0" ], "heading": "A. Overview", "publication_ref": [], "table_ref": [], "text": "In general, the proposed detector consists of three main steps, going through training and inference phases.\nAs shown in Fig. 1, the training of our detector aims to fit a mapping from a set of adversarial/clean image examples to the corresponding labels. For an efficient mapping, our detector is equipped with three steps that perform decomposition, featuring, and classification, respectively.\n• Regarding the decomposition, the image is projected into a space defined by Krawtchouk polynomials, in which the clean image x and adversarial perturbation δ are better separable from x ′ (see also the discriminability analysis in Section IV-A). As for security, the frequency parameters (n, m) and spatial parameters (P x , P y ) are determined by the user key, which is considered as a detransparent mechanism for decomposition (see also the security analysis in Section IV-B). The description on decomposition will be presented in Section V-B.\n• Regarding the featuring, a compact-but-expressive feature vector is formed on the obtained decomposition coefficients. Here, the statistical regularity in the frequency and spatial domains are introduced, where coefficients are integrated and enhanced as feature by means of above beneficial priors. The description on featuring will be presented in Section V-C. • Regarding the classification, the above features are fed into a Support Vector Machine (SVM) for an automatic two-class separation of feature space. Note that the decomposition and featuring are non-learning, and the SVM is the only learning part of the detector. The description on classification will be presented in Section V-D. As shown in Fig. 2, the trained detector is an accurate and secure defense tool against adversarial attacks, technically enabled by a spatial-frequency discriminative decomposition with secret keys. It can be deployed in various real-world scenarios, where a DLaaS scenario is chosen as an example. For an image under analysis, our detector (with the same key and SVM parameters in training) predicts whether it contains adversarial perturbations. By such prediction, the DLaaS is able to deny the service when the adversarial perturbation is revealed." }, { "figure_ref": [ "fig_0" ], "heading": "B. Spatial-frequency Discriminative Decomposition with Secret Keys", "publication_ref": [ "b51", "b29", "b52", "b52", "b53" ], "table_ref": [], "text": "Our defense framework starts with a spatial-frequency discriminative decomposition of input example. In this subsection, we discuss the explicit definition of such decomposition, as well as the security enhancement strategy with secret keys. Definition 1. (Orthogonal Decomposition). The orthogonal decomposition of an image function f ∈ X , denoted as F, is defined as the inner product of the image function f and the basis function V [52]:\nF(f ) = D V * nm (x, y)f (x, y)dxdy,(5)\nwhere the frequency parameters (n, m) ∈ Z 2 , the domain of basis function D ⊂ {(x, y) ∈ R 2 }, the asterisk * denotes the complex conjugate. Here, the basis function V satisfy orthogonality over the domain D as:\nD V nm (x, y)V * n ′ m ′ (x, y)dxdy = δ nn ′ δ mm ′ , (6\n)\nwhere δ is the Kronecker delta function:\nδ ab = [a = b].\nWith the Definition 1, one can note that the orthogonal decomposition methods used in existing detectors all have the form ( 5) and ( 6), and their difference lies in the definition of the basis function V . For example, the DST and DWT in the detector of Agarwal et al. [30] are with the global trigonometric basis and local wavelet basis, respectively. In this paper, we define a mid-scale basis by Krawtchouk polynomials for a better trade-off between spatial and frequency resolutions. Definition 2. (Krawtchouk Basis). The Krawtchouk basis function V is defined as [53]:\nV nm (x, y) = Kn (x; P x , W ) Km (y; P y , H),(7)\nwhere domain of basis function\nD = {(x, y) ∈ [0, 1, ..., W ]× [0, 1, ..., H]} with image size W × H, frequency parameters (n, m) ∈ [0, 1, ..., W ] × [0, 1, ..., H]\n, spatial parameters (P x , P y ) ∈ (0, 1) 2 , and weighted Krawtchouk polynomials K are defined as:\nKl (z; P, L) = L z P z (1 -P ) L-z (-1) l ( 1-P P ) l l!Γ(-L) Γ(l-L) • 2 F 1 (-l, -z; -L; 1 P ),(8)\nwhere hypergeometric function 2 F 1 is defined as:\n2 F 1 (a, b; c; d) = ∞ k=0 (a) k (b) k (c) k d k k! ,(9)\nwith Pochhammer symbol: (a) k = Γ(a + k)/Γ(a). For more efficient computation of (8), i.e., avoiding the infinite summation in (9), we introduce the recursive formula for weighted Krawtchouk polynomials K: Kl+1 =\n(1-P )(l+1)\nP (L-l) (LP -2lP + l -z) Kl P (l -L) - (1-P ) 2 (l+1)l P 2 (L-l)(L-l+1) l(1 -P ) Kl-1 P (l -L) ,(10)\nwith initial items K1 and K0 :\nK1 (z; P, L) = (1 - z P L ) K0 ,(11)\nK0 (z; P, L) = L z P z (1 -P ) L-z . (12\n)\nBy substituting the basis of Definition 2 into the decomposition of Definition 1, we have formulated the Krawtchouk decomposition, which is fundamental in our detector.\n1) Discriminability Analysis: Next, we will discuss the key property of Krawtchouk decomposition, i.e., time-frequency discriminability, and its role in the detection of adversarial perturbation.\nProperty 1. (Time-frequency Discriminability). The frequency and spatial properties of the represented image information by Krawtchouk decomposition can be controlled with the frequency parameters (n, m) and spatial parameters (P x , P y ), respectively [53].\nRemark. In the study of image representation, it has been found that the frequency and spatial properties of orthogonal decomposition rely on the number and location of zeros of the basis functions, respectively [54]. Specific to this paper, the core of time-frequency discriminability in Krawtchouk decomposition is that the number and location of zeros can be adjusted explicitly by (n, m) and (P x , P y ), respectively:\n• The number of zeros of the 1D Kl (z; P, L) is proportional to l. As for the 2D V nm (x, y) = Kn (x) Km (y), similar conclusion holds with respect to the n and m at the x-direction and y-direction, respectively.. • The location of zeros of the 1D Kl (z; P, L) is biased towards 0 when P < 0.5, uniform when P = 0.5, and biased towards 1 when P > 0.5, where the more deviation of P from 0.5 is, the more biased the distribution of zeros is. As for the 2D V nm (x, y) = Kn (x) Km (y), similar conclusion holds with respect to the P x and P y at the x-direction and y-direction, respectively. In Fig. 3, we illustrate 1D plots of weighted Krawtchouk polynomials Kl (z; P, L) for a high-level intuition of such time-frequency discriminability. Here, the plots under different parameter settings: l = {2, 4, 8} and P = {0.25, 0.5, 0.75} with L = 100. As can be expected, changing l will change the number of zeros of K, which in turn corresponds to a change in the frequency properties. As for P , the change of its value will change the distribution of zeros, which in turn corresponds to a change in the spatial properties. The 2D plots of V nm (x, y) with respect to (n, m) and (P x , P y ) are given in Fig. 1, where frequency and spatial property changes at the x and y directions can be observed." }, { "figure_ref": [], "heading": "Main Result 1. (Discriminability for Adversarial Perturbation). The Krawtchouk decomposition is discriminative for adversarial perturbation due to the following factors: 1)", "publication_ref": [ "b25", "b26", "b28", "b29", "b29", "b33" ], "table_ref": [], "text": "The Krawtchouk decomposition defined by the inner product (Definitions 1 and 2) satisfies the distributive property of addition (Proposition 1); 2) The Krawtchouk decomposition with time-frequency discriminability (Property 1) is able to explore the statistical regularity (Proposition 2), when the frequency pattern (Conjecture 1) and spatial pattern (Conjecture 2) hold in natural images and adversarial perturbations.\nRemark. Although decompositions in competing detectors, e.g., denoising filters [26], SRM [27], DCT [29], DST [30], and DWT [30], all satisfy the distributive property of addition (Proposition 1), achieving the statistical regularity (Proposition 2) is still an open problem. Global decompositions, e.g., DST, are biased towards frequency resolution, thus better exploiting frequency patterns, but fail in mining spatial patterns. In contrast, local decompositions, e.g., denoising filters, SRM, and DWT, can fully exploit the spatial patterns, but only provide limited frequency resolution and thus fail in mining frequency patterns. As a mid-scale representation, the Krawtchouk decomposition provides a good trade-off between spatial and frequency resolutions than above global/local methods. It is thus expected to reveal a more comprehensive pattern of adversarial perturbation over the both spatial and frequency dimensions.\n2) Security Analysis: With the Property 1 and the Main Result 1, we have analyzed the theoretical effectiveness of the Krawtchouk decomposition in detecting adversarial perturbations. Next, we will discuss the de-transparency strategy of decomposition to provide certain security guarantees against defense-aware (secondary) attacks.\nIn the implementation of previous detector, the parameters of the orthogonal decomposition were often determined explicitly by the user. In our implementation, the frequency parameters (n, m) and spatial parameters (P x , P y ) are determined by a pseudorandom number generator, i.e., random feature selection [34]. Note that the random numbers will be appropriately scaled and quantized to fit the physical meaning of these parameters, i.e., within the domains [0, 1, ..., W ] × [0, 1, ..., H] and (0, 1) 2 , respectively. The user keeps the seed value of the generator secret, where such seed is considered as the key for decomposition. In fact, the secret key determines the set of coefficients (with respect to spatial and frequency features) that the detector can explore, and hence changing the key will result in a different detector (see also Property 1)." }, { "figure_ref": [], "heading": "Main Result 2. (Security for Defense-aware Attack).", "publication_ref": [ "b25", "b26", "b28", "b29", "b29", "b33" ], "table_ref": [], "text": "The random feature selection is security for defense-aware attack due to the following factor. After the randomization of Krawtchouk decomposition, the adversary is difficult (or impossible) to identify the adversarial and detectorinterested C A ∩ C D (Propositions 3 and 4), even with the knowledge of the detector algorithm other than the keys. Furthermore, such de-transparency strategy confusion of the boundary between to-be-attacked features C\\(C A ∩ C D ) and to-be-evaded features C A ∩ C D , resulting in a dilemma for the adversary in attacking the model M and evading the detector D.\nRemark. Some decompositions in competing detectors, e.g., denoising filters [26] and SRM [27], are based on a few fixed filters/bases. Such design is easy for the adversary to form an effective evasion of the detector. Other detectors use a more comprehensive bases as the decomposition, e.g., DCT [29], DST [30], and DWT [30], which intuitively enhances both the discriminability and security. However, the risk of evading the detector remains, where the adversary can still determine the critical C A ∩ C D due to the transparency in the design of detector and the definition of decomposition. In our work, the random feature selection provides a well detransparency mechanism, placing fundamental dilemma for the defense-aware attack.\nWe would like to state that Main Result 2 is a special case of the general modeling for the random feature selection in forensic detectors. Therefore, we do not provide a formal analysis on security, for avoiding a large repetition. The interested reader is referred to the detailed proofs and numerical results by Chen et al. [34]." }, { "figure_ref": [], "heading": "C. Frequency-band Integration and Feature Enhancement", "publication_ref": [], "table_ref": [], "text": "We first recall Section V-B and provide some proper notations. After the Krawtchouk decomposition, the set of spatialfrequency discriminability (Main Result 1) coefficients is denoted as C = {c n,m,Px,Py }, where the random feature selection provides certain security guarantees with respect to defenseaware attacks (Main Result 2). In the implementation, we prefer to sample a few spatial parameters (P x , P y ) but sample a large number of frequency parameters (n, m). It allows for capturing the potential adversarial patterns as comprehensively as possible while bounding the complexity.\nFrequency-band Integration. With the sampling strategy of the parameters, the coefficient set C is very high-dimensional and exhibits certain information redundancy, where direct knowledge engineering is usually inefficient. Here, the statistical regularity (Proposition 2) are explored for forming a compact-but-expressive feature vector V on the coefficient set C. More specifically, the coefficients of similar frequency properties in C are integrated as a component of feature vector V, inspired by the frequency pattern (Conjecture 1) and spatial pattern (Conjecture 2). First, the space of frequency coefficients of the Krawtchouk decomposition, i.e., (n, m) ∈ [0, 1, ..., W ] × [0, 1, ..., H], is divided equally into # B bands under ℓ 2 norm:\nB i = {(n, m) : i -1 # B ||(W, H)|| 2 ≤ ||(n, m)|| 2 < i # B ||(W, H)|| 2 },(13)\nwhere i = 1, 2, ..., # B and C = ∪ # B i=1 B i . Then, the coefficients c n,m,Px,Py in the frequency band B i are considered with similar frequency properties and they are integrated as a feature component:\nV i (P x , P y ) = (n,m)∈Bi c n,m,Px,Py ,(14)\nwhere the feature vector V = {V i (P x , P y )}. Feature Enhancement. In fact, the feature V obtained by frequency-band integration can still be enhanced, i.e., improving compactness and expressiveness. Here, we present two simple enhancement strategies, note that they are optional and not mandatory in the implementation.\n• Weighting: Starting from pairs of clean and adversarial examples in the training set, we evaluate in which frequency bands the features exhibit stronger variability. Simple functions (e.g. Gaussian functions) reflecting such variability can be set to weight the obtained feature vector, where the more discriminative bands are highlighted. • Ranking: Starting from the features and labels of the training examples, we calculate the correlation of each feature dimension/component with the labels. Then, the feature vector is re-ranked according to the relevance, where the dimension/component with low relevance can be dropped directly." }, { "figure_ref": [], "heading": "D. SVM Prediction", "publication_ref": [], "table_ref": [], "text": "After the featuring of Section V-C, we formed compactbut-expressive feature V from the Krawtchouk coefficient set " }, { "figure_ref": [], "heading": "VI. EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "In this section, we will evaluate the capability of the proposed method to detect adversarial examples by extensive quantitative analysis. The code is available online at https://github.com/ChaoWang1016/ASD.\nWe first provide the basic setup with respect to models, attacks, and defenses in the following experiments. Then, the proposed detector is evaluated with benchmarking, crossing, and challenging protocols, thus determining its position with respect to current state-of-the-art detectors and its effectiveness for realistic scenarios." }, { "figure_ref": [ "fig_2" ], "heading": "A. Experiment Setup", "publication_ref": [ "b54", "b55", "b56", "b57", "b58", "b60", "b61", "b62", "b63", "b8", "b6", "b64", "b65", "b66", "b67", "b68", "b69", "b70", "b71", "b10", "b72", "b73", "b74", "b25", "b75", "b26", "b76", "b29", "b29" ], "table_ref": [], "text": "In general, the experiments of this paper involve the setup of three aspects: 1) the foundational deep models along with datasets, which are going to be attacked/defended; 2) the adversarial perturbation generators for attacking; 3) the adversarial perturbation detectors for defensing. In Fig. 4, we provide the visualization for some adversarial images.\nThe deep models involved in experiments are\n• LeNet [55];\n• VGG-16 [56];\n• GoogLeNet [57];\n• CaffeNet [58]. The datasets involved in experiments are\n• MNIST [59], a large dataset of handwritten digits; CIFAR-10 [60], a large dataset of small-size color images;\n• MEDS [61], a face image dataset;\n• Multi-PIE [62], a face image dataset;\n• PubFig [63], face image dataset;\n• ImageNet [64], a very large dataset of natural image. The adversarial attacks involved in experiments are\n• FGSM [9], i.e., Fast Gradient Sign Method with variants of L1 and L2 norms; • BIM [7], i.e., Basic Iterative Method, as an iterative version of FGSM also with variants of L1 and L2 norms; • PGD [65], i.e., Projected Gradient Descent, with variants of L2 norm;\n• APGD [66], i.e., Auto PGD;\n• DeepFool [67];\n• FFGSM [68], i.e., Fast adversarial training using FGSM;\n• FAB [69], i.e., Fast Adaptive Boundary;\n• Square [70], i.e., a black-box attack based on squareshaped updates; • TPGD [71], i.e., Theoretically principled PGD;\n• EOTPGD [72], i.e., Expectation Over Transformation and PGD;\n• Universal [11], i.e., a perturbation can attack images universally; • F3 [73], i.e., Fast Feature Fool, as also a universal perturbation. The adversarial perturbation detectors (i.e., comparison methods) involved in experiments are\n• arXiv'17 [74] by looking at Bayesian uncertainty estimates; • ICLR'18 [75] by detecting out-of-distribution images;\n• TDSC'18 [26] by denoising filters;\n• IJCV'19 [76] by characterizing abnormal filter response behavior; • CVPR'19 [27] by steganalysis and spatial rich model; • NDSS'19 [77] by neural network invariant checking;\n• TDSC'20 [30] by global and local orthogonal decomposition. Note that the results for above comparison methods in following experiments are mainly cited from [30]." }, { "figure_ref": [ "fig_3", "fig_5", "fig_7", "fig_9", "fig_11", "fig_4" ], "heading": "B. Benchmarking Experiments", "publication_ref": [], "table_ref": [], "text": "In this part, we provide benchmarking evaluations of the proposed detector with respect to current state-of-the-art detectors, under the typical experimental protocol in these related works.\n1) MNIST: Fig. 5 shows the accuracy comparison of 7 detectors with respect to 7 attacks on the MNIST dataset. Here, 9000 clean images are selected from the MNIST, and then 9000 corresponding perturbed images are formed by each attack. For each competing detector, it is trained and tested based on the above images, with a training-testing split of 50%-50% on both original and perturbed images. It can be observed that both TDSC'20 and proposed detectors achieve significant gains in detection accuracy with respect to other advanced detectors over different attacks. A possible explanation is that the orthogonal decomposition provides more comprehensive representations of perturbations for the classifier.\n2) CIFAR-10: Fig. 6 shows the accuracy comparison of 6 detectors with respect to 7 attacks on the CIFAR-10 dataset. Here, the experiment covers 10000 clean images and 10000 corresponding perturbed images for each attack, also with 50%-50% training-testing split. Compared to the results on MNIST, one can note a significant performance degradation in competing methods, even > 10% degradation in ICLR'18 and TDSC'18 with respect to PGD attacks. This is mainly due to the richer patterns of image content in CIFAR-10, which acts as a strong interference for representing perturbation patterns. Among them, the proposed detector exhibits the least performance degradation over different attacks, even when compared to TDSC'20. Such a phenomenon further confirms the effectiveness of our spatial-frequency discriminative decomposition.\n3) MEDS and Multi-PIE: In Fig. 7 and Fig. 8, we provide accuracy comparison of 7 detectors with respect to 6 attacks on face datasets MEDS and Multi-PIE, respectively. Here, universal perturbation is imposed on small-scale face images, with 50%-50% training-testing split, where both perturbation and content patterns are relatively homogeneous. Even under this protocol, none of the competing methods except TDSC'20 achieved > 90% detection accuracy. This implies that non-complete image representations are not sufficient for supporting an efficient detector, even in small-scale detection scenarios. Under such simple experimental protocol, both the proposed detector and TDSC'20 exhibit ∼ 100% accuracy, in line with general expectations of this paper.\n4) ImageNet: As for Fig. 9, the accuracy comparison of 7 detectors with respect to 2 attacks on ImageNet dataset is illustrated. Here, universal perturbation is imposed on large-scale natural images, also of 50%-50% training-testing split. Clearly, the diversity of the image content increases significantly with respect to the previous experimental protocol, increasing also the difficulty of the detection. An interesting phenomenon is that the gap between TDSC'20 and other competing methods becomes smaller. This implies that the simple decision-level fusion of global and local orthogonal transforms in TDSC'20 cannot handle more complex detection tasks well. be considered as an inflexible remedy for the contradiction of spatial and frequency discriminability. In general, our method still achieves ∼ 5% gain with respect to TDSC'20 and ∼ 10% gain with respect to other competing methods.\nThe above consistent performance gains in Figs. 5 ∼ 9 validate the advanced nature of the proposed detector, revealing the potential of our spatial-frequency discriminative decomposition in perturbation detection tasks." }, { "figure_ref": [], "heading": "C. Crossing Experiments", "publication_ref": [], "table_ref": [], "text": "Through above benchmark experiments, we have positioned our detector with respect to some state-of-the-art detectors. Such extensive results indicate a consistent performance advantage of the proposed method under the typical experiment protocol. In this part, we will further analyze whether the above performance advantage arises from a certain over-fitting. More specifically, we will test a trained detector by crossing to other similar experiment protocols, thereby quantifying its transferability to reasonable changes. 1) Crossing Dataset: Table I lists the various performance scores of the proposed detector on crossing dataset protocol with universal perturbation. Here, the protocol involves three face datasets: MEDS, Multi-PIE, and PubFig. In general, a promising detector is expected to be generalizable to such natural differences of the training and testing phases. As can be observed here, our detector exhibits consistent recall, precision, F1, and accuracy scores for all crossing dataset scenarios, where most cases (except for Multi-PIE to MEDS) the scores are even ∼ 100%. The worst case is mainly due to the fact that Multi-PIE has less data diversity than MEDS, leading to insufficient learning; the reverse of the two datasets significantly improves the scores. This numerical evidence suggests that our spatial-frequency discriminative decomposition provides complete and intrinsic features of adversarial perturbations, and therefore generalizes well to unseen-butsimilar datasets.\nRemark. In addition, we would like to provide some supplementary information as comparison baselines. Based on scores reported in the literature, competing algorithms arXiv'17 and TDSC'18 achieve ∼ 80% accuracy scores on the crossing dataset protocol from MEDS to Multi-PIE, and ∼ 70% accuracy scores for the reverse protocol. Thus they exhibit a > 20% performance gap with respect to the proposed detector. It is further verified that the proposed decomposition serves as a more generic representation than such simple filters.\n2) Crossing Model: In Table II, we provide the various performance scores of the proposed detector on crossing model protocol with universal perturbation. Here, our detector is trained on VGG-16 and tested on GoogleNet or CaffeNet, where the training and testing are also considered on three face datasets. Note that although the adversarial perturbations on different models differ significantly at the numerical level, they still exhibit specific statistical consistency; the mining of this consistency largely reflects the generalizability of the detector. Obviously, the proposed detector remains stable, regardless of the different datasets and the different performance metrics. For most cases, our detector exhibits ∼ 100% scores, and even the worst case (VGG-16 to CaffeNet on MEDS) is still 93.82%. These consistent phenomena suggest that the proposed detector and its foundational decomposition have well generalizability to unseen-but-similar models.\nRemark. In addition, we would like to provide some supplementary information as comparison baselines. Based on scores reported in the literature, competing algorithm TDSC'20 achieves ∼ 93% accuracy score on the crossing model protocol from VGG-16 to GoogleNet on MEDS, and ∼ 96% accuracy score for the similar protocol on Multi-PIE. Therefore, our detector still exhibits gains with respect to this orthogonal transform based detector.\n3) Crossing Attack: As for Table III, we list the various performance scores of the proposed detector on crossing attack protocol on MNIST dataset. Here, the protocol involves five attacks, i.e., BIM, FGSM, PGD, FAB, and Square, with quite significant differences in their designs. The experiment will consider any crossing pair of these five attacks in training and testing, for a total of 20 pairs. As can be expected, the adversarial perturbations derived from these attacks are different numerically, but meanwhile have similar statistical properties. In the table, the extensive performance scores indicate that the proposed detector is capable of representing the statistical consistency over such attacks. For 12 crossing pairs, our detector exhibits ∼ 100% scores of F1 and accuracy. For the worst case, the F1 and accuracy scores of our detector are still ∼ 87%. In general, the performance of the proposed detector is quite promising, even with respect to the generalizability of the unseen and somewhat different adversarial perturbations.\nRemark. In addition, we would like to provide some supplementary information as comparison baselines. Based on scores reported in the literature, competing algorithms arXiv'17, ICLR'18, TDSC'18, CVPR'19, and TDSC'20 achieve ∼ 64%, ∼ 76%, ∼ 65%, ∼ 68%, and ∼ 95% average accuracy scores over FGSM-like crossing pairs, respectively. In Table III, our algorithm exhibits an average accuracy score of ∼ 96%. Note that even though our protocol involves more diverse attacks, the proposed detector still provides better scores than above competing methods.\nThe above consistent performance gains in Tables I ∼ III validate the advanced nature of the proposed detector, revealing the generalizability of our spatial-frequency discriminative decomposition for unseen-but-similar scenarios." }, { "figure_ref": [ "fig_0", "fig_0", "fig_0" ], "heading": "D. Challenging Experiments", "publication_ref": [], "table_ref": [], "text": "The above benchmarking and crossing experiments have extensively demonstrated the advantages of the proposed detector over state-of-the-art detectors under ideal protocols. In this part, we turn to more challenging protocols. Such protocols can verify the discriminability, generalization, completeness, and efficiency of the proposed detectors in a comprehensive and realistic manner.\n1) From Ideal to Practice: We begin with a discussion on the practical deployment of the adversarial perturbation detector and the corresponding training strategy. In realworld scenarios, it is expected that the detector enables an accurate and generic defense for the widest possible range of adversarial attacks. In general, two training and deployment approaches exist to achieve this goal:\n• Integration during inference. Multiple detectors are involved here, each of which is trained on adversarial examples from only one attack. When deployed, the example under analysis is filled into these detectors separately, and the corresponding results are then integrated as the final prediction. Therefore, this approach can be construed as a late integration strategy. It is clear that the integration during training is more compact. Compared to the integration during inference, it will consume significantly fewer computational resources in deployment, while not involving tricky integration policy design for the results from multiple detectors. However, most of the benchmarking and crossing experiments in the literature are trained only with single kind of attack, and thus only verifying the effectiveness for the integration during inference. Therefore, we not only consider benchmarking and crossing experiments for the integration during inference (Sections VI-B and VI-C), but will also focus on more challenging experiments for the integration during training (Section VI-D).\n2) A Comprehensive and Realistic Protocol: For a comprehensive investigation of the proposed detector in the scenario of integration during training, we design the following experimental protocol. We selected 10000 original images in MNIST for generating the experimental images, in which the first/latter 5000 images are used to derive the training/testing examples respectively. Assuming that N attacks are considered, where N = 5, 10, 15 in our experiments. For the training, the first 5000 images are equally divided into N parts for generating the adversarial images corresponding to per attack, and then they are used as training examples together with the first 5000 original images. Our detector will be trained directly on such 10000 images. For the testing, the examples under each attack will consist of the latter 5000 original images and 5000 corresponding adversarial images, resulting in a total of N × 10000 testing examples.\nIn Fig. 10, we present an illustration for the above adversarial image division with respect to N = 5, 10, 15. Note that when N increases, the number of adversarial images from each attack in the training set, i.e., 5000/N , will decrease, while the counterpart in the testing set remains 5000. Obviously, this protocol is a comprehensive challenge for the discriminability, generalization, completeness, and efficiency of the detector. In Fig. 11, we show the various performance scores of the proposed detector on challenging experiment protocol. Here, corresponding to Fig. 10, the protocol involves increasing number of attacks and decreasing number of training examples from (a) to (c). In general, a promising detector is expected to be stable to realistic variations in the problem complexity or the training scale. As can be observed here, our detector exhibits consistent recall, precision, F1, and accuracy scores on all three scenarios (a) ∼ (c). For the most challenging scenario (c), where number of attacks is 15 and number of training adversarial images per attack is only 333, the scores with respect to most attacks (except for APGD, DeepFool, and Squara) are even ∼ 100%. The worst case is DeepFool with ∼ 87% scores of F1 and accuracy. Such scores are still generally satisfactory, considering the training/testing adversarial images ratio is 300/5000. The above phenomenon suggests that our detector and its foundational decomposition have:\n• Discriminability for a wide range of attacks;\n• Generalization for unseen-but-similar examples;\n• Completeness for potential perturbation patterns; • Efficiency on training scale. Starting from such properties, the proposed detector can be regarded as a more comprehensive and effective defense for the realistic scenarios with integration during training." }, { "figure_ref": [], "heading": "VII. CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In general, our main goal is to provide a more comprehensive design of adversarial perturbation detector. As a technical foundation of the detector, we have proposed the spatial-frequency discriminative decomposition with secret keys, motivated by the accuracy and security issues of existing detectors. Here, the accuracy and security ingredients in this paper can be summarized as follows.\n• Regarding the accuracy, we attribute the accuracy bottleneck of existing detectors to the fundamental contradiction of spatial and frequency discriminability in the decomposition. Specifically, the non-orthogonal decomposition (e.g., SRM) is not sufficient to completely represent a wide range of potential perturbations. The global (e.g., DST) or local (e.g., DWT) orthogonal decomposition cannot mine both frequency and spatial patterns (Conjectures 1 and 2), thereby failing to fully reveal the statistical regularity (Proposition 2). In this paper, we have introduced the Krawtchouk basis (Definition 2) for more discriminative decomposition, providing a mid-scale representation with rich spatial-frequency information (Property 1). Therefore, the resulting detector exhibits better discriminability in the decomposition of natural images and adversarial perturbations (Main Result 1). • Regarding the security, we attribute the successful defense-aware (secondary) attack to the transparency of detector-interested features for the attacker. With such knowledge, the attacker can regenerate adversarial perturbations without exhibiting obvious artifacts on these features (Propositions 3 and 4). In this paper, we have proposed the random feature selection for de-transparency, where a key controlled pseudorandom number generator determines the spatial and frequency parameters of the decomposition. Therefore, the resulting detector is secure against the defense-aware attack: it is difficult (or impossible) to divide between to-be-attacked and to-be-evaded features even with the knowledge of the detector other than the keys (Main Result 2)." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "We have provided statistical comparisons with state-of-theart detectors, by the benchmarking (Section VI-B), crossing (Section VI-C), and challenging (Section VI-D) experiments for both ideal and realistic scenarios (with respect to integration during inference and training). In general, such extensive experimental results confirm the effectiveness of our detector, exhibiting quite satisfactory discriminability, generalization, completeness, and efficiency with respect to existing works.\nOur future work will focus on more formal statistical analysis for the decomposition of natural images and adversarial perturbations, potentially involving coefficient statistical modeling and hypothesis testing." } ]
The vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community. From a security perspective, it poses a critical risk for modern vision systems, e.g., the popular Deep Learning as a Service (DLaaS) frameworks. For protecting off-the-shelf deep models while not modifying them, current algorithms typically detect adversarial patterns through discriminative decomposition of natural-artificial data. However, these decompositions are biased towards frequency or spatial discriminability, thus failing to capture adversarial patterns comprehensively. More seriously, successful defense-aware (secondary) adversarial attack (i.e., evading the detector as well as fooling the model) is practical under the assumption that the adversary is fully aware of the detector (i.e., the Kerckhoffs's principle). Motivated by such facts, we propose an accurate and secure adversarial example detector, relying on a spatial-frequency discriminative decomposition with secret keys. It expands the above works on two aspects: 1) the introduced Krawtchouk basis provides better spatial-frequency discriminability and thereby is more suitable for capturing adversarial patterns than the common trigonometric or wavelet basis; 2) the extensive parameters for decomposition are generated by a pseudo-random function with secret keys, hence blocking the defense-aware adversarial attack. Theoretical and numerical analysis demonstrates the increased accuracy and security of our detector with respect to a number of state-of-the-art algorithms.
TOWARDS AN ACCURATE AND SECURE DETECTOR AGAINST ADVERSARIAL PERTURBATIONS 1 Towards an Accurate and Secure Detector against Adversarial Perturbations
[ { "figure_caption": "Fig. 1 .1Fig. 1. Illustration for the training phase of the proposed adversarial example detector. The detector is trained on a set of adversarial/clean image examples along with corresponding labels. The detector consists of three main steps: 1) the image is projected into a space defined by Krawtchouk polynomials, where the frequency parameters (n, m) and spatial parameters (Px, Py) are determined by key; 2) the obtained coefficients are integrated and enhanced to form a compact-but-expressive feature vector by certain beneficial priors; 3) such features are fed into an SVM for the prediction, which is the only learning part in the detector. Clean example set", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "75 Fig. 3 .753Fig.3. Illustration for the weighted Krawtchouk polynomials Kl (z; P, L) with l = {2, 4, 8}, P = {0.25, 0.5, 0.75}, and L = 100. Note that the number and location of zeros of K can be adjusted explicitly by l and P , respectively, meaning the time-frequency discriminability of the represented image information.", "figure_data": "", "figure_id": "fig_1", "figure_label": "753", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Illustration for the datasets and adversarial attacks involved in experiments.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: The benchmarking of adversarial perturbation detection accuracy for different detectors on the MNIST dataset.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. The benchmarking of adversarial perturbation detection accuracy for different detectors on the MNIST dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: The benchmarking of adversarial perturbation detection accuracy for different detectors on the CIFAR-10 dataset.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. The benchmarking of adversarial perturbation detection accuracy for different detectors on the CIFAR-10 dataset.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The benchmarking of adversarial perturbation detection accuracy for different detectors on the MEDS dataset.", "figure_data": "", "figure_id": "fig_7", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig. 7. The benchmarking of adversarial perturbation detection accuracy for different detectors on the MEDS dataset.", "figure_data": "", "figure_id": "fig_8", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: The benchmarking of adversarial perturbation detection accuracy for different detectors on the Multi-PIE dataset.", "figure_data": "", "figure_id": "fig_9", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 8 .8Fig. 8. The benchmarking of adversarial perturbation detection accuracy for different detectors on the Multi-PIE dataset.", "figure_data": "", "figure_id": "fig_10", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Fig. 99Fig.9 Adversarial detection performance of the proposed and existing algorithms on the 'ImageNet' database", "figure_data": "", "figure_id": "fig_11", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. The benchmarking of adversarial perturbation detection accuracy for different detectors on the ImageNet dataset.", "figure_data": "", "figure_id": "fig_12", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "= 15 Fig. 10 . 15 Fig. 11 .15101511Fig. 10. Illustration for the adversarial image division in challenging experiments. Note that the more attacks involved, the fewer training examples for each attack, and therefore this protocol is a comprehensive challenge for the discriminability, generalization, completeness, and efficiency of the detector.", "figure_data": "", "figure_id": "fig_13", "figure_label": "15101511", "figure_type": "figure" }, { "figure_caption": "Integration during training. Only one detector is involved here, which is trained directly on adversarial examples from the widest possible range of attacks. After training, the detector is deployed directly to predict arbitrary examples. Therefore, this approach can be construed as an early fusion strategy.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" } ]
Chao Wang; Shuren Qi; Zhiqiu Huang; Rushi Lan; Xiaochun Cao; Y Zhang
[ { "authors": "Y Bengio; A Courville; P Vincent", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b0", "title": "Representation learning: A review and new perspectives", "year": "2013" }, { "authors": "L Zhao; Q Wang; C Wang; Q Li; C Shen; B Feng", "journal": "IEEE Trans. Parallel Distrib. Syst", "ref_id": "b1", "title": "Veriml: Enabling integrity assurances and fair payments for machine learning as a service", "year": "2021" }, { "authors": "J M Wing", "journal": "Commun. ACM", "ref_id": "b2", "title": "Trustworthy AI", "year": "2021" }, { "authors": "A Fawzi; S.-M Moosavi-Dezfooli; P Frossard", "journal": "IEEE Signal Process Mag", "ref_id": "b3", "title": "The robustness of deep networks: A geometrical perspective", "year": "2017" }, { "authors": "I Goodfellow; P Mcdaniel; N Papernot", "journal": "Commun. ACM", "ref_id": "b4", "title": "Making machine learning robust against adversarial inputs", "year": "2018" }, { "authors": "C Szegedy; W Zaremba; I Sutskever; J Bruna; D Erhan; I Goodfellow; R Fergus", "journal": "", "ref_id": "b5", "title": "Intriguing properties of neural networks", "year": "2014" }, { "authors": "A Kurakin; I Goodfellow; S Bengio", "journal": "Chapman and Hall/CRC", "ref_id": "b6", "title": "Adversarial examples in the physical world", "year": "2018" }, { "authors": "J Su; D V Vargas; K Sakurai", "journal": "IEEE Trans. Evol. Comput", "ref_id": "b7", "title": "One pixel attack for fooling deep neural networks", "year": "2019" }, { "authors": "I Goodfellow; J Shlens; C Szegedy", "journal": "", "ref_id": "b8", "title": "Explaining and harnessing adversarial examples", "year": "2015" }, { "authors": "N Carlini; D Wagner", "journal": "", "ref_id": "b9", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; O Fawzi; P Frossard", "journal": "", "ref_id": "b10", "title": "Universal adversarial perturbations", "year": "2017" }, { "authors": "P.-Y Chen; H Zhang; Y Sharma; J Yi; C.-J Hsieh", "journal": "", "ref_id": "b11", "title": "Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models", "year": "2017" }, { "authors": "K Eykholt; I Evtimov; E Fernandes; B Li; A Rahmati; C Xiao; A Prakash; T Kohno; D Song", "journal": "", "ref_id": "b12", "title": "Robust physical-world attacks on deep learning visual classification", "year": "2018" }, { "authors": "M Andriushchenko; N Flammarion", "journal": "", "ref_id": "b13", "title": "Understanding and improving fast adversarial training", "year": "2020" }, { "authors": "H Liu; Y Wang; W Fan; X Liu; Y Li; S Jain; Y Liu; A Jain; J Tang", "journal": "ACM Trans. Intell. Syst. Technolog", "ref_id": "b14", "title": "Trustworthy AI: A computational perspective", "year": "2022" }, { "authors": "C Xiang; A N Bhagoji; V Sehwag; P Mittal", "journal": "", "ref_id": "b15", "title": "PatchGuard: A provably robust defense against adversarial patches via small receptive fields and masking", "year": "2021" }, { "authors": "A Nayebi; S Ganguli", "journal": "", "ref_id": "b16", "title": "Biologically inspired protection of deep networks from adversarial attacks", "year": "2017" }, { "authors": "A Ross; F Doshi-Velez", "journal": "", "ref_id": "b17", "title": "Improving the adversarial robustness and interpretability of deep neural networks by regularizing their input gradients", "year": "2018" }, { "authors": "M Lecuyer; V Atlidakis; R Geambasu; D Hsu; S Jana", "journal": "", "ref_id": "b18", "title": "Certified robustness to adversarial examples with differential privacy", "year": "2019" }, { "authors": "J Cohen; E Rosenfeld; Z Kolter", "journal": "", "ref_id": "b19", "title": "Certified adversarial robustness via randomized smoothing", "year": "2019" }, { "authors": "A Cullen; P Montague; S Liu; S Erfani; B Rubinstein", "journal": "", "ref_id": "b20", "title": "Double bubble, toil and trouble: enhancing certified robustness through transitivity", "year": "2022" }, { "authors": "H Zhang; Y Yu; J Jiao; E Xing; L El Ghaoui; M Jordan", "journal": "", "ref_id": "b21", "title": "Theoretically principled trade-off between robustness and accuracy", "year": "2019" }, { "authors": "X Zhang; X Zheng; W Mao", "journal": "ACM Comput. Surv", "ref_id": "b22", "title": "Adversarial perturbation defense on deep neural networks", "year": "2021" }, { "authors": "S Qi; Y Zhang; C Wang; J Zhou; X Cao", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b23", "title": "A principled design of image representation: Towards forensic tasks", "year": "2022" }, { "authors": "G.-L Chen; C.-C Hsu", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b24", "title": "Jointly defending DeepFake manipulation and adversarial attack using decoy mechanism", "year": "2023" }, { "authors": "B Liang; H Li; M Su; X Li; W Shi; X Wang", "journal": "IEEE Trans. Dependable Secure Comput", "ref_id": "b25", "title": "Detecting adversarial image examples in deep neural networks with adaptive noise reduction", "year": "2018" }, { "authors": "J Liu; W Zhang; Y Zhang; D Hou; Y Liu; H Zha; N Yu", "journal": "", "ref_id": "b26", "title": "Detection based defense against adversarial examples from the steganalysis point of view", "year": "2019" }, { "authors": "A N Bhagoji; D Cullina; C Sitawarin; P Mittal", "journal": "", "ref_id": "b27", "title": "Enhancing robustness of machine learning systems via data transformations", "year": "2018" }, { "authors": "N Akhtar; J Liu; A Mian", "journal": "", "ref_id": "b28", "title": "Defense against universal adversarial perturbations", "year": "2018" }, { "authors": "A Agarwal; R Singh; M Vatsa; N Ratha", "journal": "IEEE Trans. Dependable Secure Comput", "ref_id": "b29", "title": "Image transformationbased defense against adversarial perturbation on deep learning models", "year": "2020" }, { "authors": "A Agarwal; G Goswami; M Vatsa; R Singh; N K Ratha", "journal": "IEEE Trans. Neural Networks Learn. Syst", "ref_id": "b30", "title": "Damad: Database, attack, and model agnostic adversarial perturbation detector", "year": "2021" }, { "authors": "J.-H Hoepman; B Jacobs", "journal": "Commun. ACM", "ref_id": "b31", "title": "Increased security through open source", "year": "2007" }, { "authors": "S G Mallat", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b32", "title": "A theory for multiresolution signal decomposition: the wavelet representation", "year": "1989" }, { "authors": "Z Chen; B Tondi; X Li; R Ni; Y Zhao; M Barni", "journal": "IEEE Trans. Inf. Forensics Secur", "ref_id": "b33", "title": "Secure detection of image manipulation by of random feature selection", "year": "2019" }, { "authors": "T Miyato; S -I. Maeda; M Koyama; S Ishii", "journal": "IEEE Trans. Pattern Anal. Mach. Intell", "ref_id": "b34", "title": "Virtual adversarial training: a regularization method for supervised and semi-supervised learning", "year": "2018" }, { "authors": "X Jia; X Wei; X Cao; X Han", "journal": "", "ref_id": "b35", "title": "Adv-watermark: A novel watermark perturbation for adversarial examples", "year": "2020" }, { "authors": "L Zhai; F Juefei-Xu; Q Guo; X Xie; L Ma; W Feng; S Qin; Y Liu", "journal": "", "ref_id": "b36", "title": "It's raining cats or dogs? adversarial rain attack on DNN perception", "year": "2020" }, { "authors": "Q Guo; Z Cheng; F Juefei-Xu; L Ma; X Xie; Y Liu; J Zhao", "journal": "", "ref_id": "b37", "title": "Learning to adversarially blur visual object tracking", "year": "2021" }, { "authors": "S Zhang; H Gao; Q Rao", "journal": "IEEE Trans. Image Process", "ref_id": "b38", "title": "Defense against adversarial attacks by reconstructing images", "year": "2021" }, { "authors": "N Das; M Shanbhogue; S.-T Chen; F Hohman; S Li; L Chen; M E Kounavis; D H Chau", "journal": "", "ref_id": "b39", "title": "Shield: Fast, practical defense and vaccination for deep learning using JPEG compression", "year": "2018" }, { "authors": "Y Zhang; P Liang", "journal": "", "ref_id": "b40", "title": "Defending against whitebox adversarial attacks via randomized discretization", "year": "2019" }, { "authors": "S Rifai; P Vincent; X Muller; X Glorot; Y Bengio", "journal": "", "ref_id": "b41", "title": "Contractive auto-encoders: Explicit invariance during feature extraction", "year": "2011" }, { "authors": "X.-H Li; C C Cao; Y Shi; W Bai; H Gao; L Qiu; C Wang; Y Gao; S Zhang; X Xue", "journal": "IEEE Trans. Knowl. Data Eng", "ref_id": "b42", "title": "A survey of data-driven and knowledge-aware eXplainable AI", "year": "2020" }, { "authors": "S Han; C Lin; C Shen; Q Wang; X Guan", "journal": "ACM Comput. Surv", "ref_id": "b43", "title": "Interpreting adversarial examples in deep learning: A review", "year": "2023" }, { "authors": "T Tanay; L Griffin", "journal": "", "ref_id": "b44", "title": "A boundary tilting persepective on the phenomenon of adversarial examples", "year": "2016" }, { "authors": "D Yin; R Gontijo Lopes; J Shlens; E D Cubuk; J Gilmer", "journal": "", "ref_id": "b45", "title": "A fourier perspective on model robustness in computer vision", "year": "2019" }, { "authors": "A Ilyas; S Santurkar; D Tsipras; L Engstrom; B Tran; A Madry", "journal": "", "ref_id": "b46", "title": "Adversarial examples are not bugs, they are features", "year": "2019" }, { "authors": "Z Zha; X Yuan; J Zhou; C Zhu; B Wen", "journal": "IEEE Trans. Image Process", "ref_id": "b47", "title": "Image restoration via simultaneous nonlocal self-similarity priors", "year": "2020" }, { "authors": "E P Simoncelli; B A Olshausen", "journal": "Annu. Rev. Neurosci", "ref_id": "b48", "title": "Natural image statistics and neural representation", "year": "2001" }, { "authors": "F Wu; W Yang; L Xiao; J Zhu", "journal": "Electronics", "ref_id": "b49", "title": "Adaptive wiener filter and natural noise to eliminate adversarial perturbation", "year": "2020" }, { "authors": "V Veerabadran; J Goldman; S Shankar; B Cheung; N Papernot; A Kurakin; I Goodfellow; J Shlens; J Sohl-Dickstein; M C Mozer", "journal": "Nat. Commun", "ref_id": "b50", "title": "Subtle adversarial image manipulations influence both human and machine perception", "year": "2023" }, { "authors": "S Qi; Y Zhang; C Wang; J Zhou; X Cao", "journal": "ACM Comput. Surv", "ref_id": "b51", "title": "A survey of orthogonal moments for image representation: theory, implementation, and evaluation", "year": "2021" }, { "authors": "P.-T Yap; R Paramesran; S.-H Ong", "journal": "IEEE Trans. Image Process", "ref_id": "b52", "title": "Image analysis by Krawtchouk moments", "year": "2003" }, { "authors": "H Yang; S Qi; J Tian; P Niu; X Wang", "journal": "Pattern Recognit", "ref_id": "b53", "title": "Robust and discriminative image representation: Fractional-order Jacobi-Fourier moments", "year": "2021" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "Proc. IEEE", "ref_id": "b54", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "K Simonyan; A Zisserman", "journal": "", "ref_id": "b55", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2015" }, { "authors": "C Szegedy; W Liu; Y Jia; P Sermanet; S Reed; D Anguelov; D Erhan; V Vanhoucke; A Rabinovich", "journal": "", "ref_id": "b56", "title": "Going deeper with convolutions", "year": "2015" }, { "authors": "Y Jia; E Shelhamer; J Donahue; S Karayev; J Long; R Girshick; S Guadarrama; T Darrell", "journal": "", "ref_id": "b57", "title": "Caffe: Convolutional architecture for fast feature embedding", "year": "2014" }, { "authors": "Y Lecun; C Corinna; J B Christopher", "journal": "", "ref_id": "b58", "title": "MNIST", "year": "" }, { "authors": "A Krizhevsky; V Nair; G Hinton", "journal": "", "ref_id": "b59", "title": "CIFAR-10", "year": "" }, { "authors": "A Founds; N Orlans; G Whiddon; C Watson", "journal": "", "ref_id": "b60", "title": "MEDS", "year": "" }, { "authors": "R Gross; I Matthews; J Cohn; T Kanade; S Baker", "journal": "", "ref_id": "b61", "title": "Multi-PIE", "year": "" }, { "authors": "N Kumar; A C Berg; P N Belhumeur; S K Nayar", "journal": "", "ref_id": "b62", "title": "PubFig", "year": "" }, { "authors": "J Deng; W Dong; R Socher; L.-J Li; K Li; L Fei-Fei", "journal": "", "ref_id": "b63", "title": "ImageNet", "year": "" }, { "authors": "A Madry; A Makelov; L Schmidt; D Tsipras; A Vladu", "journal": "", "ref_id": "b64", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2018" }, { "authors": "F Croce; M Hein", "journal": "", "ref_id": "b65", "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "year": "2020" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; P Frossard", "journal": "", "ref_id": "b66", "title": "Deepfool: a simple and accurate method to fool deep neural networks", "year": "2016" }, { "authors": "E Wong; L Rice; J Z Kolter", "journal": "", "ref_id": "b67", "title": "Fast is better than free: Revisiting adversarial training", "year": "2020" }, { "authors": "F Croce; M Hein", "journal": "", "ref_id": "b68", "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "year": "2020" }, { "authors": "M Andriushchenko; F Croce; N Flammarion; M Hein", "journal": "", "ref_id": "b69", "title": "Square attack: a query-efficient black-box adversarial attack via random search", "year": "2020" }, { "authors": "H Zhang; Y Yu; J Jiao; E Xing; L El Ghaoui; M Jordan", "journal": "", "ref_id": "b70", "title": "Theoretically principled trade-off between robustness and accuracy", "year": "2019" }, { "authors": "R S Zimmermann", "journal": "", "ref_id": "b71", "title": "Comment on \"Adv-BNN: Improved adversarial defense through robust Bayesian Neural Network", "year": "2019" }, { "authors": "K R Mopuri; U Garg; R V Babu", "journal": "", "ref_id": "b72", "title": "Fast Feature Fool: A data independent approach to universal adversarial perturbations", "year": "2017" }, { "authors": "R Feinman; R R Curtin; S Shintre; A B Gardner", "journal": "", "ref_id": "b73", "title": "Detecting adversarial samples from artifacts", "year": "2017" }, { "authors": "S Liang; Y Li; R Srikant", "journal": "", "ref_id": "b74", "title": "Principled detection of out-ofdistribution examples in neural networks", "year": "2018" }, { "authors": "G Goswami; A Agarwal; N Ratha; R Singh; M Vatsa", "journal": "Int. J. Comput. Vision", "ref_id": "b75", "title": "Detecting and mitigating adversarial perturbations for robust face recognition", "year": "2019" }, { "authors": "S Ma; Y Liu; G Tao; W.-C Lee; X Zhang", "journal": "", "ref_id": "b76", "title": "Nic: Detecting adversarial samples with neural network invariant checking", "year": "2019" }, { "authors": "Chao Wang Received The; B S ; M S ", "journal": "", "ref_id": "b77", "title": "degrees from Liaoning Normal University", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 405.01, 340.46, 158.03, 8.99 ], "formula_id": "formula_0", "formula_text": "M(x + δ) ̸ = y,(1)" }, { "formula_coordinates": [ 4, 419.22, 390.11, 143.82, 8.96 ], "formula_id": "formula_1", "formula_text": "||δ|| < ε.(2)" }, { "formula_coordinates": [ 5, 51.8, 338.13, 245.39, 22.27 ], "formula_id": "formula_2", "formula_text": "F(x ′ ) = F(x+ δ) = F(x) + F(δ), i.e." }, { "formula_coordinates": [ 5, 359.77, 573.38, 200.43, 10.27 ], "formula_id": "formula_3", "formula_text": "ρ(F C A ∩C D (δ old ), F C A ∩C D (δ new )) < η,(3)" }, { "formula_coordinates": [ 5, 391.07, 703.12, 169.13, 10.27 ], "formula_id": "formula_4", "formula_text": "||F C A ∩C D (δ new )|| < λ,(4)" }, { "formula_coordinates": [ 6, 148.41, 231.91, 37.77, 9.65 ], "formula_id": "formula_5", "formula_text": "C A ∩ C D ." }, { "formula_coordinates": [ 6, 362.13, 643.38, 198.07, 19.31 ], "formula_id": "formula_6", "formula_text": "F(f ) = D V * nm (x, y)f (x, y)dxdy,(5)" }, { "formula_coordinates": [ 6, 356.75, 725.09, 199.58, 19.31 ], "formula_id": "formula_7", "formula_text": "D V nm (x, y)V * n ′ m ′ (x, y)dxdy = δ nn ′ δ mm ′ , (6" }, { "formula_coordinates": [ 6, 556.33, 727.49, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 7, 221.02, 55.99, 56.89, 9.65 ], "formula_id": "formula_9", "formula_text": "δ ab = [a = b]." }, { "formula_coordinates": [ 7, 87.49, 208.3, 209.7, 12.17 ], "formula_id": "formula_10", "formula_text": "V nm (x, y) = Kn (x; P x , W ) Km (y; P y , H),(7)" }, { "formula_coordinates": [ 7, 51.8, 229.5, 245.39, 32.65 ], "formula_id": "formula_11", "formula_text": "D = {(x, y) ∈ [0, 1, ..., W ]× [0, 1, ..., H]} with image size W × H, frequency parameters (n, m) ∈ [0, 1, ..., W ] × [0, 1, ..., H]" }, { "formula_coordinates": [ 7, 94.9, 298.24, 202.29, 67.8 ], "formula_id": "formula_12", "formula_text": "Kl (z; P, L) = L z P z (1 -P ) L-z (-1) l ( 1-P P ) l l!Γ(-L) Γ(l-L) • 2 F 1 (-l, -z; -L; 1 P ),(8)" }, { "formula_coordinates": [ 7, 105.83, 385.67, 191.36, 30.55 ], "formula_id": "formula_13", "formula_text": "2 F 1 (a, b; c; d) = ∞ k=0 (a) k (b) k (c) k d k k! ,(9)" }, { "formula_coordinates": [ 7, 109.11, 477.64, 188.08, 62.58 ], "formula_id": "formula_14", "formula_text": "P (L-l) (LP -2lP + l -z) Kl P (l -L) - (1-P ) 2 (l+1)l P 2 (L-l)(L-l+1) l(1 -P ) Kl-1 P (l -L) ,(10)" }, { "formula_coordinates": [ 7, 117.69, 561.39, 179.49, 22.31 ], "formula_id": "formula_15", "formula_text": "K1 (z; P, L) = (1 - z P L ) K0 ,(11)" }, { "formula_coordinates": [ 7, 92.9, 594.38, 200.14, 20.69 ], "formula_id": "formula_16", "formula_text": "K0 (z; P, L) = L z P z (1 -P ) L-z . (12" }, { "formula_coordinates": [ 7, 293.04, 600.77, 4.15, 8.64 ], "formula_id": "formula_17", "formula_text": ")" }, { "formula_coordinates": [ 8, 335.22, 348.94, 227.82, 48.29 ], "formula_id": "formula_18", "formula_text": "B i = {(n, m) : i -1 # B ||(W, H)|| 2 ≤ ||(n, m)|| 2 < i # B ||(W, H)|| 2 },(13)" }, { "formula_coordinates": [ 8, 367.97, 457.96, 195.07, 20.53 ], "formula_id": "formula_19", "formula_text": "V i (P x , P y ) = (n,m)∈Bi c n,m,Px,Py ,(14)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "I. INTRODUCTION", "publication_ref": [ "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b8", "b9", "b10", "b11", "b12", "b13", "b14", "b15", "b16", "b19" ], "table_ref": [], "text": "M ACHINE Learning (ML) algorithms have been able to solve various types of problems, namely highly complex ones, through the usage of Deep Neural Networks (DNNs) [1], achieving results similar to, or better than, humans in multiple tasks, such as object recognition [2], [3], face recognition [4], [5], and natural language processing [6], [7]. These networks have also been employed in critical areas, such as self-driving vehicles [8], [9], malware detection [10], [11], and healthcare [12], [13], whose application and impaired functioning can severely impact their users.\nPromising results shown by DNNs lead to the sense that these networks could correctly generalize in the local neighborhood of an input (image). These results motivate the adoption and integration of these networks in real-time image analysis, such as traffic sign recognition and vehicle segmentation, making malicious entities target these techniques. However, it was discovered that DNNs are susceptible to small perturbations in their input [14], which entirely alter their prediction, making it harder for them to be applied in critical areas. These perturbations have two main characteristics: 1) invisible to the Human eye or slight noise that does not alter Human prediction; and 2) significantly increase the confidence of erroneous output, the DNNs predict the wrong class with higher confidence than all other classes. As a result of these assertions, the effect of the perturbations has been analyzed with more focus on object recognition, which will also be the main target of this survey.\nPapernot et al. [15] distinguishes four types of adversaries depending on the information they have access to: (i) training data and network architecture, (ii) only training data or only network, (iii) oracle, and (iv) only pairs of input and output. In almost all real scenarios, the attacker does not have access to the training data or the network architecture, which diminishes the strength of the attack performed on a network, leaving the adversary with access to the responses given by the network, either by asking questions directly to it or by having pairs of input and prediction. Furthermore, the queries to a model are usually limited or very expensive [16], making it harder for an attacker to produce adversarial examples.\nMultiple mechanisms [17]- [20] were proposed to defend against legacy attacks, already displaying their weakened effect when adequately protected, which are clustered based on six different domains in this survey. Regardless of the attacks and defenses already proposed, there is no assurance about the effective robustness of these networks and if they can be trusted in critical areas, clearly raising the need to make the DNNs inherently robust or easy to be updated every time a new vulnerability is encountered. This motivates the presented work, whose main contributions are summarized as follows:\n• We present the most recent adversarial attacks grouped by the adversary capacity, accompanied by an illustration of the differences between black-box and white-box attacks; • We propose six different domains for adversarial defense grouping, assisted by exemplificative figures of each of these domains, and describe the effects of adversarial examples in ViTs; • We detail the most widely used metrics and datasets, present state-of-the-art results on CIFAR-10, CIFAR-100, and ImageNet, and propose directions for future works. The remaining of the paper is organized as follows: Section II provides background information; Section III compares this review with others; Section IV presents the set of adversarial attacks; Section V shows a collection of defenses to overcome these attacks; Section VII displays the commonly used datasets; Section VIII lists and elaborates on metrics and presents state-of-the-art results; and Section IX presents future directions, with the concluding remarks included in Section X." }, { "figure_ref": [], "heading": "II. BACKGROUND FOR ADVERSARIAL ATTACKS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Neural Network Architectures", "publication_ref": [], "table_ref": [], "text": "When an input image is fed into a CNN, it is converted into a matrix containing the numeric values representing the image or, if the image is colored, a set of matrices containing the numeric values for each color channel. Then, the Convolutions apply filters to these matrixes and calculate a set of reducedsize features. Finally, these features have an array format fed into the Fully Connected that classifies the provided image.\nFigure 1 shows an elementary example of CNNs used to classify images. Contrary to the CNN, ViT does not receive the image as a whole as input; instead, it is pre-processed to be divided into Patches, which are smaller parts of the original image, as displayed in Figure 2. These Patches are not fed randomly to the Transformer Encoder, they are ordered by their position, and both the Patches and their position are fed into the Transformer Encoder. Finally, the output resulting from the Transformer Encoder is fed into the Multi-Layer Perceptron (MLP) Head that classifies the image." }, { "figure_ref": [], "heading": "B. Adversarial Example", "publication_ref": [ "b13", "b20", "b21", "b13", "b20", "b21" ], "table_ref": [ "tab_0" ], "text": "Misclassification might be justified if the object contained in the image is not visible even to Humans. However, adversarial examples do not fit this scope. These examples add a perturbation to an image that causes the DNNs to misclassify the object in the image, yet Humans can correctly classify the same object.\nThe adversarial attacks described throughout this survey focus on identifying the adversarial examples that make DNNs misclassify. These attacks identify specific perturbations that modify the DNN classification while being correctly classified by Humans. The calculation of these perturbations is an optimization problem formally defined as:\narg min δX δ X s.t. f(X + δ X ) = Y * ,(1)\nwhere f is the is the classifier, δ X is the perturbation, X is the original/benign image, and Y * is the adversarial output. Furthermore, the adversarial example is defined as:\nX * = X + δ X ,(2)\nFig. 3. Adversarial Examples created using different state-of-the-art adversarial attacks. The first column represents the original image; the second represents the perturbation used to generate the adversarial images in the third column. The images were resized for better visualization. Images withdrawn from [14], [21], [22]. The first perturbation follows the edges of the building, the second is concentrated in the area of the whale, and the third is more smooth and greater in area.\nwhere X * is the adversarial image. Figure 3 displays adversarial examples generated using different attacks. Mainly, the first row is the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) [14] attack, the second row is the DeepFool [21] attack, and the third row is the SmoothFool [22] attack. When observing the L-BFGS, the perturbation applies noise to almost the entirety of the adversarial image. The DeepFool attack only perturbs the area of the whale but not all the pixels in that area. Finally, the SmoothFool attack slightly disturbs the pixels in the area of the image. These three attacks display the evolution of adversarial attacks in decreasing order of detectability and, consequently, increasing order of strength.\nTo limit the noise that each perturbation can add to an image, the adversarial attacks are divided into L 0 , L 2 , and L p norms, known as Vector Norms. Furthermore, commonly used terminologies in the context of adversarial examples are defined in Table I." }, { "figure_ref": [ "fig_1" ], "heading": "C. Vector Norms and Constraint", "publication_ref": [], "table_ref": [], "text": "Vector Norms are functions that take a vector as input and output a positive value (scalar). These functions are essential to ML and allow the backpropagation algorithms to compute the loss value as a scalar. The family of these functions is known as the p-norm, and, in the context of adversarial attacks, the considered values for p are 0, 2, and ∞.\nL 0 norm consists of counting the number of non-zero elements in the vector and is formally given as:\n||x|| 0 = (|x 1 | 0 +|x 2 | 0 +... + |x n | 0 ),(3)\nwhere x 1 to x n are the elements of the vector x. L 2 norm, also known as the Euclidean distance, measures the vector distance to the origin and is formally defined as:\n||x|| 2 = (|x 1 | 2 +|x 2 | 2 +... + |x n | 2 ) 1 2 ,(4)\nwhere x 1 to x n are the elements of the vector x. L ∞ norm represents the maximum hypothetical value that p can have and returns the absolute value of the element with the largest magnitude, formally as:\n||x|| ∞ = max i |x i |,(5)\nwhere x i is each element of the vector x.\nA geometric representation of the area of exploitation for the three considered p-norm is displayed in Figure 4. One relevant property of the p-norm is: the higher p is, the more important the contribution of large errors; the lower p is, the higher the contribution of small errors. This translates into a large p benefiting small maximal errors (minimal perturbations along multiple pixels) and a small p encouraging larger spikes in fewer places (abrupt perturbations along minimal pixels). Therefore, l 2 and l 0 attacks have greater detectability than l ∞ attacks, with the latter being more threatening.\nAnother constraint normally seen in the context of adversarial attacks is , which is a constant that controls the amount of noise, via generated perturbation, that can be added to an image. Usually, it is a tiny number and varies depending on the used dataset, decreasing when the task increases in difficulty. According to the literature, for MNIST, = 0.1, for CIFAR-10 and CIFAR-100, = 8/255, and for ImageNet, = 4/255." }, { "figure_ref": [], "heading": "D. Adversary Goals and Capacity", "publication_ref": [ "b14", "b3" ], "table_ref": [], "text": "Besides the restriction imposed by the different Vector Norms, the adversarial attacks are also divided by their impact on the networks. Depending on the goals of the attacker, the designation is as follows:\n• Confidence Reduction, the classifier outputs the original label with less confidence; • Untargeted, the classifier outputs any class besides the original label; • Targeted, the classifier outputs a particular class besides the original label. Another important aspect of adversarial attacks is the amount of knowledge the attacker has access to. As defined by Papernot et al. [15], who proposed the first threat model for deep learning, the attackers can have access to: 1) data training and network architecture; 2) only network architecture; 3) 4) an oracle that replies to all the inputs given; and 5) only have pairs of input and corresponding output (samples). However, to simplify this classification, these capacities were divided into:\n• White-box, which considers that the attacker has access to either the architecture or data; • Black-box, when the attacker can only access samples from an oracle or in pairs of input and output. The attackers goals and capacity are essential to classify the strength of an attack. For example, the easiest is a Confidence Reduction White-box attack, and the strongest is a Targeted Black-box attack." }, { "figure_ref": [], "heading": "III. RELATED SURVEYS", "publication_ref": [ "b22", "b23", "b24", "b25", "b26", "b27", "b28", "b29", "b30" ], "table_ref": [ "tab_3" ], "text": "The first attempt to summarize and display the recent developments in this area was made by Akhtar and Mian [23]. These authors studied adversarial attacks in computer vision, extensively referring to attacks for classification and providing a brief overview of attacks beyond the classification problem. Furthermore, the survey presents a set of attacks performed in the real world and provides insight into the existence of adversarial examples. Finally, the authors present the defenses distributed through three categories: modified training or input, modifying networks, and add-on networks.\nFrom a broader perspective, Liu et al. [24] studied the security threats and possible defenses in ML scope, considering the different phases of an ML algorithm. For example, the training phase is only susceptible to poisoning attacks; however, the testing phase is vulnerable to evasion, impersonation, and inversion attacks, making it harder to defend. The authors provide their insight on the currently used techniques. Additionally, focusing more on the object recognition task, Serban et al. [25] Qui et al. [26] extensively explains background concepts in Adversarial Attacks, mentioning adversary goals, capabilities, and characteristics. It also displays applications for adversarial attacks and presents some of the most relevant adversarial defenses. Furthermore, it explains a set of attacks divided by the stage in which they occur, referring to the most relevant attacks.\nXu et al. [27] also describes background concepts, describing the adversary goals and knowledge. This review summarizes the most relevant adversarial attacks at the time of that work and presents physical world examples. Furthermore, the authors present a batch of defenses grouped by the underlying methodology. Finally, there is an outline of adversarial attacks in graphs, text, and audio networks, culminating in the possible applications of these attacks.\nChakraborty et al. [28] provides insight into commonly used ML algorithms and presents the adversary capabilities and goals. The presented adversarial attacks are divided based on the stage of the attack (train or test). Additionally, the authors present relevant defenses used in adversarial settings.\nLong et al. [29] discusses a set of preliminary concepts of Computer Vision and adversarial context, providing a set of adversarial attacks grouped by adversary goals and capabilities. Finally, the authors provide a set of research directions that readers can use to continue the development of robust networks.\nLiang et al. [30] discuss the most significant attacks and defenses in the literature, with the latter being grouped by the underlying technique. This review finishes with a presentation of the challenges currently existing in the adversarial context.\nMore recently, Zhou et al. [31] provides insight into Deep Learning and Threat Models, focusing on the Cybersecurity perspective. Therefore, the authors identify multiple stages based on Advanced Persistent Threats and explain which adversarial attacks are adequate for each stage. Similarly, the same structure is followed to present the appropriate defenses for each stage. Furthermore, this survey presents the commonly used datasets in adversarial settings and provides a set of future directions from a Cybersecurity perspective.\nFrom the analysis of the previous surveys, some concepts have already been standardized, such as adversary goals and capabilities and the existence of adversarial attacks and defenses. However, due to the recent inception of this area, there still needs to be more standardization in datasets and metrics. Therefore, with this survey, we also analyze datasets and met-rics to provide insight to novice researchers. Furthermore, this survey consolidates the state-of-the-art results and identifies which datasets can be further explored. Finally, similarly to other reviews, this paper provides a set of future directions that researchers and practitioners can follow to start their work. A comparison between the several surveys discussed in this section is summarized in Table II." }, { "figure_ref": [ "fig_2" ], "heading": "IV. ADVERSARIAL ATTACKS", "publication_ref": [], "table_ref": [], "text": "Adversarial attacks are commonly divided by the amount of knowledge the adversaries have access to, white-box and black-box, as can be seen in Figure 5." }, { "figure_ref": [], "heading": "A. White-box Settings", "publication_ref": [ "b13", "b31", "b14", "b20", "b32", "b33", "b34", "b35", "b36", "b37", "b38", "b39", "b21", "b40", "b41", "b42" ], "table_ref": [], "text": "Adversarial examples were first proposed by Szegedy et al. [14], which discovered that DNNs do not generalize well in the vicinity of an input. The same authors proposed L-BFGS, the first adversarial attack, to create adversarial examples and raised awareness in the scientific community for this generalization problem.\nFast Gradient Sign Method (FGSM) [32] is a one-step method to find adversarial examples, which is based on the linear explanation for the existence of adversarial examples, and is calculated using the model cost function, the gradient, and the radius epsilon. This attack is formally defined as:\nx -• sign(∇loss F,t (x)),(6)\nwhere x is the original image, is the amount of changes to the image, and t is the target label. The value for should be very small to make the attack undetectable. Jacobian-based Saliency Maps (JSM) [15] explore the forward derivates to calculate the model gradients, replacing the gradient descent approaches, and discover which input regions are likely to yield adversarial examples. Then it uses saliency maps to construct the adversarial saliency maps, which display the features the adversary must perturb. Finally, to prove the effectiveness of JSM, only the adversarial examples correctly classified by humans were used to fool neural networks.\nDeepFool [21] is an iterative attack that stops when the minimal perturbation that alters the model output is found, exploiting its decision boundaries. It finds the minimum perturbation for an input x 0 , corresponding to the vector orthogonal to the hyperplane representing the decision boundary. Kurakin et al. [33] was the first to demonstrate that adversarial examples can also exist in the physical world, by using three different methods to generate the adversarial examples. Basic Iterative Method (BIM) applies the FGSM multiple times with a small step size between iterations and clips the intermediate values after each step. Iterative Least-likely Class Method (ILCM) uses the least-likely class, according to the prediction of the model, as the target class and uses BIM to calculate the adversarial example that outputs the target class.\nCarlini and Wagner (C&W) [34] attack is one of the most powerful attacks, which uses three different vector norms: 1) the L 2 attack uses a smoothing of clipped gradient descent approach, displaying low distortion; 2) the L 0 attack uses an iterative algorithm that, at each iteration, fixes the pixels that do not have much effect on the classifier and finds the minimum amount of pixels that need to be altered; and 3) the L ∞ attack also uses an iterative algorithm with an associated penalty, penalizing every perturbation that exceeds a predefined value, formally defined as:\nmin c • f (x + δ) + i [(δ i -τ ) + ],(7)\nwhere δ is the perturbation, τ is the penalty threshold (initially 1, decreasing in each iteration), and c is a constant. The value for c starts as a very low value (e.g., 10 -4 ), and each time the attack fails, the value for c is doubled. If c exceeds a threshold (e.g., 10 10 ), it aborts the search. Gradient Aligned Adversarial Subspace (GAAS) [35] is an attack that directly estimates the dimensionality of the adversarial subspace using the first-order approximation of the loss function. Through the experiments, GAAS proved the most successful at finding many orthogonal attack directions, indicating that neural networks generalize linearly.\nProjected Gradient Descent (PGD) [36] is an iterative attack that uses saddle point formulation, viewed as an inner maximization problem and an outer minimization problem, to find a strong perturbation. It uses the inner maximization problem to find an adversarial version of a given input that achieves a high loss and the outer minimization problem to find model parameters that minimize the loss in the inner maximization problem. The saddle point problem used by PGD is defined as:\nmin θ ρ(θ), where ρ(θ) = E (x,y)∼D max δ∈S L(θ, x + δ, y) , (8\n)\nwhere x is the original image, y is the corresponding label, and S is the set of allowed perturbations.\nAdvGAN [37] uses Generative Adversarial Networks (GAN) [38] to create adversarial examples that are realistic and have high attack success rate. The generator receives the original instance and creates a perturbation, the discriminator distinguishes the original instance from the perturbed instance, and the target neural network is used to measure the distance between the prediction and the target class.\nMotivated by the inability to achieve a high success rate in black-box settings, the Momentum Iterative FGSM (MI-FGSM) [39] was proposed. It introduces momentum, a technique for accelerating gradient descent algorithms, into the already proposed Iterative FGSM (I-FGSM), showing that the attack success rate in black-box settings increases almost double that of previous attacks.\nCroce and Hein [40] noted that the perturbations generated by l 0 attacks are sparse and by l ∞ attacks are smooth on all pixels, proposing Sparse and Imperceivable Adversarial Attacks (SIAA). This attack creates sporadic and imperceptible perturbations by applying the standard deviation of each color channel in both axis directions, calculated using the two immediate neighboring pixels and the original pixel.\nSmoothFool (SF) [22] is a geometry-inspired framework for computing smooth adversarial perturbations, exploiting the decision boundaries of a model. It is an iterative algorithm that uses DeepFool to calculate the initial perturbation and smoothly rectifies the resulting perturbation until the adversarial example fools the classifier. This attack provides smoother perturbations which improve the transferability of the adversarial examples, and their impact varies with the different categories in a dataset.\nIn the context of exploring the adversarial examples in the physical world, the Adversarial Camouflage (AdvCam) [41], which crafts physical-world adversarial examples that are legitimate to human observers, was proposed. It uses the target image, region, and style to perform a physical adaptation (creating a realistic adversarial example), which is provided into a target neural network to evaluate the success rate of the adversarial example.\nFeature Importance-aware Attack (FIA) [42] considers the object-aware features that dominate the model decisions, using the aggregate gradient (gradients average concerning the feature maps). This approach avoids local optimum, represents transferable feature importance, and uses the aggregate gradient to assign weights identifying the essential features. Furthermore, FIA generates highly transferable adversarial examples when extracting the feature importance from multiple classification models.\nMeta Gradient Adversarial Attack (MGAA) [43] is a novel architecture that can be integrated into any existing gradient-based attack method to improve cross-model transferability. This approach consists of multiple iterations, and, in each iteration, various models are samples from a model zoo to generate adversarial perturbations using the selected model, which are added to the previously generated perturbations. In addition, using multiple models simulates both white-and black-box settings, making the attacks more successful." }, { "figure_ref": [], "heading": "B. Universal Adversarial Perturbations", "publication_ref": [ "b43", "b44" ], "table_ref": [], "text": "Moosavi-Dezfooli et al. [44] discovered that some perturbations are image-agnostic (universal) and cause misclassification with high probability, labeled as Universal Adversarial Perturbations (UAPs). The authors found that these perturbations also generalize well across multiple neural networks, by searching for a vector of perturbations that cause misclassification in almost all the data drawn from a distribution of images. The optimization problem that Moosavi-Dezfooli et al. are trying to solve is the following:\n∆v i ← -arg min r r 2 s.t. k(x i + v + r) = k(x i ),(9)\nwhere ∆v i is the minimal perturbation to fool the classifier, v is the universal perturbation, and x i is the original image. This optimization problem is calculated for each image in a dataset, and the vector containing the universal perturbation is updated.\nThe Universal Adversarial Networks (UAN) [45] are Generative Networks that are capable of fooling a classifier when their output is added to an image. These networks were inspired by the discovery of UAPs, which were used as the training set and can create perturbations for any given input, demonstrating more outstanding results than the original UAPs." }, { "figure_ref": [], "heading": "C. Black-box Settings", "publication_ref": [ "b45", "b46", "b47" ], "table_ref": [], "text": "Specifically considering black-box setup, Ilyas et al. [46] define three realistic threat models that are more faithful to real-world settings: query-limited, partial information, and label-only settings. The first one suggests the development of query-efficient algorithms, using Natural Evolutionary Strategies to estimate the gradients used to perform the PGD attack. When only having the probabilities for the top-k labels, the algorithm alternates between blending in the original image and maximizing the likelihood of the target class and, when the attacker only obtains the top-k predicted labels, the attack uses noise robustness to mount a targeted attack.\nFeature-Guided Black-Box (FGBB) [47] uses the features extracted from images to guide the creation of adversarial perturbations, by using Scale Invariant Feature Transform. High probability is assigned to pixels that impact the composition of an image in the Human visual system and the creation of adversarial examples is viewed as a two-player game, where the first player minimizes the distance to an adversarial example, and the second one can have different roles, leading to minimal adversarial examples.\nSquare Attack [48] is an adversarial attack that does not need local gradient information, meaning that gradient masking does not affect it. Furthermore, this attack uses a randomized search scheme that selects localized square-shaped updates in random positions, causing the perturbation to be situated at the decision boundaries." }, { "figure_ref": [], "heading": "D. Auto-Attack", "publication_ref": [ "b48", "b48", "b49", "b47", "b49", "b47" ], "table_ref": [], "text": "Auto-Attack [49] was proposed to test adversarial robustness in a parameter-free, computationally affordable, and user-independent way. As such, Croce et al. proposed two variations of PGD to overcome suboptimal step sizes of the objective function, namely APGD-CE and APGD-DLR, for a step size-free version of PGD using cross-entropy (CE) and Difference of Logits Ratio (DLR) loss, respectively. DLR is a loss proposed by Croce et al. which is both shift and rescaling invariant and thus has the same degrees of freedom as the decision of the classifier, not suffering from the issues of the cross-entropy loss [49]. Then, they combine these new PGD variations with two other existing attacks to create Auto-Attack, which is composed by:\n• APGD-CE, step size-free version of PGD on the crossentropy; • APGD-DLR, step size-free version of PGD on the DLR loss; • Fast Adaptive Boundary (FAB) [50], which minimizes the norm of the adversarial perturbations; • Square [48] Attack, a query-efficient black-box attack. Given the main motivation of the Auto-Attack proposal, the FAB attack is the targeted version of FAB [50] since the untargeted version computes each iteration of the Jacobian matrix of the classifier, which scales linearly with the number of classes of the dataset. Although this is feasible for datasets with a low number of classes (e.g., MNIST and CIFAR-10), it becomes both computationally and memory-wise challenging with an increased number of classes (e.g., CIFAR-100 and ImageNet).\nAs such, Auto-Attack is an ensemble of attacks with important fundamental properties: APGD is a white-box attack aiming at any adversarial example within an L p -ball (Section II-C), FAB minimizes the norm of the perturbation necessary to achieve a misclassification, and Square Attack is a score-based black-box attack for norm bounded perturbations which use random search and do not exploit any gradient approximation, competitive with white-box attacks [48]." }, { "figure_ref": [], "heading": "V. ADVERSARIAL DEFENSES", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_3" ], "heading": "A. Adversarial Training", "publication_ref": [ "b13", "b31", "b50", "b51", "b52", "b53", "b54", "b55", "b56", "b58", "b59", "b61", "b61" ], "table_ref": [], "text": "Szegedy et al. [14] proposed that training on a mixture of adversarial and clean examples could regularize a neural network, as shown in Figure 6. Goodfellow et al. [32] evaluated the impact of Adversarial Training as a regularizer by including it in the objective function, showing that this approach is a reliable defense that can be applied to every neural network.\nKurakin et al. [51] demonstrates that it is possible to perform adversarial training in more massive datasets (ImageNet), displaying that the robustness significantly increases for onestep methods. When training the model with one-step attacks using the ground-truth labels, the model has significantly higher accuracy on the adversarial images than on the clean images, an effect denominated as Label Leaking, suggesting that the adversarial training should not make use of the groundtruth labels.\nAdversarial Training in large datasets implies using fast single-step methods, which converge to a degenerate global minimum, meaning that models trained with this technique remain vulnerable to black-box attacks. Therefore, Ensemble Adversarial Training [52] uses adversarial examples crafted on other static pre-trained models to augment the training data, preventing the trained model from influencing the strength of the adversarial examples.\nShared Adversarial Training [53] is an extension of adversarial training aiming to maximize robustness against universal perturbations. It splits the mini-batch of images used in training into a set of stacks and obtains the loss gradients concerning these stacks. Afterward, the gradients for each stack are processed to create a shared perturbation that is applied to the whole stack. After every iteration, these perturbations are added and clipped to constrain them into a predefined magnitude. Finally, these perturbations are added to the images and used for adversarial training.\nTRadeoff-inspired Adversarial DEfense via Surrogateloss minimization (TRADES) [54] is inspired by the presumption that robustness can be at odds with accuracy [55], [56]. The authors show that the robust error can be tightly bounded by using natural error measured by the surrogate loss function and the likelihood of input features being close to the decision boundary (boundary error). These assumptions make the model weights biased toward natural or boundary errors.\nBased on the idea that gradient magnitude is directly linked to model robustness, Bilateral Adversarial Training (BAT) [57] proposes to perturb not only the images but also the manipulation of labels (adversarial labels) during the training phase. The adversarial labels are derived from a closed-form heuristic solution, and the adversarial images are generated from a one-step targeted attack. Considering the same issue presented in Free-AT, the authors analyze Pontryagin's Maximum Principle [59] of this problem and observe that the adversary update is only related to the first layer of the network. Thus, You Only Propagate Once (YOPO) [60] only considers the first layer of the network for forward and backpropagation, effectively reducing the amount of propagation to one in each update. Defense against Occlusion Attacks (DOA) [62] is a defense mechanism that uses abstract adversarial attacks, Rectangular Occlusion Attack (ROA) [62], and applies the standard adversarial training. This attack considers including physically realizable attacks that are \"normal\" in the real world, such as eyeglasses and stickers on stop signs." }, { "figure_ref": [], "heading": "Adversarial Attack Classifier", "publication_ref": [ "b62", "b63", "b57", "b64", "b65", "b66", "b67", "b68", "b69" ], "table_ref": [], "text": "Adversarial Images\nThe proposal of Smooth Adversarial Training (SAT) [63] considers the evolution normally seen in curriculum learning, where the difficulty increases with time (age), using two difficulty metrics. These metrics are based on the maximal Hessian eigenvalue (H-SAT) and the softmax Probability (P-SAT), which are used to stabilize the networks for large perturbations while having high clean accuracy. In the same context, Friendly Adversarial Training (Friend-AT) [64] minimizes the loss considering the least adversarial data (friendly) among the adversarial data that is confidently misclassified. This method can be employed by early stopping PGD attacks when performing adversarial training.\nContrary to the idea of Free-AT [58], Cheap Adversarial Training (Cheap-AT) [65] proposes the use of weaker and cheaper adversaries (FGSM) combined with random initialization to train robust networks effectively. This method can be further accelerated by applying techniques that efficiently train networks.\nIn a real-world context, the attacks are not limited by the imperceptibility constraint ( value); there are, in fact, multiple perturbations (for models) that have visible sizes. The main idea of Oracle-Aligned Adversarial Training (OA-AT) [66] is to create a model that is robust to high perturbation bounds by aligning the network predictions with ones of an Oracle during adversarial training. The key aspect of OA-AT is the use of Learned Perceptual Image Patch Similarity [67] to generate Oracle-Invariant attacks and convex combination of clean and adversarial predictions as targets for Oracle-Sensitive samples.\nGeometry-aware Instance-reweighted Adversarial Training (GI-AT) [68] has two foundations: 1) overparameterized models still lack capacity; and 2) a natural data point closer to the class boundary is less robust, translating into assigning the corresponding adversarial data a larger weight. Therefore, this defense proposes using standard adversarial training, considering that weights are based on how difficult it is to attack a natural data point.\nAdversarial training leads to unfounded increases in the margin along decision boundaries, reducing clean accuracy. To tackle this issue, Helper-based Adversarial Training (HAT) [69] incorporates additional wrongly labeled examples during training, achieving a good trade-off between accuracy and robustness.\nAs a result of the good results achieved by applying random initialization, Fast Adversarial Training (FAT) [70] performs randomized smoothing to optimize the inner maximization problem efficiently, and proposes a new initialization strategy, named backward smoothing. This strategy helps to improve the stability and robustness of a model using single-step robust training methods, solving the overfitting issue." }, { "figure_ref": [ "fig_5" ], "heading": "B. Modify the Training Process", "publication_ref": [ "b70", "b71", "b72", "b73", "b74", "b71", "b75", "b76", "b77", "b78", "b79", "b18", "b80", "b81", "b81", "b82", "b83", "b84", "b85", "b86", "b87", "b87", "b88", "b89", "b90", "b91", "b92", "b88", "b93", "b94", "b95", "b96", "b97", "b98", "b100", "b101", "b102", "b103", "b103" ], "table_ref": [], "text": "Gu and Rigazio [71] proposed using three preprocessing techniques to recover from the adversarial noise, namely, noise injection, autoencoder, and denoising autoencoder, discovering that the adversarial noise is mainly distributed in the high-frequency domain. Solving the adversarial problem corresponds to encountering adequate training techniques and objective functions to increase the distortion of the smallest adversarial examples.\nAnother defense against adversarial examples is Defensive Distillation [72], which uses the predictions from a previously trained neural network, as displayed in Figure 7. This approach trains the initial neural network with the original training data and labels, producing the probability of the predictions, which replace the original training labels to train a smaller and resilient distilled network. Additionally, to improve the results obtained by Defensive Distillation, Papernot and Mc-Daniel [73] propose to change the vector used to train the distilled network by combining the original label with the first model uncertainty.\nTo solve the vulnerabilities of the neural network to adversarial examples, the Visual Causal Feature Learning [74] method uses causal reasoning to perform data augmentation. This approach uses manipulator functions that return an image similar to the original one with the desired causal effect.\nLearning with a Strong Adversary [75] is a training procedure that formulates as a min-max problem, making the [72]. An Initial Network is trained on the dataset images and labels (discrete values). Then, the predictions given by the Initial Network are fed into another network, replacing the dataset labels. These predictions are continuous values, making the Distilled Network more resilient to adversarial attacks. classifier inherently robust. This approach considers that the adversary applies perturbations to each data point to maximize the classification error, and the learning procedure attempts to minimize the misclassification error against the adversary. The greatest advantage of this procedure is the significant increase in robustness while maintaining clean high accuracy.\nZheng et al. [76] proposes the use of compression, rescaling, and cropping in benign images to increase the stability of DNNs, denominated as Image Processing, without changing the objective functions. A Gaussian perturbation sampler perturbs the benign image, which is fed to the DNN, and its feature representation of benign images is used to 1) minimize the standard CE loss; and 2) minimize the stability loss.\nZantedeschi et al. [77] explored the standard architectures, which usually employ Rectified Linear Units (ReLU) [78], [79] to ease the training process, and discovered that this function makes a small perturbation in the input accumulate with multiple layers (unbounded). Therefore, the authors propose the use of bounded ReLU (BReLU) [80] to prevent this accumulation and Gaussian Data Augmentation to perform data augmentation.\nZhang and Wang [19] suggest that adversarial examples are generated through Feature Scattering (FS) in the latent space to avoid the label leaking effect, which considers the interexample relationships. The adversarial examples are generated by maximizing the feature-matching distance between the clean and perturbed examples, FS produces a perturbed empirical distribution, and the DNN performs standard adversarial training.\nPGD attack causes the internal representation to shift closer to the \"false\" class, Triplet Loss Adversarial (TLA) [81] includes an additional term in the loss function that pulls natural and adversarial images of a specific class closer and the remaining classes further apart. This method was tested with different samples: Random Negative (TLA-RN), which refers to a randomly sampled negative example, and Switch Anchor (TLA-SA), which sets the anchor as a natural example and the positive to be adversarial examples.\nKumari et al. [82] analyzes the previously adversarialtrained models to test their vulnerability against adversarial attacks at the level of latent layers, concluding that the latent layer of these models is significantly vulnerable to adversarial perturbations of small magnitude. Latent Adversarial Training (LAT) [82] consists of finetuning adversarial-trained models to ensure robustness at the latent level.\nCurvature Regularization (CR) [83] minimizes the curvature of the loss surface, which induces a more \"natural\" behavior of the network. The theoretical foundation behind this defense uses a locally quadratic approximation that demonstrates a strong relation between large robustness and small curvature. Furthermore, the proposed regularizer confirms the assumption that exhibiting quasi-linear behavior in the proximity of data points is essential to achieve robustness.\nUnsupervised Adversarial Training (UAT) [84] enables the training with unlabeled data considering two different approaches, UAT with Online Target (UAT-OT) that minimizes a differentiable surrogate of the smoothness loss, and UAT with Fixed Targets (UAT-FT) that trains an external classifier to predict the labels on the unsupervised data and uses its predictions as labels.\nRobust Self-Training (RST) [85], an extension of Self-Training [86], [87], uses a standard supervised training to obtain pseudo-labels and then feeds them into a supervised training algorithm that targets adversarial robustness. This approach bridges the gap between standard and robust accuracy, using the unlabeled data, achieving high robustness using the same number of labels as required for high standard accuracy.\nSENSEI [88] and SENSEI-SA [88] use the methodologies employed in software testing to perform data augmentation, enhancing the robustness of DNNs. SENSEI implements the strategy of replacing each data point with a suitable variant or leaving it unchanged. SENSEI-SA improves the previous one by identifying which opportunities are suitable for skipping the augmentation process.\nBit Plane Feature Consistency (BPFC) [89] regularizer forces the DNNs to give more importance to the higher bit planes, inspired by the Human visual system perception. This regularizer uses the original image and a preprocessed version to calculate the l 2 norm between them and regularize the loss function, as the scheme shown in Figure 8.\nAdversarial Weight Perturbation (AWP) [90] explicitly regularizes the flatness of weight loss landscape and robustness gap, using a double-perturbation mechanism that disturbs both inputs and weights. This defense boosts the robustness of multiple existing adversarial training methods, confirming that it can be applied to other methods.\nSelf-Adaptive Training (SAT) [91] dynamically calibrates the training process with the model predictions without extra computational cost, improving the generalization of corrupted data. In contrast with the double-descent phenomenon, SAT exhibits a single-descent error-capacity curve, mitigating the overfitting effect.\nHYDRA [92] is another technique that explores the effects of pruning on the robustness of models, which proposes using pruning techniques that are aware of the robust training objective, allowing this objective to guide the search for connections to prune. This approach reaches compressed models that are state-of-the-art in standard and robust accuracy.\nBased on the promising results demonstrated by previous distillation methods, the Robust Soft Label Adversarial Distillation (RSLAD) [93] method uses soft labels to train\nChange Loss Regularizer Quantization Shift Noise Clip\nFig. 8. Schematic overview of the Bit Plane Feature Consistency [89]. This method applies multiple operations to input images, simulating adversarial images. Then, the loss is changed to include a regularizer (new term), which compares the original images with these manipulated images.\nrobust small student DNNs. This method uses the Robust Soft Labels (RSLs) produced by the teacher DNN to supervise the student training on natural and adversarial examples. An essential aspect of this method is that the student DNN does not access the original complex labels through the training process.\nThe most sensitive neurons in each layer make significant non-trivial contributions to the model predictions under adversarial settings, which means that increasing adversarial robustness stabilizes the most sensitive neurons. Sensitive Neuron Stabilizing (SNS) [94] includes an objective function dedicated explicitly to maximizing the similarities of sensitive neuron behaviors when providing clean and adversarial examples.\nDynamic Network Rewiring (DNR) [95] generates pruned DNNs that have high robust and standard accuracy, which employs a unified constrained optimization formulation using a hybrid loss function that merges ultra-high model compression with robust adversarial training. Furthermore, the authors propose a one-shot training method that achieves high compression, standard accuracy, and robustness, which has a practical inference 10 times faster than traditional methods.\nManifold Regularization for Locally Stable (MRLS) [96] DNNs exploit the continuous piece-wise linear nature of ReLU to learn a function that is smooth over both predictions and decision boundaries. This method is based on approximating the graph Laplacian when the data is sparse.\nInspired by the motivation behind distillation, Learnable Boundary Guided Adversarial Training (LBGAT) [97], assuming that models trained on clean data embed their most discriminative features, constrains the logits from the robust model to make them similar to the model trained on natural data. This approach makes the robust model inherit the decision boundaries of the clean model, preserving high standard and robust accuracy.\nLow Temperature Distillation (LTD) [98], which uses previous distillation frameworks to generate labels, uses relatively low temperatures in the teacher model and employs different fixed temperatures for the teacher and student models. The main benefit of this mechanism is that the generated soft labels can be integrated into existing works without additional costs.\nRecently, literature [99]- [101] demonstrated that neural Ordinary Differential Equations (ODE) are naturally more robust to adversarial attacks than vanilla DNNs. Therefore, Stable neural ODE for deFending against adversarial attacks (SODEF) [102] uses optimization formulation to force the extracted feature points to be within the vicinity of Lyapunovstable equilibrium points, which suppresses the input perturbations.\nSelf-COnsistent Robust Error (SCORE) [103] employs local equivariance to describe the ideal behavior of a robust model, facilitating the reconciliation between robustness and accuracy while still dealing with worst-case uncertainty. This method was inspired by the discovery that the trade-off between adversarial and clean accuracy imposes a bias toward smoothness.\nAnalyzing the impact of activation shape on robustness, Dai et al. [104] observes that activation has positive outputs on negative inputs, and a high finite curvature can improve robustness. Therefore, Parametric Shifted Sigmoidal Linear Unit (PSSiLU) [104] combines these properties and parameterized activation functions with adversarial training." }, { "figure_ref": [ "fig_6" ], "heading": "C. Use of Supplementary Networks", "publication_ref": [ "b104", "b105", "b0", "b106", "b16", "b107", "b17", "b108", "b109", "b110", "b111", "b111", "b112", "b113", "b114", "b115" ], "table_ref": [], "text": "MagNet [105] considers two reasons for the misclassification of an adversarial example: 1) incapacity of the classifier to reject an adversarial example distant from the boundary; and 2) classifier generalizes poorly when the adversarial example is close to the boundary. MagNet considers multiple detectors trained the reconstruction error, detecting significantly perturbed examples and detecting slightly perturbed examples based on probability divergence.\nAdversary Detection Network (ADN) [106] is a subnetwork that detects if the input example is adversarial or not, trained using adversarial images generated for a classification network which are classified as clean (0) or adversarial (1). Figure 9 displays a schematic overview of this network. However, this defense mechanism deeply correlates to the datasets and classification networks.\nXu et al. found that the inclusion of Feature Squeezing (FS) [107] is highly reliable in detecting adversarial examples by reducing the search space available for the adversary to modify. This method compares the predictions of a standard network with a squeezed one, detecting adversarial examples with high accuracy and having few false positives.\nHigh-level representation Guided Denoiser (HGD) [17] uses the distance between original and adversarial images to guide an image denoiser and suppress the impact of adversarial examples. HGD uses a Denoising AutoEncoder [108] with additional lateral connections and considers the difference between the representations as the loss function at a specific layer that is activated by the normal and adversarial examples.\nDefense-GAN [18] explores the use of GANs to effectively represent the set of original training examples, making this defense independent from the attack used. Defense-GAN considers the usage of Wasserstein GANs (WGANs) [109] to learn the representation of the original data and denoise the adversarial examples, which start by minimizing the l 2 difference between the generator representation and the input image.\nReverse Attacks [110] can be applied to each attack during the testing phase, by finding the suitable additive perturbation to repair the adversarial example similar to the adversarial attacks, which is highly difficult due to the unknown original label.\nEmbedding Regularized Classifier (ER-Classifier) [111] is composed of a classifier, an encoder, and a discriminator, which uses the encoder to generate code vectors by reducing the dimensional space of the inputs and the discriminator to separate these vectors from the ideal code vectors (sampled from a prior distribution). This technique allows pushing adversarial examples into the benign image data distribution, removing the adversarial perturbations.\nClass Activation Feature-based Denoiser (CAFD) [112] is a self-supervised approach trained to remove the noise from adversarial examples, using a set of examples generated by the Class Activation Feature-based Attack (CAFA) [112]. This defense mechanism is trained to minimize the distance of the class activation features between the adversarial and natural examples, being robust to unseen attacks.\nDetector Graph (DG) [113] considers graphs to detect the adversarial examples by constructing a Latent Neighborhood Graph (LNG) for each original example and using Graph Neural Networks (GNNs) [114] to exploit the relationship and distinguish between original and adversarial examples. This method maintains an additional reference dataset to retrieve the manifold information and uses embedding representation of image pixel values, making the defense robust to unseen attacks.\nImages in the real world are represented in a continuous manner, yet machines can only store these images in discrete 2D arrays. Local Implicit Image Function (LIIF) [115] takes an image coordinate and the deep features around this coordinate as inputs, predicting the corresponding RGB value. This method of pre-processing input images can filter adversarial images by reducing their perturbations, which are subsequently fed to a classifier.\nADversarIal defenSe with local impliCit functiOns (DISCO) [116] is an additional network to the classifier that removes adversarial perturbations using localized manifold projections, which receives an adversarial image and a query pixel location. This defense mechanism comprises an encoder that creates per-pixel deep features and a local implicit module that uses these features to predict the clean RGB value." }, { "figure_ref": [], "heading": "D. Change Network Architecture", "publication_ref": [ "b117" ], "table_ref": [], "text": "To identify the type of layers and their order, Guo et al. [118] proposes the use of Neural Architecture Search" }, { "figure_ref": [], "heading": "+ 1x1 Conv Mean Filter", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Non-local Means", "publication_ref": [], "table_ref": [], "text": "Bilateral Filter" }, { "figure_ref": [], "heading": "Median Filter", "publication_ref": [ "b116", "b117", "b116", "b118", "b119", "b120", "b121", "b122" ], "table_ref": [], "text": "Fig. 10. Overview of a Feature Denoising Block [117], which can be included in the intermediate layers to make networks more robust. This method is an example of Change Network Architecture.\n(NAS) to identify the networks that are more robust to adversarial attacks, finding that densely connected patterns improve the robustness and adding convolution operations to direct connection edge is effective, combined to create the RobNets [118].\nFeature Denoising [117] intends to address this problem by applying feature-denoising operations, consisting of nonlocal means, bilateral, mean, median filters, followed by 1x1 Convolution and an identity skip connection, as illustrated in Figure 10. These blocks are added to the intermediate layers of CNNs.\nInput Random [119] propose the addition of layers at the beginning of the classifier, consisting of 1) a random resizing layer, which resizes the width and height of the original image to a random width and height, and 2) a random padding layer, which pads zeros around the resized image in a random manner.\nControlling Neural Level Sets (CNLS) [120] uses samples obtained from the neural level sets and relates their positions to the network parameters, which allows modifying the decision boundaries of the network. The relation between position and parameters is achieved by constructing a sample network with an additional single fixed linear layer, which can incorporate the level set samples into a loss function.\nSparse Transformation Layer (STL) [121], included between the input image and the network first layer, transforms the received images into a low-dimensional quasinatural image space, which approximates the natural image space and removes adversarial perturbations. This creates an attack-agnostic adversarial defense that gets the original and adversarial images closer.\nBenz et al. [122] found that BN [123] and other normalization techniques make DNN more vulnerable to adversarial examples, suggesting the use of a framework that makes DNN more robust by learning Robust Features first and, then, Non-Robust Features (which are the ones learned when using BN)." }, { "figure_ref": [], "heading": "E. Perform Network Validation", "publication_ref": [ "b123", "b124", "b125", "b126", "b127", "b128", "b129", "b130", "b131", "b132", "b133", "b133", "b134" ], "table_ref": [], "text": "Most of the datasets store their images using the Joint Photographic Experts Group (JPEG) [124] compression, yet no one had evaluated the impact of this process on the network performance. Dziugaite et al. [125] (named as JPG) varies the magnitude of FGSM perturbations, discovering that smaller ones often reverse the drop in classification by a large extent and, when the perturbations increase in magnitude, this effect is nullified.\nRegarding formal verification, a tool [126] for automatic Safety Verification of the decisions made during the classification process was created using Satisfiability Modulo Theory (SMT). This approach assumes that a decision is safe when, after applying transformations in the input, the model decision does not change. It is applied to every layer individually in the network, using a finite space of transformations.\nDeepXplore [127] is the first white-box framework to perform a wide test coverage, introducing the concepts of neuron coverage, which are parts of the DNN that are exercised by test inputs. DeepXplore uses multiple DNNs as cross-referencing oracles to avoid manual checking for each test input and inputs that trigger different behaviors and achieve high neuron coverage is a joint optimization problem solved by gradientbased search techniques.\nDeepGauge [128] intends to identify a testbed containing multi-faceted representations using a set of multi-granularity testing criteria. DeepGauge evaluates the resilience of DNNs using two different strategies, namely, primary function and corner-case behaviors, considering neuron-and layer-level coverage criteria.\nSurprise Adequacy for Deep Learning Systems (SADL) [129] is based on the behavior of DNN on the training data, by introducing the surprise of an input, which is the difference between the DNN behavior when given the input and the learned training data. The surprise of input is used as an adequacy criterion (Surprise Adequacy), which is used as a metric for the Surprise Coverage to ensure the input surprise range coverage.\nThe most recent data augmentation techniques, such as cutout [130] and mixup [131], fail to prevent overfitting and, sometimes, make the model over-regularized, concluding that, to achieve substantial improvements, the combination of early stopping and semi-supervised data augmentation, Overfit Reduction (OR) [132], is the best method.\nWhen creating a model, multiple implementation details influence its performance; Pang et al. [133] is the first one to provide insights on how these details influence the model robustness, herein named as Bag of Tricks (BT). Some conclusions drawn from this study are: 1) The robustness of the models is significantly affected by weight decay; 2) Early stopping of the adversarial attacks may deteriorate worst-case robustness; and 3) Smooth activation benefits lower capacity models.\nOverfitting is a known problem that affects model robustness; Rebuffi et al. [134] focuses on reducing this robust overfitting by using different data augmentation techniques. Fixing Data Augmentation (FDA) [134] demonstrates that model weight averaging combined with data augmentation schemes can significantly increase robustness, which is enhanced when using spatial composition techniques.\nGowal et al. [135] systematically studies the effect of multiple training losses, model sizes, activation functions, the addition of unlabeled data, and other aspects. The main" }, { "figure_ref": [], "heading": "Adversarial Purified Diffusion Denoising", "publication_ref": [ "b136", "b135" ], "table_ref": [], "text": "Fig. 11. Overview of Adversarial Purification using Denoising Diffusion Probabilistic Models, adapted from [137]. The diffusion process is applied to an adversarial image, consisting of adding noise for a certain number of steps. During the denoising procedure, this noise is iteratively removed by the same amount of steps, resulting in a purified image (without perturbations).\nconclusion drawn by this analysis is that larger models with Swish/SiLU [136] activation functions and model weight averaging can reliably achieve state-of-the-art results in robust accuracy." }, { "figure_ref": [], "heading": "F. Adversarial Purification", "publication_ref": [ "b137", "b136", "b138", "b139", "b140", "b141", "b142", "b143", "b144", "b145", "b146" ], "table_ref": [], "text": "Adversarial Purification consists of defense mechanisms that remove adversarial perturbations using a generative model. Improving Robustness Using Generated Data (IRUGD) [138] explores how generative models trained on the original images can be leveraged to increase the size of the original datasets. Through extensive experiments, they concluded that Denoising Diffusion Probabilistic Models (DDPM) [137], a progression of diffusion probabilistic models [139], is the model that more closely resembles real data. Figure 11 presents the main idea behind the DDPM process.\nDue to the great results in image synthesis displayed by the DDPM, Sehwag et al. [140] (Proxy) uses proxy distributions to significantly improve the performance of adversarial training by generating additional examples, demonstrating that the best generative models for proxy distribution are DDPM.\nInspired by previous works on adversarial purification [141], [142], DiffPure [143] uses DDPM for adversarial purification, receiving as input an adversarial example and recovering the clean image through a reverse generative process. Since this discovery, multiple improvements regarding the use of DDPM for Adversarial Purification have been studied. Guided Diffusion Model for Adversarial Purification (GDMAP) [144] receives as initial input pure Gaussian noise and gradually denoises it with guidance to an adversarial image.\nDensePure [145] employs iterative denoising to an input image, with different random seeds, to get multiple reversed samples, which are given to the classifier and the final prediction is based on majority voting. Furthermore, Wang et al. [146] uses the most recent diffusion models [147] to demonstrate that diffusion models with higher efficiency and image quality directly translate into better robust accuracy." }, { "figure_ref": [], "heading": "VI. ADVERSARIAL EFFECTS ON VISION TRANSFORMERS", "publication_ref": [ "b147", "b148", "b149", "b150", "b151", "b152", "b150", "b150", "b153", "b153", "b19", "b154", "b155", "b156", "b157", "b158", "b161", "b162", "b163", "b163", "b159", "b160" ], "table_ref": [], "text": "Like CNNs [148], the ViTs are also susceptible to adversarial perturbations that alter a patch in an image [149], and ViTs demonstrate higher robustness, almost double, compared with ResNet-50 [150].\nTo further evaluate the robustness of ViT to adversarial examples, Mahmood et al. [151] used multiple adversarial attacks in CNNs, namely FGSM, PGD, MIM, C&W, and MI-FGSM. The ViT has increased robustness (compared with ResNet) for the first four attacks and has no resilience to the C&W and MI-FGSM attacks. Additionally, to complement the results obtained from the performance of ViTs, an extensive study [152] using feature maps, attention maps, and Gradientweighted Class Activation Mapping (Grad-CAM) [153] intends to explain this performance visually.\nThe transferability of adversarial examples from CNNs to ViTs was also evaluated, suggesting that the examples from CNNs do not instantly transfer to ViTs [151]. Furthermore, Self-Attention blended Gradient Attack (SAGA) [151] was proposed to misclassify both ViTs and CNNs. The Pay No Attention (PNA) [154] attack, which ignores the gradients of attention, and the PatchOut [154] attack, which randomly samples subsets of patches, demonstrate high transferability.\nTo detect adversarial examples that might affect the ViTs, PatchVeto [20] uses different transformers with different attention masks that output the encoding of the class. An image is considered valid if all transformers reach a consensus in the voted class, overall the masked predictions (provided by masked transformers).\nSmoothed ViTs [155] perform preprocessing techniques to the images before feeding them into the ViT, by generating image ablations (images composed of only one column of the original image, and the remaining columns are black), which are converted into tokens, and droping the fully masked tokens. The remaining tokens are fed into a ViT, which predicts a class for each ablation, and the class with the most predictions of overall ablations is considered the correct one.\nBai et al. [156] demonstrates that ViTs and CNNs are being unfairly evaluated because they do not have the same training details. Therefore, this work provides a fair and indepth comparison between ViTs and CNNs, indicating that ViTs are as vulnerable to adversarial perturbations as CNNs.\nArchitecture-oriented Transferable Attacking (ATA) [157] is a framework that generates transferable adversarial examples by considering the common characteristics among different ViT architectures, such as self-attention and image-embedding. Specifically, it discovers the most attentional patch-wise regions significantly influencing the model decision and searches pixel-wise attacking positions using sensitive embedding perturbation.\nPatch-fool [158] explores the perturbations that turn ViTs more vulnerable learners than CNNs, proposing a dedicated attack framework that fools the self-attention mechanism by attacking a single patch with multiple attention-aware optimization techniques. This attack mechanism demonstrates, for the first time, that ViTs can be more vulnerable than CNNs if attacked with proper techniques.\nGu et al. [159] evaluates the robustness of ViT to patch-wise perturbations, concluding that these models are more robust to naturally corrupted patches than CNNs while being more vulnerable to adversarially generated ones. Inspired by the observed results, the authors propose a simple Temperature Scaling based method that improves the robustness of ViTs. [162] in the first five columns and from the Fashion-MNIST dataset [163] in the last five columns. The images were resized for better visualization. Fig. 13. Images withdrew from the CIFAR-10 dataset [164] in the first five columns and from the CIFAR-100 dataset [164] in the last five columns. The images were resized for better visualization.\nAs previously observed for CNNs, improving the robust accuracy sacrifices the standard accuracy of ViTs, which may limit their applicability in the real context. Derandomized Smoothing [160] uses a progressive smoothed image modeling task to train the ViTs, making them capture the more discriminating local context while preserving global semantic information, improving both robust and standard accuracy.\nVeinGuard [161] is a defense framework that helps ViTs be more robust against adversarial palm-vein image attacks, with practical applicability in the real world. Namely, VeinGuard is composed of a local transformer-based GAN that learns the distribution of unperturbed vein images and a purifier that automatically removes a variety of adversarial perturbations." }, { "figure_ref": [ "fig_0" ], "heading": "VII. DATASETS A. MNIST and F-MNIST", "publication_ref": [ "b161", "b162" ], "table_ref": [], "text": "One of the most used datasets is the MNIST [162] dataset, which contains images of handwritten digits collected from approximately 250 writers in shades of black and white, withdrawn from two different databases. This dataset is divided into training and test sets, with the first one containing 60,000 examples and a second one containing 10,000 examples.\nXiao et al. propose the creation of the Fashion-MNIST [163] dataset by using figures from a fashion website, which has a total size of 70,000 images, contains ten classes, uses greyscale images, and each image has a size of 28x28. The Fashion-MNIST dataset is divided into train and test sets, containing 60,000 and 10,000 examples, respectively. Fig. 12 displays the 10 digits (from 0 to 9) from the MNIST dataset in the first five columns and the 10 fashion objects from Fashion-MNIST dataset in the last five columns. MNIST is one of the most widely studied datasets in the earlier works of adversarial examples, with defense mechanisms already displaying high robustness on this dataset. The same does not apply to Fashion-MNIST, which has not been as widely studied, despite having similar characteristics to MNIST." }, { "figure_ref": [ "fig_1" ], "heading": "B. CIFAR-10 and CIFAR-100", "publication_ref": [ "b165", "b166" ], "table_ref": [], "text": "Another widely studied dataset is the CIFAR-10, which, in conjunction with the CIFAR-100 dataset, are subsets from a Fig. 14. Images withdrew from the Street View House Numbers dataset [166] in the first three columns and from the German Traffic Sign Recognition Benchmark dataset [167] in the last three columns. The images were resized for better visualization. " }, { "figure_ref": [ "fig_1" ], "heading": "C. Street View Datasets", "publication_ref": [ "b165", "b166", "b167", "b168", "b169", "b170" ], "table_ref": [], "text": "The Street View House Numbers (SVHN) [166] dataset provides the same challenge as MNIST: identifying which digits are present in a colored image, containing ten classes, 0 to 9 digits, and an image size of 32x32 centered around a single character, with multiple digits in a single image. Regarding the dataset size, it has 630,420 digit images, but only 73,257 images are used for training, 26,032 images are used for testing, and the remaining 531,131 images can be used as additional training data.\nGerman Traffic Sign Recognition Benchmark (GT-SRB) [167] is a dataset containing 43 classes of different traffic signs, has 50,000 images, and demonstrates realistic scenarios. The dataset has 51,840 images, whose size varies from 15x15 to 222x193, divided into training, validation, and test sets with 50%, 25%, and 25%, respectively, of the total images.\nThe difficulties associated with the SVHN dataset are displayed in the first three rows of Fig. 14, showing unique digits that occupy the whole image and multiple digits on different backgrounds. Furthermore, the same figure presents the different types of traffic signs in the GTSRB dataset, such as prohibition, warning, mandatory, and end of prohibition. [168] in the top left, from the ImageNet-A dataset [169] in the top right, from the ImageNet-C and ImageNet-P datasets [170] in the bottom left, and ImageNet-COLORDISTORT [171] in the bottom right. The images were resized for better visualization." }, { "figure_ref": [], "heading": "D. ImageNet and Variants", "publication_ref": [ "b167", "b171", "b168", "b169", "b169", "b170" ], "table_ref": [], "text": "ImageNet [168] is one of the largest datasets for object recognition, containing 1,461,406 colored images and 1,000 classes, with images being resized to 224x224. This dataset collected photographs from Flickr, and other search engines, divided into 1.2 million training images, 50,000 validation images, and 100,000 test images.\nA possible alternative to ImageNet, when the dataset size is an important factor, is called Tiny ImageNet [172], a subset of ImageNet that contains fewer classes and images. This dataset contains only 200 classes (from the 1,000 classes in ImageNet), 100,000 training images, 10,000 validation images, and 10,000 test images. These classes include animals, vehicles, household items, insects, and clothing, considering the variety of contexts/environments that these objects can be found. Their images have a size of 64x64 and are colored.\nImageNet-A [169] is a subset of ImageNet, only 200 classes from the 1,000 classes, covering the broadest categories in ImageNet. ImageNet-A is a dataset composed of real-world adversarially filtered images, which were obtained by deleting the correctly predicted images by ResNet-50 classifiers. Despite ImageNet-A being based on the deficiency of ResNet-50, it also demonstrates transferability to unseen models, making this dataset suitable for evaluating the robustness of multiple classifiers.\nTwo additional benchmarks, ImageNet-C [170] and ImageNet-P [170], were designed to evaluate the robustness of DNNs. The ImageNet-C standardizes and expands the corruption robustness topic, consisting of 75 corruptions applied to each image in the ImageNet validation set. ImageNet-P applies distortions to the images, though it differs from ImageNet-C because it contains perturbation sequences using only ten common perturbations.\nAnother benchmark to evaluate the model generalization capability is the ImageNet-COLORDISTORT (ImageNet-CD) [171], which considers multiple distortions in the color of an image using different color space representations. This dataset contains the 1,000 classes from ImageNet, removing images without color channels, and the same image considers multiple color distortions under the Red Green Blue (RGB), " }, { "figure_ref": [], "heading": "VIII. METRICS AND STATE-OF-THE-ART RESULTS", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A. Evaluation Metrics", "publication_ref": [ "b172", "b20", "b32", "b32" ], "table_ref": [], "text": "Due to the nature of adversarial examples, they need specific metrics to be correctly evaluated and constructed. Following this direction, multiple works have been proposing different metrics that calculate the percentage of adversarial examples that make a model misclassify (fooling rate), measure the amount of perturbation made in an image (destruction rate), and calculate the model robustness to adversarial examples (average robustness).\n1) Accuracy: This metric measures the number of samples that are correctly predicted by the model, which is defined as:\naccuracy = T P + T N T P + T N + F P + F N ,(10)\nwhere T P refers to True Positive, T N to True Negative, F P to False Positive, and F N to False Negative. The True Positive and True Negative are the samples whose network prediction is the same as the label (correct), and the False Positive and False Negative are the samples whose network prediction differs from the label (incorrect). When considering original images, this metric is denominated as Clean Accuracy and, when using adversarial images, is named as Robust Accuracy.\n2) Fooling Rate: After being perturbed to change the classifier label, the fooling rate F R [173] was proposed to calculate the percentage of images.\n3) Average Robustness: To objectively evaluate the robustness to adversarial perturbations of a classifier f , the average robustness padv (f ) is defined as [21]:\npadv (f ) = 1 D x∈D r(x) 2 x 2 ,(11)\nwhere r(x) is the estimated minimal perturbation obtained using the attack, and D denotes the test set. 4) Destruction Rate: To evaluate the impact of arbitrary transformations on adversarial images, the notion of destruction rate d is introduced and formally defined as [33]:\nd = n k=1 C(X k , y k true )¬C(X k adv , y k true )C(T (X k adv ), y k true ) n k=1 (X k , y k true )C(X k adv , y k true ) , (12\n)\nwhere n is the number of images, X k is the original image from the dataset, y k true is the true class of this image, X k adv is the adversarial image corresponding to that image, and T is an arbitrary image transformation. ¬C(X k adv , y k true ) is defined as the binary negation of C(X k adv , y k true ). Finally, the function C(X, y) is defined as [33]: \nC(X, y) = 1, if image X is classified as y; 0, otherwise.(13)" }, { "figure_ref": [], "heading": "B. Defense Mechanisms Robustness", "publication_ref": [ "b174", "b35", "b50", "b175" ], "table_ref": [], "text": "The metric used to evaluate models is accuracy, which evaluates the results on both original (Clean Accuracy) and adversarially perturbed (Robust Accuracy) datasets. One of the earliest and strongest adversarial attacks proposed was PGD, which was used by multiple defenses to evaluate their robustness. Table IV displays defenses evaluated on CIFAR-10 under multiple steps PGD attack, ordered by increasing robustness. For the PGD attack, the best performing defenses are from approaches that use supplementary networks (CAFD) or modify the training process (FS and AWP). Overall, Wide ResNets [175] have better robust accuracy, due to high-capacity networks exhibiting greater adversarial robustness [36], [51], suggesting the usage of these networks in future developments of adversarial attacks and defenses.\nTo assess the robustness of defenses for white and blackbox settings, Auto-Attack has gained increased interest over PGD in recent works. Tables V, VI, and VII present a set of defenses that are evaluated under Auto-Attack, on CIFAR-10, CIFAR-100, and ImageNet, respectively, ordered by increasing Robust Accuracy. The most used networks are Wide ResNets with different sizes, with the biggest Wide ResNet displaying better results overall, and the most resilient defense derives from the use of supplementary networks (DISCO), followed by modifying the train process (SODEF) and changing network architecture (STL). The results suggest that the inclusion of additional components to sanitize inputs of the targeted model (use of supplementary networks) is the most resilient approach for model robustness in white and black-box settings. The updated results for defenses under Auto-Attack can be found on the RobustBench [176] website. " }, { "figure_ref": [], "heading": "IX. FUTURE DIRECTIONS", "publication_ref": [], "table_ref": [], "text": "Following the de facto standards adopted by the literature, we suggest that future proposals of defense mechanisms should be evaluated on Auto-Attack, using the robust accuracy as a metric for comparison purposes. The adversarial defense that demonstrates better results is Adversarial Training, which should be a requirement when evaluating attacks and defenses.\nThe state-of-the-art results show that MNIST and CIFAR-10 datasets are already saturated. Other datasets should be further evaluated, namely: 1) CIFAR-100 and ImageNet since adversarial defenses do not achieve state-of-the-art clean accuracy (91% and 95%, respectively); 2) GTSRB and SVHN, depicting harder scenarios with greater variations of background, inclination, and luminosity; and 3) Fashion-MNIST that would allow better comprehension of which image properties influence DNNs performance (e.g., type of task, image shades, number of classes).\nMost works present their results using accuracy as the evaluation metric and, more recently, evaluate their defenses on the Auto-Attack. Furthermore, the values given for in each dataset were standardized by recurrent use. However, there should be an effort to develop a metric/process that quantifies the amount of perturbation added to the original image. This would ease the expansion of adversarial attacks to other datasets that do not have a standardized value.\nThere has been a greater focus on the development of whitebox attacks, which consider that the adversary has access to the network and training data, yet this is not feasible in real contexts, translating into the need of focusing more on the development of black-box attacks. A unique black-box set, physical attacks, also require additional evaluation, considering the properties of the real world and perturbations commonly found in it. Considering the increasing liberation of ML in the real world, end-users can partially control the training phase of DNNs, suggesting that gray-box attacks will intensify (access only to network or data).\nThe different network architectures are designed to increase the clean accuracy of DNNs in particular object recognition datasets, yet there should be further evaluation on the impact of the different layers and their structure. ViTs introduce a new paradigm in image analysis and are more robust against natural corruptions, suggesting that building ViT inherently robust to adversarial examples might be a possible solution.\nDDPM are generative models that perform adversarial purification of images, but they can not be applied in realtime since they take up to dozens of seconds to create a single purified image. Therefore, an effort on developing close to real-time adversarial purification strategies is a viable strategy for future works." }, { "figure_ref": [], "heading": "X. CONCLUSIONS", "publication_ref": [], "table_ref": [], "text": "DNNs are vulnerable to a set of inputs, denominated as adversarial examples, that drastically modify the output of the considered network and are constructed by adding a perturbation to the original image. This survey presents background concepts, such as adversary capacity and vector norms, essential to comprehend adversarial settings, providing a comparison with existing surveys in the literature. Adversarial attacks are organized based on the adversary knowledge, highlighting the emphasis of current works toward white box settings, and adversarial defenses are clustered into six domains, with most works exploring the adversarial training strategy. We also present the latest developments of adversarial settings in ViTs and describe the commonly used datasets, providing the state-of-the-art results in CIFAR-10, CIFAR-100, and ImageNet. Finally, we propose a set of open issues that can be explored for subsequent future works." }, { "figure_ref": [], "heading": "ACKNOWLEDGMENTS", "publication_ref": [], "table_ref": [], "text": "This work was supported in part by the Portuguese FCT/MCTES through National Funds and co-funded by EU funds under Project UIDB/50008/2020; in part by the FCT Doctoral Grant 2020.09847.BD and Grant 2021.04905.BD;" } ]
Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations that alter the network prediction (adversarial examples), raising concerns regarding its usage in critical areas, such as self-driving vehicles, malware detection, and healthcare. This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies. We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses
[ { "figure_caption": "Fig. 1 .Fig. 2 .12Fig. 1. Schematic example of the Convolutional Neural Networks mechanism to classify images.", "figure_data": "", "figure_id": "fig_0", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Fig. 4 .4Fig. 4. Geometric representation of the l 0 , l 2 , and l∞ norms, from left to right, respectively.", "figure_data": "", "figure_id": "fig_1", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Fig. 5 .5Fig. 5. Schematic overview of an Adversarial Attack under White-box Settings (left) and Black-box Settings (right). The first one uses the classifier predictions and network gradients to create perturbations (similar to noise), which can fool this classifier. These perturbations are added to the original images, creating adversarial images, which are fed to the network and cause misclassification. In the Black-box Settings, the same process is applied to a known classifier, and the obtained images are used to attack another classifier (represented as Target Architecture).", "figure_data": "", "figure_id": "fig_2", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Fig. 6 .6Fig. 6. Schematic overview of Adversarial Training. A subset of the original images of a dataset is fed into an adversarial attack (e.g., PGD, FGSM, or C&W), which creates adversarial images. Each batch contains original and adversarial images, with the Classifier being normally trained.", "figure_data": "", "figure_id": "fig_3", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Misclassification Aware adveRsarial Training (MART) [61] is an algorithm that explicitly differentiates the misclassified and correctly classified examples during training. This proposal is motivated by the finding that different maximization techniques are negligible, but minimization ones are crucial when looking at the misclassified examples.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 7 .7Fig.7. Method proposed by Defensive Distillation[72]. An Initial Network is trained on the dataset images and labels (discrete values). Then, the predictions given by the Initial Network are fed into another network, replacing the dataset labels. These predictions are continuous values, making the Distilled Network more resilient to adversarial attacks.", "figure_data": "", "figure_id": "fig_5", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Fig. 9 .9Fig. 9. Schematic overview of the Use of Supplementary Networks. The Detector Network was previously trained to detect adversarial images and is included between the input images and the classifier. This network receives the input images and determines if these images are Adversarial or Not. If they are not, they are redirected to the Classifier; If they are, they are susceptible to Human evaluation.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Fig. 12 .12Fig.12. Images withdrew from the MNIST dataset[162] in the first five columns and from the Fashion-MNIST dataset[163] in the last five columns. The images were resized for better visualization.", "figure_data": "", "figure_id": "fig_7", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "vast database containing 80 million tiny images[165], 32x32, and three color channels75,062 different classes. CIFAR-10 [164] contains only ten classes from this large database, with 6,000 images for each class, distributed into 50,000 training images and 10,000 test images. This dataset considers different objects, namely, animals and vehicles, usually found in different environments. CIFAR-100 [164] contains 100 classes with only 600 images for each one with the same size and amount of color channels as the CIFAR-10 dataset. CIFAR-100 groups its 100 classes into 20 superclasses, located in different contexts/environments, making this dataset much harder to achieve high results. Examples from the CIFAR-10 dataset are shown in Fig. 13 in the first five columns, and the remaining columns display examples of the superclasses from CIFAR-100. Due to the unsatisfactory results demonstrated by models trained on CIFAR-10, the CIFAR-100 dataset has not been included in most studies under the context of adversarial examples, suggesting that solving the issue of adversarial-perturbed images is still at its inception.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Fig. 15 .15Fig.15. Images withdrew from the ImageNet dataset[168] in the top left, from the ImageNet-A dataset[169] in the top right, from the ImageNet-C and ImageNet-P datasets[170] in the bottom left, and ImageNet-COLORDISTORT[171] in the bottom right. The images were resized for better visualization.", "figure_data": "", "figure_id": "fig_9", "figure_label": "15", "figure_type": "figure" }, { "figure_caption": "Hue-Saturation-Value (HSV), CIELAB, and YCbCr color spaces considered common transformations used in image processing. It is possible to observe a set of images withdrawn from ImageNet in the top left of Fig. 15. Additionally, some images misclassified by multiple classifiers (ImageNet-A) are shown in the top right of the same figure. The bottom represents the ImageNet with common corruptions and perturbations and is manipulated by multiple image techniques on the left and right, respectively.", "figure_data": "", "figure_id": "fig_10", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "TERMINOLOGIES USED IN THE CONTEXT OF ADVERSARIAL ATTACKS AND THEIR DEFINITION.", "figure_data": "", "figure_id": "tab_0", "figure_label": "I", "figure_type": "table" }, { "figure_caption": "extensively analyzed the adversarial attacks and defenses proposed under this context, providing conjectures for the existence of adversarial examples and evaluating the capacity of adversarial examples transferring between different DNNs.", "figure_data": "", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "SHOWN IN STATE-OF-THE-ART SURVEYS ON ADVERSARIAL ATTACKS.", "figure_data": "SurveyYearWhite & Black-Box Comparison SurveyGrouping of DefensesFuture Directions Overview Architectures Datasets Metrics andState-of-the-art ComparisonVision TransformersAkhtar and Mian [23]2018×××××Qiu et al. [26]2019×××××××Serban et al. [25]2020×××××Xu et al. [27]2020××××××Chakraborty et al. [28] 2021×××××××Long et al. [29]2022×××××Liang et al. [30]2022×××××Zhou et al. [31]2022××××This survey2023", "figure_id": "tab_3", "figure_label": "II", "figure_type": "table" }, { "figure_caption": "CHARACTERISTICS TO THE CONTEXT OF ADVERSARIAL EXAMPLES OF THE STATE-OF-THE-ART DATASETS. #CLASSES MEANS THE NUMBER OF CLASSES IN THE DATASET. EMPTY COLOR COLUMN MEANS THAT THE IMAGES IN THAT DATASET USE GREYSCALE OR BLACK AND WHITE SHADES. DATASETS WITH * ARE ONLY USED FOR TESTING PURPOSES.", "figure_data": "DatasetSize#ClassesClassesColorMNIST70,00010DigitsFashion-MNIST70,00010ClothingCIFAR-1060,00010Animals VehiclesSVHN630,42010DigitsGTSRB51,84043Traffic SignsCIFAR-10060,000100Household Items Outdoor ScenesTiny ImageNet120,000200Animals Household ItemsImageNet-A *7,500200Vehicles FoodImageNet-C *3,750,000200Vehicles FoodImageNet-P *15,000,000200Vehicles FoodImageNet1,431,1671,000Vehicles Electronic devicesImageNet-CD *736,5151,000Vehicles Electronic devices", "figure_id": "tab_4", "figure_label": "III", "figure_type": "table" }, { "figure_caption": "Table III summarizes the main characteristics of the datasets presented throughout this section.", "figure_data": "", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" }, { "figure_caption": "COMPARISON OF DIFFERENT DEFENSE MECHANISMS ON CIFAR-10 UNDER PGD ATTACK, l∞ AND = 8/255. CLEAN AND ROBUST REFERS TO ACCURACY WITHOUT AND WITH ADVERSARIAL ATTACKS, RESPECTIVELY. DEFENSES WITH \"-\" ON CLEAN ACCURACY DO NOT HAVE A CLEAN ACCURACY REPORTED.", "figure_data": "Defense MethodYearArchitectureAccuracy Clean RobustBPFC [89]2020ResNet-1882.434.4SNS [94]2021VGG-1686.039.6AT-MIFGSM [51]2017Inception v385.345.9AT-PGD [36]2018ResNet-1887.347.0RobNets [118]2020RobNet-free82.852.6HGD [17]2018DUNET92.453.1RSLAD [93]2021ResNet-1883.454.2MART [61]2020WRN-28-1083.155.6TRADES [54]2019WRN-34-1084.956.4BagT [133]2020WRN-34-10-56.4RO [132]2020ResNet-18-56.8DOA [62]2019VGGFace93.661.0AWP [90]2020WRN-28-10-63.6FS [19]2019WRN-28-1090.068.6CAFD [112]2021DUNET91.187.2", "figure_id": "tab_6", "figure_label": "IV", "figure_type": "table" }, { "figure_caption": "COMPARISON OF DIFFERENT DEFENSE MECHANISMS ON CIFAR-10 UNDER AUTO-ATTACK ATTACK, l∞ AND = 8/255 . CLEAN AND ROBUST REFERS TO ACCURACY WITHOUT AND WITH ADVERSARIAL ATTACKS, RESPECTIVELY.", "figure_data": "ArchitectureDefense MethodYearAccuracy Clean RobustInput Random [119]201794.38.6BAT [57]201992.829.4FS [19]201990.036.6Jpeg [125]201683.950.7Pretrain [174]201987.154.9UAT [84]201986.556.0MART [61]202087.556.3HYDRA [92]202089.057.1RST [85]201989.759.5GI-AT [68]202089.459.6Proxy [140]202189.559.7WRN28-10AWP [90]202088.360.0FDA [134]202187.360.8HAT [69]202188.261.0SCORE [103]202288.661.0PSSiLU [104]202287.061.6Gowal et al. [135]202089.562.8IRUGD [138]202187.563.4Wang et al. [146]202392.467.3STL [121]201982.267.9DISCO [116]202289.385.6Free-AT [58]201986.141.5AT-PGD [36]201887.144.0YOPO [60]201987.244.8TLA [81]201986.247.4LAT [82]201987.849.1SAT [63]202086.850.7FAT [70]202285.351.1WRN34-10LBGAT [97] TRADES [54]2021 201988.2 84.952.3 53.1SAT [91]202083.553.3Friend-AT [64]202084.555.5AWP [90]202085.456.2LTD [98]202185.256.9OA-AT [66]202185.358.0Proxy [140]202286.760.3HAT [69]202191.562.8SCORE [103]202289.063.4IRUGD [138]202191.165.9WRN-70-16Gowal et al. [135] FDA [134]2020 202188.7 92.266.1 66.6Wang et al. [146]202393.370.7SODEF [102]202193.771.3", "figure_id": "tab_7", "figure_label": "V", "figure_type": "table" }, { "figure_caption": "COMPARISON OF DIFFERENT DEFENSE MECHANISMS ON CIFAR-100 UNDER AUTO-ATTACK ATTACK, l∞ AND = 8/255 . CLEAN AND ROBUST REFERS TO ACCURACY WITHOUT AND WITH ADVERSARIAL ATTACKS, RESPECTIVELY.", "figure_data": "ArchitectureDefense MethodYearAccuracy Clean RobustInput Random [119]201773.63.3LIIF [115]202180.33.4Bit Reduction [107]201776.93.8Pretrain [174]201959.228.4WRN28-10SCORE [103] FDA [134]2022 202163.7 62.431.1 32.1Wang et al. [146]202378.638.8Jpeg [125]201661.939.6STL [121]201967.446.1DISCO [116]202272.167.9SAT [63]202062.824.6AWP [90]202060.428.9LBGAT [97]202160.629.3WRN34-10OA-AT [66]202165.730.4LTD [98]202164.130.6Proxy [140]202265.931.2DISCO [116]202271.669.0SCORE [103]202265.633.1WRN-70-16FDA [134] Gowal et al. [135]2021 202063.6 69.234.6 36.9Wang et al. [146]202375.242.7", "figure_id": "tab_8", "figure_label": "VI", "figure_type": "table" }, { "figure_caption": "COMPARISON OF DIFFERENT DEFENSE MECHANISMS ON IMAGENET UNDER AUTO-ATTACK ATTACK, l∞ AND = 4/255. CLEAN AND ROBUST REFERS TO ACCURACY WITHOUT AND WITH ADVERSARIAL ATTACKS, RESPECTIVELY.", "figure_data": "ArchitectureDefense MethodYearAccuracy Clean RobustBit Reduction [107]201767.64.0Jpeg [125]201667.213.1ResNet-18Input Random [119] Salman et al. [177]2017 202064.0 52.917.8 25.3STL [121]201965.632.9DISCO [116]202268.060.9Bit Reduction [107]201773.81.9Input Random [119]201774.018.8Cheap-AT [65]202055.626.2ResNet-50Jpeg [125]201673.633.4Salman et al. [177]202064.035.0STL [121]201968.350.2DISCO [116]202272.668.2Bit Reduction [107]201775.15.0Input Random [119]201771.723.6WRN-50-2Jpeg [125]201675.424.9Salman et al. [177]202068.538.1DISCO [116]202275.169.5", "figure_id": "tab_9", "figure_label": "VII", "figure_type": "table" } ]
Joana C Costa; Tiago Roxo; Hugo Proenc ¸a; Pedro R M Inácio
[ { "authors": "I Goodfellow; Y Bengio; A Courville", "journal": "MIT press", "ref_id": "b0", "title": "Deep learning", "year": "2016" }, { "authors": "L Liu; W Ouyang; X Wang; P Fieguth; J Chen; X Liu; M Pietikäinen", "journal": "IJCV", "ref_id": "b1", "title": "Deep learning for generic object detection: A survey", "year": "2020" }, { "authors": "H.-B Zhang; Y.-X Zhang; B Zhong; Q Lei; L Yang; J.-X Du; D.-S Chen", "journal": "Sensors", "ref_id": "b2", "title": "A comprehensive survey of vision-based human action recognition methods", "year": "2019" }, { "authors": "I Masi; Y Wu; T Hassner; P Natarajan", "journal": "IEEE", "ref_id": "b3", "title": "Deep face recognition: A survey", "year": "2018" }, { "authors": "M Wang; W Deng", "journal": "Neurocomputing", "ref_id": "b4", "title": "Deep face recognition: A survey", "year": "2021" }, { "authors": "T Wolf; L Debut; V Sanh; J Chaumond; C Delangue; A Moi; P Cistac; T Rault; R Louf; M Funtowicz", "journal": "", "ref_id": "b5", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "D W Otter; J R Medina; J K Kalita", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b6", "title": "A survey of the usages of deep learning for natural language processing", "year": "2020" }, { "authors": "A I Maqueda; A Loquercio; G Gallego; N García; D Scaramuzza", "journal": "", "ref_id": "b7", "title": "Event-based vision meets deep learning on steering prediction for self-driving cars", "year": "2018-06" }, { "authors": "A Ndikumana; N H Tran; D H Kim; K T Kim; C S Hong", "journal": "IEEE Transactions on Intelligent Transportation Systems", "ref_id": "b8", "title": "Deep learning based caching for self-driving cars in multi-access edge computing", "year": "2021" }, { "authors": "Z Yuan; Y Lu; Z Wang; Y Xue", "journal": "Association for Computing Machinery", "ref_id": "b9", "title": "Droid-sec: Deep learning in android malware detection", "year": "2014" }, { "authors": "R Vinayakumar; M Alazab; K P Soman; P Poornachandran; S Venkatraman", "journal": "IEEE Access", "ref_id": "b10", "title": "Robust intelligent malware detection using deep learning", "year": "2019" }, { "authors": "X Zhou; W Liang; I Kevin; K Wang; H Wang; L T Yang; Q Jin", "journal": "IEEE Internet of Things Journal", "ref_id": "b11", "title": "Deep-learning-enhanced human activity recognition for internet of healthcare things", "year": "2020" }, { "authors": "Z Liang; G Zhang; J X Huang; Q V Hu", "journal": "IEEE", "ref_id": "b12", "title": "Deep learning for healthcare decision making with emrs", "year": "2014" }, { "authors": "C Szegedy; W Zaremba; I Sutskever; J Bruna; D Erhan; I J Goodfellow; R Fergus", "journal": "", "ref_id": "b13", "title": "Intriguing properties of neural networks", "year": "2014" }, { "authors": "N Papernot; P Mcdaniel; S Jha; M Fredrikson; Z B Celik; A Swami", "journal": "IEEE EuroS&P", "ref_id": "b14", "title": "The limitations of deep learning in adversarial settings", "year": "2016" }, { "authors": " Google", "journal": "", "ref_id": "b15", "title": "Vertex ai pricing", "year": "2022-05-10" }, { "authors": "F Liao; M Liang; Y Dong; T Pang; J Zhu; X Hu", "journal": "", "ref_id": "b16", "title": "Defense against adversarial attacks using high-level representation guided denoiser", "year": "2018" }, { "authors": "P Samangouei; M Kabkab; R Chellappa", "journal": "", "ref_id": "b17", "title": "Defense-gan: Protecting classifiers against adversarial attacks using generative models", "year": "2018" }, { "authors": "H Zhang; J Wang", "journal": "", "ref_id": "b18", "title": "Defense against adversarial attacks using feature scattering-based adversarial training", "year": "2019" }, { "authors": "Y Huang; Y Li", "journal": "", "ref_id": "b19", "title": "Zero-shot certified defense against adversarial patches with vision transformers", "year": "2021" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; P Frossard", "journal": "", "ref_id": "b20", "title": "Deepfool: A simple and accurate method to fool deep neural networks", "year": "2016" }, { "authors": "A Dabouei; S Soleymani; F Taherkhani; J M Dawson; N M Nasrabadi", "journal": "IEEE WACV", "ref_id": "b21", "title": "Smoothfool: An efficient framework for computing smooth adversarial perturbations", "year": "2020" }, { "authors": "N Akhtar; A Mian", "journal": "IEEE Access", "ref_id": "b22", "title": "Threat of adversarial attacks on deep learning in computer vision: A survey", "year": "2018" }, { "authors": "Q Liu; P Li; W Zhao; W Cai; S Yu; V C M Leung", "journal": "IEEE Access", "ref_id": "b23", "title": "A survey on security threats and defensive techniques of machine learning: A data driven view", "year": "2018" }, { "authors": "A Serban; E Poll; J Visser", "journal": "ACM Computing Surveys", "ref_id": "b24", "title": "Adversarial examples on object recognition: A comprehensive survey", "year": "2020" }, { "authors": "S Qiu; Q Liu; S Zhou; C Wu", "journal": "Applied Sciences", "ref_id": "b25", "title": "Review of artificial intelligence adversarial attack and defense technologies", "year": "2019" }, { "authors": "H Xu; Y Ma; H.-C Liu; D Deb; H Liu; J.-L Tang; A K Jain", "journal": "International Journal of Automation and Computing", "ref_id": "b26", "title": "Adversarial attacks and defenses in images, graphs and text: A review", "year": "2020" }, { "authors": "A Chakraborty; M Alam; V Dey; A Chattopadhyay; D Mukhopadhyay", "journal": "CAAI Transactions on Intelligence Technology", "ref_id": "b27", "title": "A survey on adversarial attacks and defences", "year": "2021" }, { "authors": "T Long; Q Gao; L Xu; Z Zhou", "journal": "Computers & Security", "ref_id": "b28", "title": "A survey on adversarial attacks in computer vision: Taxonomy, visualization and future directions", "year": "2022" }, { "authors": "H Liang; E He; Y Zhao; Z Jia; H Li", "journal": "Electronics", "ref_id": "b29", "title": "Adversarial attack and defense: A survey", "year": "2022" }, { "authors": "S Zhou; C Liu; D Ye; T Zhu; W Zhou; P S Yu", "journal": "ACM Computing Surveys", "ref_id": "b30", "title": "Adversarial attacks and defenses in deep learning: From a perspective of cybersecurity", "year": "2022" }, { "authors": "I J Goodfellow; J Shlens; C Szegedy", "journal": "", "ref_id": "b31", "title": "Explaining and harnessing adversarial examples", "year": "2015" }, { "authors": "A Kurakin; I J Goodfellow; S Bengio", "journal": "", "ref_id": "b32", "title": "Adversarial examples in the physical world", "year": "2017" }, { "authors": "N Carlini; D Wagner", "journal": "Ieee", "ref_id": "b33", "title": "Towards evaluating the robustness of neural networks", "year": "2017" }, { "authors": "F Tramèr; N Papernot; I J Goodfellow; D Boneh; P Mcdaniel", "journal": "", "ref_id": "b34", "title": "The space of transferable adversarial examples", "year": "2017" }, { "authors": "A Madry; A Makelov; L Schmidt; D Tsipras; A Vladu", "journal": "", "ref_id": "b35", "title": "Towards deep learning models resistant to adversarial attacks", "year": "2018" }, { "authors": "C Xiao; B Li; J.-Y Zhu; W He; M Liu; D Song", "journal": "", "ref_id": "b36", "title": "Generating adversarial examples with adversarial networks", "year": "2018" }, { "authors": "I J Goodfellow; J Pouget-Abadie; M Mirza; B Xu; D Warde-Farley; S Ozair; A C Courville; Y Bengio", "journal": "", "ref_id": "b37", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Y Dong; F Liao; T Pang; H Su; J Zhu; X Hu; J Li", "journal": "", "ref_id": "b38", "title": "Boosting adversarial attacks with momentum", "year": "2018" }, { "authors": "F Croce; M Hein", "journal": "", "ref_id": "b39", "title": "Sparse and imperceivable adversarial attacks", "year": "2019" }, { "authors": "R Duan; X Ma; Y Wang; J Bailey; A K Qin; Y Yang", "journal": "", "ref_id": "b40", "title": "Adversarial camouflage: Hiding physical-world attacks with natural styles", "year": "2020" }, { "authors": "Z Wang; H Guo; Z Zhang; W Liu; Z Qin; K Ren", "journal": "", "ref_id": "b41", "title": "Feature importance-aware transferable adversarial attacks", "year": "2021" }, { "authors": "Z Yuan; J Zhang; Y Jia; C Tan; T Xue; S Shan", "journal": "", "ref_id": "b42", "title": "Meta gradient adversarial attack", "year": "2021" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; O Fawzi; P Frossard", "journal": "", "ref_id": "b43", "title": "Universal adversarial perturbations", "year": "2017" }, { "authors": "J Hayes; G Danezis", "journal": "IEEE SPW", "ref_id": "b44", "title": "Learning universal adversarial perturbations with generative models", "year": "2018" }, { "authors": "A Ilyas; L Engstrom; A Athalye; J Lin", "journal": "PMLR", "ref_id": "b45", "title": "Black-box adversarial attacks with limited queries and information", "year": "2018" }, { "authors": "M Wicker; X Huang; M Kwiatkowska", "journal": "", "ref_id": "b46", "title": "Feature-guided blackbox safety testing of deep neural networks", "year": "2018" }, { "authors": "M Andriushchenko; F Croce; N Flammarion; M Hein", "journal": "Springer", "ref_id": "b47", "title": "Square attack: a query-efficient black-box adversarial attack via random search", "year": "2020" }, { "authors": "F Croce; M Hein", "journal": "PMLR", "ref_id": "b48", "title": "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks", "year": "2020" }, { "authors": "F Croce; M Hein", "journal": "PMLR", "ref_id": "b49", "title": "Minimally distorted adversarial examples with a fast adaptive boundary attack", "year": "2020" }, { "authors": "A Kurakin; I J Goodfellow; S Bengio", "journal": "", "ref_id": "b50", "title": "Adversarial machine learning at scale", "year": "2017" }, { "authors": "F Tramèr; A Kurakin; N Papernot; I Goodfellow; D Boneh; P Mcdaniel", "journal": "", "ref_id": "b51", "title": "Ensemble adversarial training: Attacks and defenses", "year": "2018" }, { "authors": "C K Mummadi; T Brox; J H Metzen", "journal": "", "ref_id": "b52", "title": "Defending against universal perturbations with shared adversarial training", "year": "2019" }, { "authors": "H Zhang; Y Yu; J Jiao; E Xing; L El Ghaoui; M Jordan", "journal": "PMLR", "ref_id": "b53", "title": "Theoretically principled trade-off between robustness and accuracy", "year": "2019" }, { "authors": "D Tsipras; S Santurkar; L Engstrom; A Turner; A Madry", "journal": "", "ref_id": "b54", "title": "Robustness may be at odds with accuracy", "year": "2019" }, { "authors": "D Su; H Zhang; H Chen; J Yi; P.-Y Chen; Y Gao", "journal": "", "ref_id": "b55", "title": "Is robustness the cost of accuracy?-a comprehensive study on the robustness of 18 deep image classification models", "year": "2018" }, { "authors": "J Wang; H Zhang", "journal": "", "ref_id": "b56", "title": "Bilateral adversarial training: Towards fast training of more robust models against adversarial attacks", "year": "2019" }, { "authors": "A Shafahi; M Najibi; M A Ghiasi; Z Xu; J Dickerson; C Studer; L S Davis; G Taylor; T Goldstein", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b57", "title": "Adversarial training for free!", "year": "2019" }, { "authors": "R E Kopp", "journal": "Elsevier", "ref_id": "b58", "title": "Pontryagin maximum principle", "year": "1962" }, { "authors": "D Zhang; T Zhang; Y Lu; Z Zhu; B Dong", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b59", "title": "You only propagate once: Accelerating adversarial training via maximal principle", "year": "2019" }, { "authors": "Y Wang; D Zou; J Yi; J Bailey; X Ma; Q Gu", "journal": "", "ref_id": "b60", "title": "Improving adversarial robustness requires revisiting misclassified examples", "year": "2020" }, { "authors": "T Wu; L Tong; Y Vorobeychik", "journal": "", "ref_id": "b61", "title": "Defending against physically realizable attacks on image classification", "year": "2019" }, { "authors": "C Sitawarin; S Chakraborty; D Wagner", "journal": "", "ref_id": "b62", "title": "Improving adversarial robustness through progressive hardening", "year": "2020" }, { "authors": "J Zhang; X Xu; B Han; G Niu; L Cui; M Sugiyama; M Kankanhalli", "journal": "PMLR", "ref_id": "b63", "title": "Attacks which do not kill training make adversarial learning stronger", "year": "2020" }, { "authors": "E Wong; L Rice; J Z Kolter", "journal": "", "ref_id": "b64", "title": "Fast is better than free: Revisiting adversarial training", "year": "2020" }, { "authors": "S Addepalli; S Jain; G Sriramanan; S Khare; V B Radhakrishnan", "journal": "", "ref_id": "b65", "title": "Towards achieving adversarial robustness beyond perceptual limits", "year": "2021" }, { "authors": "R Zhang; P Isola; A A Efros; E Shechtman; O Wang", "journal": "", "ref_id": "b66", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "J Zhang; J Zhu; G Niu; B Han; M Sugiyama; M Kankanhalli", "journal": "", "ref_id": "b67", "title": "Geometry-aware instance-reweighted adversarial training", "year": "2020" }, { "authors": "R Rade; S.-M Moosavi-Dezfooli", "journal": "", "ref_id": "b68", "title": "Helper-based adversarial training: Reducing excessive margin to achieve a better accuracy vs. robustness trade-off", "year": "2021" }, { "authors": "J Chen; Y Cheng; Z Gan; Q Gu; J Liu", "journal": "", "ref_id": "b69", "title": "Efficient robust training via backward smoothing", "year": "2022" }, { "authors": "S S Gu; L Rigazio", "journal": "", "ref_id": "b70", "title": "Towards deep neural network architectures robust to adversarial examples", "year": "2015" }, { "authors": "N Papernot; P Mcdaniel; X Wu; S Jha; A Swami", "journal": "", "ref_id": "b71", "title": "Distillation as a defense to adversarial perturbations against deep neural networks", "year": "2016" }, { "authors": "N Papernot; P Mcdaniel", "journal": "", "ref_id": "b72", "title": "Extending defensive distillation", "year": "2017" }, { "authors": "K Chalupka; P Perona; F Eberhardt", "journal": "", "ref_id": "b73", "title": "Visual causal feature learning", "year": "2015" }, { "authors": "R Huang; B Xu; D Schuurmans; C Szepesvari", "journal": "", "ref_id": "b74", "title": "Learning with a strong adversary", "year": "2015" }, { "authors": "S Zheng; Y Song; T Leung; I J Goodfellow", "journal": "", "ref_id": "b75", "title": "Improving the robustness of deep neural networks via stability training", "year": "2016" }, { "authors": "V Zantedeschi; M.-I Nicolae; A Rawat", "journal": "", "ref_id": "b76", "title": "Efficient defenses against adversarial attacks", "year": "2017" }, { "authors": "A F Agarap", "journal": "", "ref_id": "b77", "title": "Deep learning using rectified linear units (relu)", "year": "2018" }, { "authors": "R H Hahnloser; R Sarpeshkar; M A Mahowald; R J Douglas; H S Seung", "journal": "Nature", "ref_id": "b78", "title": "Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit", "year": "2000" }, { "authors": "S S Liew; M Khalil-Hani; R Bakhteri", "journal": "Neurocomputing", "ref_id": "b79", "title": "Bounded activation functions for enhanced training stability of deep neural networks on visual pattern recognition problems", "year": "2016" }, { "authors": "C Mao; Z Zhong; J Yang; C Vondrick; B Ray", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b80", "title": "Metric learning for adversarial robustness", "year": "2019" }, { "authors": "N Kumari; M Singh; A Sinha; H Machiraju; B Krishnamurthy; V N Balasubramanian", "journal": "", "ref_id": "b81", "title": "Harnessing the vulnerability of latent layers in adversarially trained models", "year": "2019" }, { "authors": "S.-M Moosavi-Dezfooli; A Fawzi; J Uesato; P Frossard", "journal": "", "ref_id": "b82", "title": "Robustness via curvature regularization, and vice versa", "year": "2019" }, { "authors": "J.-B Alayrac; J Uesato; P.-S Huang; A Fawzi; R Stanforth; P Kohli", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b83", "title": "Are labels required for improving adversarial robustness?", "year": "2019" }, { "authors": "Y Carmon; A Raghunathan; L Schmidt; J C Duchi; P S Liang", "journal": "Advances in neural information processing systems", "ref_id": "b84", "title": "Unlabeled data improves adversarial robustness", "year": "2019" }, { "authors": "H Scudder", "journal": "IEEE Transactions on Information Theory", "ref_id": "b85", "title": "Probability of error of some adaptive pattern-recognition machines", "year": "1965" }, { "authors": "J Cohen; E Rosenfeld; Z Kolter", "journal": "PMLR", "ref_id": "b86", "title": "Certified adversarial robustness via randomized smoothing", "year": "2019" }, { "authors": "X Gao; R K Saha; M R Prasad; A Roychoudhury", "journal": "IEEE/ACM", "ref_id": "b87", "title": "Fuzz testing based data augmentation to improve robustness of deep neural networks", "year": "2020" }, { "authors": "S Addepalli; S Vivekb; A Baburaj; G Sriramanan; R V Babu", "journal": "", "ref_id": "b88", "title": "Towards achieving adversarial robustness by enforcing feature consistency across bit planes", "year": "2020" }, { "authors": "D Wu; S.-T Xia; Y Wang", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b89", "title": "Adversarial weight perturbation helps robust generalization", "year": "2020" }, { "authors": "L Huang; C Zhang; H Zhang", "journal": "Advances in neural information processing systems", "ref_id": "b90", "title": "Self-adaptive training: beyond empirical risk minimization", "year": "2020" }, { "authors": "V Sehwag; S Wang; P Mittal; S Jana", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b91", "title": "Hydra: Pruning adversarially robust neural networks", "year": "2020" }, { "authors": "B Zi; S Zhao; X Ma; Y.-G Jiang", "journal": "", "ref_id": "b92", "title": "Revisiting adversarial robustness distillation: Robust soft labels make student better", "year": "2021" }, { "authors": "C Zhang; A Liu; X Liu; Y Xu; H Yu; Y Ma; T Li", "journal": "IEEE Transactions on Image Processing", "ref_id": "b93", "title": "Interpreting and improving adversarial robustness of deep neural networks with neuron sensitivity", "year": "2021" }, { "authors": "S Kundu; M Nazemi; P A Beerel; M Pedram", "journal": "", "ref_id": "b94", "title": "Dnr: A tunable robust pruning framework through dynamic network rewiring of dnns", "year": "2021" }, { "authors": "C Jin; M Rinard", "journal": "", "ref_id": "b95", "title": "Manifold regularization for locally stable deep neural networks", "year": "2020" }, { "authors": "J Cui; S Liu; L Wang; J Jia", "journal": "", "ref_id": "b96", "title": "Learnable boundary guided adversarial training", "year": "2021" }, { "authors": "E.-C Chen; C.-R Lee", "journal": "", "ref_id": "b97", "title": "Ltd: Low temperature distillation for robust adversarial training", "year": "2021" }, { "authors": "H Yan; J Du; V Y Tan; J Feng", "journal": "", "ref_id": "b98", "title": "On robustness of neural ordinary differential equations", "year": "2019" }, { "authors": "E Haber; L Ruthotto", "journal": "Inverse problems", "ref_id": "b99", "title": "Stable architectures for deep neural networks", "year": "2017" }, { "authors": "X Liu; T Xiao; S Si; Q Cao; S Kumar; C.-J Hsieh", "journal": "", "ref_id": "b100", "title": "How does noise help robustness? explanation and exploration under the neural sde framework", "year": "2020" }, { "authors": "Q Kang; Y Song; Q Ding; W P Tay", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b101", "title": "Stable neural ode with lyapunov-stable equilibrium points for defending against adversarial attacks", "year": "2021" }, { "authors": "T Pang; M Lin; X Yang; J Zhu; S Yan", "journal": "PMLR", "ref_id": "b102", "title": "Robustness and accuracy could be reconcilable by (proper) definition", "year": "2022" }, { "authors": "S Dai; S Mahloujifar; P Mittal", "journal": "IEEE", "ref_id": "b103", "title": "Parameterizing activation functions for adversarial robustness", "year": "2022" }, { "authors": "D Meng; H Chen", "journal": "", "ref_id": "b104", "title": "Magnet: A two-pronged defense against adversarial examples", "year": "2017" }, { "authors": "J H Metzen; T Genewein; V Fischer; B Bischoff", "journal": "", "ref_id": "b105", "title": "On detecting adversarial perturbations", "year": "2017" }, { "authors": "W Xu; D Evans; Y Qi", "journal": "", "ref_id": "b106", "title": "Feature squeezing: Detecting adversarial examples in deep neural networks", "year": "2017" }, { "authors": "P Vincent; H Larochelle; Y Bengio; P.-A Manzagol", "journal": "", "ref_id": "b107", "title": "Extracting and composing robust features with denoising autoencoders", "year": "2008" }, { "authors": "M Arjovsky; S Chintala; L Bottou", "journal": "", "ref_id": "b108", "title": "Wasserstein gan", "year": "2017" }, { "authors": "C Mao; M Chiquier; H Wang; J Yang; C Vondrick", "journal": "", "ref_id": "b109", "title": "Adversarial attacks are reversible with natural supervision", "year": "2021" }, { "authors": "Y Li; M R Min; T Lee; W Yu; E Kruus; W Wang; C.-J Hsieh", "journal": "", "ref_id": "b110", "title": "Towards robustness of deep neural networks via regularization", "year": "2021-10" }, { "authors": "D Zhou; N Wang; C Peng; X Gao; X Wang; J Yu; T Liu", "journal": "", "ref_id": "b111", "title": "Removing adversarial noise in class activation feature space", "year": "2021" }, { "authors": "A Abusnaina; Y Wu; S Arora; Y Wang; F Wang; H Yang; D Mohaisen", "journal": "", "ref_id": "b112", "title": "Adversarial example detection using latent neighborhood graph", "year": "2021-10" }, { "authors": "F Scarselli; M Gori; A C Tsoi; M Hagenbuchner; G Monfardini", "journal": "IEEE Transactions on Neural Networks", "ref_id": "b113", "title": "The graph neural network model", "year": "2009" }, { "authors": "Y Chen; S Liu; X Wang", "journal": "", "ref_id": "b114", "title": "Learning continuous image representation with local implicit image function", "year": "2021" }, { "authors": "C.-H Ho; N Vasconcelos", "journal": "Curran Associates, Inc", "ref_id": "b115", "title": "Disco: Adversarial defense with local implicit functions", "year": "2022" }, { "authors": "C Xie; Y Wu; L Van Der Maaten; A L Yuille; K He", "journal": "", "ref_id": "b116", "title": "Feature denoising for improving adversarial robustness", "year": "2019" }, { "authors": "M Guo; Y Yang; R Xu; Z Liu", "journal": "", "ref_id": "b117", "title": "When nas meets robustness: In search of robust architectures against adversarial attacks", "year": "2020" }, { "authors": "C Xie; J Wang; Z Zhang; Z Ren; A Yuille", "journal": "", "ref_id": "b118", "title": "Mitigating adversarial effects through randomization", "year": "2018" }, { "authors": "M Atzmon; N Haim; L Yariv; O Israelov; H Maron; Y Lipman", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b119", "title": "Controlling neural level sets", "year": "2019" }, { "authors": "B Sun; N.-H Tsai; F Liu; R Yu; H Su", "journal": "", "ref_id": "b120", "title": "Adversarial defense by stratified convolutional sparse coding", "year": "2019" }, { "authors": "P Benz; C Zhang; I S Kweon", "journal": "", "ref_id": "b121", "title": "Batch normalization increases adversarial vulnerability: Disentangling usefulness and robustness of model features", "year": "2020" }, { "authors": "S Ioffe; C Szegedy", "journal": "PMLR", "ref_id": "b122", "title": "Batch normalization: Accelerating deep network training by reducing internal covariate shift", "year": "2015" }, { "authors": "W F Good; G S Maitz; D Gur", "journal": "Journal of Digital Imaging", "ref_id": "b123", "title": "Joint photographic experts group (jpeg) compatible data compression of mammograms", "year": "1994" }, { "authors": "G K Dziugaite; Z Ghahramani; D M Roy", "journal": "", "ref_id": "b124", "title": "A study of the effect of jpg compression on adversarial images", "year": "2016" }, { "authors": "X Huang; M Kwiatkowska; S Wang; M Wu", "journal": "CAV", "ref_id": "b125", "title": "Safety verification of deep neural networks", "year": "2017" }, { "authors": "K Pei; Y Cao; J Yang; S S Jana", "journal": "", "ref_id": "b126", "title": "Deepxplore: Automated whitebox testing of deep learning systems", "year": "2017" }, { "authors": "L Ma; F Juefei-Xu; F Zhang; J Sun; M Xue; B Li; C Chen; T Su; L Li; Y Liu; J Zhao; Y Wang", "journal": "", "ref_id": "b127", "title": "Deepgauge: Multi-granularity testing criteria for deep learning systems", "year": "2018" }, { "authors": "J Kim; R Feldt; S Yoo", "journal": "IEEE/ACM", "ref_id": "b128", "title": "Guiding deep learning system testing using surprise adequacy", "year": "2019" }, { "authors": "T Devries; G W Taylor", "journal": "", "ref_id": "b129", "title": "Improved regularization of convolutional neural networks with cutout", "year": "2017" }, { "authors": "H Zhang; M Cisse; Y N Dauphin; D Lopez-Paz", "journal": "", "ref_id": "b130", "title": "mixup: Beyond empirical risk minimization", "year": "2018" }, { "authors": "L Rice; E Wong; Z Kolter", "journal": "PMLR", "ref_id": "b131", "title": "Overfitting in adversarially robust deep learning", "year": "2020" }, { "authors": "T Pang; X Yang; Y Dong; H Su; J Zhu", "journal": "", "ref_id": "b132", "title": "Bag of tricks for adversarial training", "year": "2020" }, { "authors": "S.-A Rebuffi; S Gowal; D A Calian; F Stimberg; O Wiles; T Mann", "journal": "", "ref_id": "b133", "title": "Fixing data augmentation to improve adversarial robustness", "year": "2021" }, { "authors": "S Gowal; C Qin; J Uesato; T Mann; P Kohli", "journal": "", "ref_id": "b134", "title": "Uncovering the limits of adversarial training against norm-bounded adversarial examples", "year": "2020" }, { "authors": "S Elfwing; E Uchibe; K Doya", "journal": "Neural Networks", "ref_id": "b135", "title": "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning", "year": "2018" }, { "authors": "J Ho; A Jain; P Abbeel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b136", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "S Gowal; S.-A Rebuffi; O Wiles; F Stimberg; D A Calian; T A Mann", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b137", "title": "Improving robustness using generated data", "year": "2021" }, { "authors": "J Sohl-Dickstein; E Weiss; N Maheswaranathan; S Ganguli", "journal": "PMLR", "ref_id": "b138", "title": "Deep unsupervised learning using nonequilibrium thermodynamics", "year": "2015" }, { "authors": "V Sehwag; S Mahloujifar; T Handina; S Dai; C Xiang; M Chiang; P Mittal", "journal": "", "ref_id": "b139", "title": "Robust learning meets generative models: Can proxy distributions improve adversarial robustness?", "year": "2022" }, { "authors": "C Shi; C Holtz; G Mishne", "journal": "", "ref_id": "b140", "title": "Online adversarial purification based on self-supervised learning", "year": "2021" }, { "authors": "J Yoon; S J Hwang; J Lee", "journal": "PMLR", "ref_id": "b141", "title": "Adversarial purification with scorebased generative models", "year": "2021" }, { "authors": "W Nie; B Guo; Y Huang; C Xiao; A Vahdat; A Anandkumar", "journal": "PMLR", "ref_id": "b142", "title": "Diffusion models for adversarial purification", "year": "2022" }, { "authors": "Q Wu; H Ye; Y Gu", "journal": "", "ref_id": "b143", "title": "Guided diffusion model for adversarial purification from random noise", "year": "2022" }, { "authors": "C Xiao; Z Chen; K Jin; J Wang; W Nie; M Liu; A Anandkumar; B Li; D Song", "journal": "", "ref_id": "b144", "title": "Densepure: Understanding diffusion models towards adversarial robustness", "year": "2022" }, { "authors": "Z Wang; T Pang; C Du; M Lin; W Liu; S Yan", "journal": "", "ref_id": "b145", "title": "Better diffusion models further improve adversarial training", "year": "2023" }, { "authors": "T Karras; M Aittala; T Aila; S Laine", "journal": "", "ref_id": "b146", "title": "Elucidating the design space of diffusion-based generative models", "year": "2022" }, { "authors": "A Krizhevsky; I Sutskever; G E Hinton", "journal": "Curran Associates, Inc", "ref_id": "b147", "title": "Imagenet classification with deep convolutional neural networks", "year": "2012" }, { "authors": "M Naseer; K Ranasinghe; S H Khan; M Hayat; F S Khan; M Yang", "journal": "", "ref_id": "b148", "title": "Intriguing properties of vision transformers", "year": "2021" }, { "authors": "K He; X Zhang; S Ren; J Sun", "journal": "", "ref_id": "b149", "title": "Deep residual learning for image recognition", "year": "2015" }, { "authors": "K Mahmood; R Mahmood; M Van Dijk", "journal": "", "ref_id": "b150", "title": "On the robustness of vision transformers to adversarial examples", "year": "2021-10" }, { "authors": "A Aldahdooh; W Hamidouche; O Déforges", "journal": "", "ref_id": "b151", "title": "Reveal of vision transformers robustness against adversarial attacks", "year": "2021" }, { "authors": "R R Selvaraju; M Cogswell; A Das; R Vedantam; D Parikh; D Batra", "journal": "", "ref_id": "b152", "title": "Grad-cam: Visual explanations from deep networks via gradient-based localization", "year": "2017-10" }, { "authors": "Z Wei; J Chen; M Goldblum; Z Wu; T Goldstein; Y Jiang", "journal": "", "ref_id": "b153", "title": "Towards transferable adversarial attacks on vision transformers", "year": "2021" }, { "authors": "H Salman; S Jain; E Wong; A Madry", "journal": "", "ref_id": "b154", "title": "Certified patch robustness via smoothed vision transformers", "year": "2021" }, { "authors": "Y Bai; J Mei; A L Yuille; C Xie", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b155", "title": "Are transformers more robust than cnns?", "year": "2021" }, { "authors": "Y Wang; J Wang; Z Yin; R Gong; J Wang; A Liu; X Liu", "journal": "", "ref_id": "b156", "title": "Generating transferable adversarial examples against vision transformers", "year": "2022" }, { "authors": "Y Fu; S Zhang; S Wu; C Wan; Y Lin", "journal": "", "ref_id": "b157", "title": "Patch-fool: Are vision transformers always robust against adversarial perturbations?", "year": "2022" }, { "authors": "J Gu; V Tresp; Y Qin", "journal": "Springer", "ref_id": "b158", "title": "Are vision transformers robust to patch perturbations?", "year": "2022" }, { "authors": "Z Chen; B Li; J Xu; S Wu; S Ding; W Zhang", "journal": "", "ref_id": "b159", "title": "Towards practical certifiable patch defense with vision transformer", "year": "2022" }, { "authors": "Y Li; S Ruan; H Qin; S Deng; M A El-Yacoubi", "journal": "IEEE Transactions on Information Forensics and Security", "ref_id": "b160", "title": "Transformer based defense gan against palm-vein adversarial attacks", "year": "2023" }, { "authors": "Y Lecun; L Bottou; Y Bengio; P Haffner", "journal": "", "ref_id": "b161", "title": "Gradient-based learning applied to document recognition", "year": "1998" }, { "authors": "H Xiao; K Rasul; R Vollgraf", "journal": "", "ref_id": "b162", "title": "Fashion-mnist: a novel image dataset for benchmarking machine learning algorithms", "year": "2017" }, { "authors": "A Krizhevsky; G Hinton", "journal": "", "ref_id": "b163", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "A Torralba; R Fergus; W T Freeman", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b164", "title": "80 million tiny images: A large data set for nonparametric object and scene recognition", "year": "2008" }, { "authors": "Y Netzer; T Wang; A Coates; A Bissacco; B Wu; A Y Ng", "journal": "", "ref_id": "b165", "title": "Reading digits in natural images with unsupervised feature learning", "year": "2011" }, { "authors": "J Stallkamp; M Schlipsing; J Salmen; C Igel", "journal": "Neural networks", "ref_id": "b166", "title": "Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition", "year": "2012" }, { "authors": "O Russakovsky; J Deng; H Su; J Krause; S Satheesh; S Ma; Z Huang; A Karpathy; A Khosla; M Bernstein", "journal": "IJCV", "ref_id": "b167", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "D Hendrycks; K Zhao; S Basart; J Steinhardt; D Song", "journal": "", "ref_id": "b168", "title": "Natural adversarial examples", "year": "2021-06" }, { "authors": "D Hendrycks; T Dietterich", "journal": "", "ref_id": "b169", "title": "Benchmarking neural network robustness to common corruptions and perturbations", "year": "2019" }, { "authors": "K De; M Pedersen", "journal": "", "ref_id": "b170", "title": "Impact of colour on robustness of deep neural networks", "year": "2021" }, { "authors": "Y Le; X Yang", "journal": "CS", "ref_id": "b171", "title": "Tiny imagenet visual recognition challenge", "year": "2015" }, { "authors": "Z Huan; Y Wang; X Zhang; L Shang; C Fu; J Zhou", "journal": "Springer", "ref_id": "b172", "title": "Data-free adversarial perturbations for practical black-box attack", "year": "2020" }, { "authors": "D Hendrycks; K Lee; M Mazeika", "journal": "PMLR", "ref_id": "b173", "title": "Using pre-training can improve model robustness and uncertainty", "year": "2019" }, { "authors": "S Zagoruyko; N Komodakis", "journal": "British Machine Vision Association", "ref_id": "b174", "title": "Wide residual networks", "year": "2016" }, { "authors": "F Croce; M Andriushchenko; V Sehwag; E Debenedetti; N Flammarion; M Chiang; P Mittal; M Hein", "journal": "", "ref_id": "b175", "title": "Robustbench: a standardized adversarial robustness benchmark", "year": "2021" }, { "authors": "H Salman; A Ilyas; L Engstrom; A Kapoor; A Madry", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b176", "title": "Do adversarially robust imagenet models transfer better?", "year": "2020" } ]
[ { "formula_coordinates": [ 2, 105.09, 669.9, 194.93, 14.73 ], "formula_id": "formula_0", "formula_text": "arg min δX δ X s.t. f(X + δ X ) = Y * ,(1)" }, { "formula_coordinates": [ 2, 145.72, 739.01, 154.31, 9.83 ], "formula_id": "formula_1", "formula_text": "X * = X + δ X ,(2)" }, { "formula_coordinates": [ 2, 367.66, 718.55, 195.38, 11.72 ], "formula_id": "formula_2", "formula_text": "||x|| 0 = (|x 1 | 0 +|x 2 | 0 +... + |x n | 0 ),(3)" }, { "formula_coordinates": [ 3, 101.51, 227.65, 198.52, 13.03 ], "formula_id": "formula_3", "formula_text": "||x|| 2 = (|x 1 | 2 +|x 2 | 2 +... + |x n | 2 ) 1 2 ,(4)" }, { "formula_coordinates": [ 3, 138.71, 303.43, 161.32, 14.43 ], "formula_id": "formula_4", "formula_text": "||x|| ∞ = max i |x i |,(5)" }, { "formula_coordinates": [ 4, 385.2, 538.64, 177.84, 9.65 ], "formula_id": "formula_5", "formula_text": "x -• sign(∇loss F,t (x)),(6)" }, { "formula_coordinates": [ 5, 100.5, 529.28, 199.52, 21.98 ], "formula_id": "formula_6", "formula_text": "min c • f (x + δ) + i [(δ i -τ ) + ],(7)" }, { "formula_coordinates": [ 5, 320.14, 348.72, 239.02, 14.66 ], "formula_id": "formula_7", "formula_text": "min θ ρ(θ), where ρ(θ) = E (x,y)∼D max δ∈S L(θ, x + δ, y) , (8" }, { "formula_coordinates": [ 5, 559.16, 349.04, 3.87, 8.64 ], "formula_id": "formula_8", "formula_text": ")" }, { "formula_coordinates": [ 6, 64.21, 508.33, 235.81, 16.76 ], "formula_id": "formula_9", "formula_text": "∆v i ← -arg min r r 2 s.t. k(x i + v + r) = k(x i ),(9)" }, { "formula_coordinates": [ 15, 97.52, 471.7, 202.51, 22.31 ], "formula_id": "formula_10", "formula_text": "accuracy = T P + T N T P + T N + F P + F N ,(10)" }, { "formula_coordinates": [ 15, 120.17, 688.07, 179.86, 26.8 ], "formula_id": "formula_11", "formula_text": "padv (f ) = 1 D x∈D r(x) 2 x 2 ,(11)" }, { "formula_coordinates": [ 15, 323.15, 594.98, 235.74, 36.57 ], "formula_id": "formula_12", "formula_text": "d = n k=1 C(X k , y k true )¬C(X k adv , y k true )C(T (X k adv ), y k true ) n k=1 (X k , y k true )C(X k adv , y k true ) , (12" }, { "formula_coordinates": [ 15, 558.89, 622.91, 4.15, 8.64 ], "formula_id": "formula_13", "formula_text": ")" }, { "formula_coordinates": [ 15, 332.85, 725.74, 230.18, 23.37 ], "formula_id": "formula_14", "formula_text": "C(X, y) = 1, if image X is classified as y; 0, otherwise.(13)" } ]
10.18653/v1/p16-1163
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b26", "b13", "b6", "b14", "b16", "b20", "b12", "b15", "b24" ], "table_ref": [], "text": "Implicit Discourse Relation Recognition (IDRR) is to detect and classify some latent relation in between a pair of text segments (called arguments) without an explicit connective (Xiang and Wang, 2023). Fig. 1 illustrates an argument pair example with a Contingency relation in the Penn Discourse TreeBank (PDTB) corpus, and the implicit connective 'so' is inserted by annotators. IDRR is of great importance for many downstream Natural Language Processing (NLP) applications, such as question answering (Liakata et al., 2013), machine translation (Guzmán et al., 2014), summarization (Huang and Kurohashi, 2021), and etc. However, due to the absence of an explicit connective, inferring discourse relations from the contextual semantics of arguments is still a challenging task. Conventional pre-train and fine-tuning paradigm (Liu et al., 2021) designs sophisticated neural networks to encode the representation of argument pairs upon a Pre-trained Language Model (PLM) for relation classification (Chen et al., 2016b;Liu and Li, 2016;Ruan et al., 2020;Li et al., 2020;Liu et al., 2020). On the one hand, these task-specific neural networks introduce some additional parameters that need to be trained by a large amount of labelled data. On the other hand, the task objective function is often not in accordance with that of the PLM, so that the PLM needs to be fine-tuned for solving downstream tasks, resulting in poor utilization of the encyclopedic linguistic knowledge embedded in the pre-training process.\nThe recent ConnPrompt model (Xiang et al., 2022b) has successfully applied the pre-train, prompt, and predict paradigm, i.e. the so-called prompt learning, in the IDRR task by transforming the IDRR as a connective-cloze task to predict an answer word and map it to a relation sense. The ConnPrompt has achieved the new state-of-the-art performance on the commonly used PDTB corpus (Webber et al., 2019), however it designs three different yet much similar connective prediction templates which inserts the [MASK] token in between two arguments or at the beginning of one argument for answer prediction. Moreover, to fuse different prompt predictions, the ConnPrompt employs a simple majority voting decision fusing as for final relation sense prediction.\nInstead of simple multi-prompt ensemble, we argue that some auxiliary prompt tasks can be designed to enlighten the main prompt task with promoted decision features. For example, as the top relation labels in the PDTB corpus are those plain vocabulary words, we can design an auxiliary task to directly predict such label words from the PLM vocabulary. Furthermore, as the PDTB corpus also contains manually annotated implicit connectives, we can design another auxiliary task to directly predict an annotated connective. Although such auxiliary tasks are not necessarily used to output the final IDRR prediction, they can be jointly trained with the main task on a shared PLM, by which some features learned from the auxiliary tasks can be fused into the main task to promote its decision features for the final prediction.\nMotivated from such considerations, we propose a Task Enlightenment Prompt Learning (TEPrompt) model, where the main IDRR task can be enlightened from some auxiliary prompt tasks in terms of its promoted decision features via fusing auxiliary task features. Specifically, the TEPrompt contains a main prompt task: Discourse Relation Recognition (DRR), and two auxiliary prompt tasks: Sense Semantics Classification (SSC) and Annotated Connective Prediction (ACP). We design each prompt task with a unique template and an answer space. We concatenate three prompt templates as an entire word sequence with two newly added special tokens [Arg 1 ] and [Arg 2 ] for shared argument representation, as the input of a PLM. In the training phase, we jointly train three prompt tasks upon one PLM model but with three different answer predictions as objective functions. In the testing phase, we only take the main prompt decision features yet promoted by fusing the features from the two auxiliary prompts to output the final IDRR decision.\nExperiment results have shown that our proposed TEPrompt outperforms the ConnPrompt with the same conditions and achieves the new state-of-the-art performance on the latest PDTB 3.0 corpus." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "pre-train and fine-tuning paradigm", "publication_ref": [ "b29", "b21", "b29", "b30", "b31", "b5", "b20", "b12", "b31", "b20", "b12", "b8", "b2" ], "table_ref": [], "text": "Conventional pre-train and fine-tuning paradigm usually approaches the IDRR task as a classification problem, and the key is to design a sophisticated downstream neural network for argument representation learning (Zhang et al., 2015;Rutherford et al., 2017). For example, the SCNN model (Zhang et al., 2015) obtains each argument representation via a single convolution layer and concatenates two arguments' representations for relation classification. Some hybrid models have attempted to combine CNN, LSTM, graph convolutional networks and etc., for argument representation learning (Zhang et al., 2021;Jiang et al., 2021b).\nAttention mechanisms have been widely used in neural model to unequally encode each word according to its importance for argument representation (Zhou et al., 2016;Guo et al., 2020;Ruan et al., 2020;Li et al., 2020). For example, Zhou et al. (2016) apply self-attention to weight a word according to its similarity to its belonging argument. Ruan et al. (2020) propose a pipeline workflow to apply interactive attention after self-attention. Li et al. (2020) use a penalty-based loss re-estimation method to regulate the attention learning.\nWord pair features have been exploited to capture interactions between arguments for representation learning (Chen et al., 2016a,b;Xiang et al., 2022a). For example, Chen et al. (2016b) construct a relevance score word-pair interaction matrix based on a bilinear model (Jenatton et al., 2012) and a single layer neural model (Collobert and Weston, 2008). Xiang et al. (2022a) propose an offset matrix network to encode word-pairs' offsets as linguistic evidence for argument representation." }, { "figure_ref": [ "fig_1" ], "heading": "pre-train, prompt, and predict paradigm", "publication_ref": [ "b3", "b17", "b19", "b22", "b23", "b4", "b22", "b19" ], "table_ref": [], "text": "Recently, some large-scale PLMs have been proposed, such as the BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), T5 (Raffel et al., 2020), and etc. The prompt learning has become a new paradigm for many NLP tasks, which uses the probability of text in PLMs to perform a prediction task, and has achieved promising results (Seoh et al., 2021;Wang et al., 2021;Ding et al., 2021). For example, Seoh et al. (2021) propose a cloze question prompt and a natural language inference prompt for Some studies design appropriate prompts to reformulate an IDRR task for predicting discourse relations (Jiang et al., 2021a,b;Xiang et al., 2022b). Jiang et al. (2021a) use a masked PLM to generate a pseudo-connective for relation classification. Jiang et al. (2021b) utilize the PLM T5 (Raffel et al., 2020) to generate the target sentence which contains the meaning of discourse relations. Xiang et al. (2022b) propose the ConnPrompt model with the new state-of-the-art performance, which reformulates the IDRR task as a connective-cloze task. They further use a majority voting decision fusion of the same task but with three much similar cloze templates for final relation sense prediction.\nThe proposed TEPrompt model fuses the learned features of two auxiliary prompt task to boost the main prompt tasks for relation prediction.\n3 The Proposed TEPrompt Model Fig. 2 presents our TEPrompt model, including three modules of prompt templatize, answer prediction and verbalizer for the main prompt task (DRR) and two auxiliary prompt tasks (SSC and ACP). The main DRR prompt task uses a kind of connective-cloze prompt to predict a manually selected answer words between two arguments, and map it to a relation sense; The SSC auxiliary prompt task describes and classifies the sense semantic between two arguments; While the ACP describes and predicts the implicit connective words." }, { "figure_ref": [ "fig_2" ], "heading": "Prompt Templatize", "publication_ref": [], "table_ref": [], "text": "We first reformulate an input argument pair x = (Arg 1 ; Arg 2 ) into a prompt template T (x) by concatenating the main DRR prompt template with two auxiliary prompt templates: SSC and ACP, as the input of a PLM. Some PLM-specific tokens such as [MASK], [CLS] and [SEP] are inserted in the prompt template; While the [MASK] tokens are added for the PLM to predict an answer word v, and the [CLS] and [SEP] tokens are used to indicate the beginning and ending of each prompt template, respectively. Fig. 3 illustrates the three templates for our DRR, SSC and ACP task. We first use a kind of connective-cloze prompt template as the main DRR prompt template T D (x), in which argument-1 and argument-2 are concatenated as an entire word sequence, and the [MASK] token is inserted between two arguments. Besides, two newly added specific tokens [Arg 1 ] and [Arg 2 ] are inserted at the front of argument-1 and argument-2 to represent their se- mantics which are also shared in the SSC template.\nWe also design two discrete prompt templates T S (x) and T A (x) for the auxiliary task SSC and ACP, respectively. The text of SSC template describes the sense semantics between argument-1 and argument-2; While the text of ACP template describes the implicit connective words. The [MASK] tokens are inserted at the end of SSC and ACP template for prediction. Note that in the SSC template, the specific tokens [Arg 1 ] and [Arg 2 ] are used to represent the semantics of argument-1 and argument-2, which are shared and trained with the main prompt task." }, { "figure_ref": [], "heading": "Answer Prediction", "publication_ref": [], "table_ref": [], "text": "After the PLM, we obtain a hidden state h for each input token in the prompt templates, where h ∈ R d h and d h is the dimension of the hidden state. We use h DRR m , h SSC m and h ACP m to denote the hidden state of [MASK] tokens in the DRR, SSC and ACP template, respectively, which are used for the joint training of task enlightenment prompt learning; While the h SSC c and h ACP c are used to denote the hidden state of the [CLS] token in the SSC and ACP template, respectively, which are used for the feature fusion of auxiliary prompt tasks.\nTo fuse the features of auxiliary prompt SSC and ACP into the main DRR task, we use the fusion gate mechanism to integrate their [CLS] representations into the [MASK] representation of the main DRR task, which is next used for the final answer word prediction. Specifically, we first use a fusion gate mechanism to integrate the [CLS] representations of SSC and ACP, the transition functions are computed as follows:\ng c = sigmoid(W c h SSP c + U c h CEP c ), (1) hc = g c h SSP c + (1 -g c ) h CEP c ,(2)\nwhere\nW c ∈ R d h ×d h , U c ∈ R d h ×d h are\nlearnable parameters and donates the element-wise product of vectors.\nWith the fusion gate, we adaptively assign different importance to the features of SSC and ACP prompt task, and outputs hc ∈ R d h as the auxiliary prompt vector. We next use another fusion gate to integrate the auxiliary prompt vector hc into the [MASK] hidden state of the main DRR prompt h DRP m for the final answer prediction. The transition functions are:\ng m = sigmoid(W m h DRP m + U m hc ), (3) hm = g m h DRP m + (1 -g m ) hc ,(4)\nwhere\nW m ∈ R d h ×d h , U m ∈ R d h ×d h are learn- able parameters.\nFinally, the Masked Language Model (MLM) classifier of the PLM uses the fused hidden state hm to estimates the probability of each word in its vocabulary V for the [MASK] token of the DRR task as follows:\nP D ([MASK] DRP = v d ∈ V | T (x)).\n(5)\nNote that, the MLM classifier also estimates an answer word probability P S and P A for the [MASK] token of the auxiliary prompt task SSC and ACP without feature fusion in the joint training." }, { "figure_ref": [], "heading": "Verbalizer", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We define a discrete answer space for the DRR, SSC and ACP prompt task, respectively, which are all subsets of the PLM vocabulary. Specifically, we use sixteen manually selected answer words as the answer space V d of the DRR, the same as that of ConnPrompt (Xiang et al., 2022b). Besides, we use four top-level sense labels in the PDTB corpus as the SSC answer space, V s = {Comparison, Contingency, Expansion, Temporal}, and we use the 174 manually annotated implicit connectives in the PDTB corpus as the ACP answer space V c of ACP. We note that the answer space of DRR is next mapped to a relation sense in verbalizer process, while the answer space of SSC and ACP are only used in the auxiliary task training. After answer prediction, a softmax layer is applied on the prediction scores of our pre-defined answer space to normalize them into probabilities:\nP (v ∈ V | T (x)) = e pv i n j=1 e pv j .(6)\nThen, the predicted answer word of DRR is projected into a unique discourse relation sense based on the pre-defined connection regulation. Table 1 presents the verbalizer connection from the answer word to the PDTB discourse relation sense labels." }, { "figure_ref": [], "heading": "Training and Prediction", "publication_ref": [ "b18" ], "table_ref": [], "text": "In the training phase, we tune the PLM parameters based on the DRR, SSC and ACP prompt task jointly to fuse their learned features. We compute a cross entropy loss for the DRR loss L d , SSC loss L s and ACP loss L c , respectively.\nJ(θ) = - 1 K K k=1 y (k) log(ŷ (k) ) + λ θ 2 , (7\n)\nwhere y (k) and ŷ(k) are the answer label and predicted answer of the k-th training instance respectively. λ and θ are the regularization hyper-parameters. We use the AdamW optimizer (Loshchilov and Hutter, 2019) with L2 regularization for model training. The cost function of our TEPrompt is optimized as follows:\nL = L d + βL s + γL c ,(8)\nwhere β and γ are weight coefficients to balance the importance of the SSC loss and ACP loss." }, { "figure_ref": [], "heading": "Experiment Setting", "publication_ref": [ "b24", "b9", "b3", "b17", "b16", "b20", "b12", "b15", "b25" ], "table_ref": [], "text": "In this section, we present our experiment settings, including the dataset, PLMs, competitors, and parameter settings.\nThe PDTB 3.0 Dataset: Our experiments are conducted on the Penn Discourse TreeBank (PDTB) 3.0 corpus1 (Webber et al., 2019), which contains more than one million words of English texts from the Wall Street Journal. Following the conventional data splitting, we use sections 2-20 as the full training set, sections 21-22 as the testing set and 0-1 as the development set (Ji and Eisenstein, 2015). Our experiments are conducted on the four top-level classes of relation sense, including Comparison, Contingency, Expansion, Temporal. Pre-trained Language Models: We use two of the most representative masked pre-trained language models (PLM) for comparison: BERT (Devlin et al., 2019) is the first Transformer-based large-scale pre-trained PLM proposed by Google2 , which is pre-trained using a cloze task and a next sentence prediction task; RoBERTa (Liu et al., 2019) is a BERT-enhanced PLM proposed by Facebook3 , which removes the next sentence prediction objective and is pre-trained on a much larger dataset with some modified key hyper-parameters.\nCompetitors: We compare our TEPrompt with the following advanced models:\n• DAGRN (Chen et al., 2016b) encodes wordpair interactions by a neural tensor network.\n• NNMA (Liu and Li, 2016) combines two arguments' representations for stacked interactive attentions.\n• IPAL (Ruan et al., 2020) propagates selfattention into interactive attention by a crosscoupled network.\n• PLR (Li et al., 2020) uses a penalty-based loss re-estimation to regulate the attention learning.\n• BMGF (Liu et al., 2020) combines bilateral multi-perspective matching and global information fusion to learn a contextualized representation.\n• MANF (Xiang et al., 2022a) encodes two kinds of attentive representation for arguments and fuses them with the word-pairs features.\n• ConnPrompt (Xiang et al., 2022b) applies the prompt learning for IDRR based on the fusion of multi-prompt decisions.\nParameter Setting: We implement the PLM models with 768-dimension provided by Hugging-Face transformers4 (Wolf et al., 2020), and run PyTorch5 framework with CUDA on NVIDIA GTX 3090 Ti GPUs. The maximum length of our TEPrompt template is set to 150 tokens, in which the maximum length of arguments are 70 tokens. We set the mini-batch size to 32, the learning rate to 1e-5, the weight coefficients β and γ to 0.3 and 0.4 respectively, and all trainable parameters are randomly initialized from normal distributions. We release the code at: https://github.com/HustMinsLab/TEPrompt." }, { "figure_ref": [], "heading": "Result and Analysis", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Overall Result", "publication_ref": [ "b3", "b17" ], "table_ref": [ "tab_3" ], "text": "Table 3 compares the overall performance between our TEPrompt and the competitors. We implement a four-way classification on the top-level relation sense of the PDTB dataset and adopt the commonly used macro F 1 score and accuracy (Acc) as performance metrics.\nWe note that the competitors in the first group all use the pre-train and fine-tuning paradigm; While our TEPrompt and the ConnPrompt use the pretrain, prompt, and predict paradigm, i.e. the prompt learning. Besides, the first two competitors both use a kind of distributed and static word embeddings: Word2vec and Glove; while the others use Transformer-based PLM models: BERT and RoBERTa.\nThe first observation is that the DAGRN and NNMA cannot outperform the other competitors. This is not unexpected, as the others employ the more advanced dynamic PLMs pre-trained with deeper neural networks and larger scale of parameters, which have been proven more effective for many downstream NLP tasks (Devlin et al., 2019;Liu et al., 2019). The gaps between large PLM fine-tuning and static embedding for representation learning also have a certain impact on the performance of the IDRR task.\nThe second observation is that our TEPrompt and the ConnPrompt adopting the prompt learning paradigm can significantly outperform the other competitors in terms of much higher macro F1 score (8%+) and Acc(5%+). The outstanding performance can be attributed to the task transformation of connective-cloze prediction into the training of PLMs, other than designing a task-specific model upon PLM, by which the model can better enjoy the encyclopedic linguistic knowledge embedded in a PLM during the model training. Finally, our TEPrompt achieves better performance than the ConnPrompt with the same PLM and outperforms all the other models in both higher macro F1 score and accuracy. Similar results can also be observed in the binary classification (i.e. one-versus-others) of implicit discourse relation recognition, in Table 4. We attribute the outstanding performance of our TEPrompt to the use of auxiliary tasks for enlightenment prompt learning, by which the jointly trained features of auxiliary SSC and ACP prompt task can be well fused into the main DRR task to improve the final answer prediction. This will be further analyzed in our ablation study. Table 4: Comparison of binary classification results on the PDTB (F1 score %). We have reproduced some of the competitors on PDTB 3.0 for fair comparison." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_6" ], "text": "To examine the effectiveness of different prompt tasks, we design the following ablation studies.\n• Prompt-SSC is only the SSC prompt concatenating argument-1 and argument-2 in front, without the DRR and ACP task.\n• TEPrompt-SSC combines the SCC prompt with DRR and ACP, and only uses the predicted answer of SSC for relation sense mapping.\n• Prompt-ACP is only the ACP prompt concatenating argument-1 and argument-2 in front, without the DRR and SSC.\n• TEPrompt-ACP combines the ACP prompt with the DRR and SSC, and uses the predicted answer of ACP for relation sense mapping6 .\n• Prompt-DRR is only the DRR prompt without the auxiliary prompt SSC and ACP.\n• TEPrompt w/o Gate is our task enlightenment prompt model without fusion mechanisms.\nTable 5 compares the results of our ablation study models with both single-prompt and multiprompt ConnPrompt. Task enlightenment prompt: We can observe that the Prompt-DRR has comparable performance to each single-ConnPrompt, viz. ConnPrompt-1/2/3. This is not unexpected. All the three single-ConnPrompts are with the same connective-cloze prompt model, and the only difference is the location of the cloze-mask in each template; While the Prompt-DRR is with the same connective-cloze prompt model and answer space as a single-ConnPrompt. The ConnPrompt-Multi uses multi-prompt majority voting and outperforms any of the single-ConnPrompt; While the TEPrompt designs two auxiliary tasks to augment the main task and outperforms both Prompt-DRR and ConnPrompt-Multi, which validates the effectiveness of our task enlightenment prompt learning via fusing features from both main and auxiliary prompt tasks by joint training.\nPrompt ablation study: Among the second group of prompt ablation models, it can be observed that the Prompt-SSC and Prompt-ACP cannot outperform the Prompt-DRR; While the TEPrompt-SSC and TEPrompt-ACP also cannot outperform the TEPrompt. Although both the SSC and ACP prompt model can each output the final prediction by mapping its predicted answer to a relation sense, however, their objectives are not completely in accordance with the IDRR task. The SCC prompt is designed to classify sense semantics; While the ACP prompt aims at predicting manually annotated connectives. Furthermore, we can also observe that the TEPrompt-SSC and TEPrompt-ACP have achieved better performance than the Prompt-SSC and Prompt-ACP, respectively. This again validates our argument that fusing features from jointly trained auxiliary prompt tasks can be useful to boost the main prompt task prediction. Gate Fusion Mechanism: We also observe that the TEPrompt w/o Gate without gate fusion mechanism cannot outperform the full TEPrompt model, even it jointly trains a PLM as well as the MLM head with two auxiliary tasks. This indicates that the features learned from auxiliary tasks can indeed augment the main task prediction.\nAuxiliary prompt effections: To further investigate the task enlightenment effections, we design several combinations of individual prompt models: the DRR with the only main task, the DRR+SSC and DRR+ACP are the main task enlightened by only one auxiliary task, and DRR+SSC+ACP (viz., TEPrompt) is the main task enlightened by two auxiliary tasks.\nFig. 4 compares the performance of different auxiliary prompt ablation models. We can observe that both the SSC and ACP auxiliary task can help improving the performance of the main DRR task. This suggests that fusing either the sense semantics feature in training SSC or the annotated connective feature in training ACP (viz., the two [CLS] tokens) can help promoting the decision feature of the main DRR task (viz., the [MASK] token) to improve the IDRR prediction. Finally, our TEPrompt joint training with both SSC and ACP auxiliary prompts yields substantial improvements over all ablation models, again approving our arguments and design objectives." }, { "figure_ref": [ "fig_4" ], "heading": "Case Study", "publication_ref": [], "table_ref": [], "text": "We use a case study to compare the TEPrompt and the DRR prompt. Note that the DRR prompt can be regarded as the ConnPrompt using only one template yet without multi-prompt ensemble. Fig. 5 visualizes the representation of the [MASK] token, as well as its prediction probability and classified relation sense by a pie chart. The [MASK] token representation of the TEPrompt is quite different from that of the DRR prompt, as the former also fuses two auxiliary prompt task features. Such feature fusion from auxiliary tasks may enlighten the main task to make correct predictions.\nIt can be observed that the DRR prompt itself tends to predict a Comparison relation (64.76%) corresponding to the answer word 'however' with the highest probability 35.99%. After feature fusion, the TEPrompt can correctly recognize the Contingency relation (83.59%) between the two arguments by predicting the answer word 'so' with a much higher probability 75.43% than that of the DRR prompt prediction (10.60%). We argue that such benefits from the adjustments of prediction probabilities can be attributed to the feature fusion of the two auxiliary prompt tasks." }, { "figure_ref": [], "heading": "Concluding Remarks", "publication_ref": [], "table_ref": [], "text": "In this paper, we have argued a main prompt task can be enlightened by some auxiliary prompt tasks for performance improvements. For the IDRR task, we have proposed a TEPrompt, a task enlightenment prompt model that fuses learned features from our designed auxiliary SSC and ACP task into the decision features of the main DRR task. Since the three prompt tasks are trained jointly, the learned auxiliary task features in the training phase can help promoting the main task decision feature and improving the final relation prediction in the testing phase. Experiment results and ablation studies have validated the effectiveness of our arguments and design objectives in terms of improved stateof-the-art IDRR performance.\nIn our future work, we shall investigate other types of auxiliary tasks for the IDRR task as well as the applicability of such task enlightenment prompt learning for other NLP tasks." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The two auxiliary prompt tasks are closely related to the PDTB corpus, as the top-level relation sense labels are those plain vocabulary words and the PDTB provides manually annotated connectives." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This work is supported in part by National Natural Science Foundation of China (Grant No: 62172167). The computation is completed in the HPC Platform of Huazhong University of Science and Technology." } ]
Implicit Discourse Relation Recognition (IDRR) aims at classifying the relation sense between two arguments without an explicit connective. Recently, the ConnPrompt (Xiang et al., 2022b) has leveraged the powerful prompt learning for IDRR based on the fusion of multi-prompt decisions from three different yet much similar connective prediction templates. Instead of multi-prompt ensembling, we propose to design auxiliary tasks with enlightened prompt learning for the IDRR task. Although an auxiliary task is not used to directly output final prediction, we argue that during the joint training some of its learned features can be useful to boost the main task. In light of such motivations, we propose a task enlightenment prompt learning model, called TEPrompt, to fuse learned features from three related tasks for IDRR. In particular, the TEPrompt contains three tasks, viz., Discourse Relation Recognition (DRR), Sense Semantics Classification (SSC) and Annotated Connective Prediction (ACP), each with a unique prompt template and an answer space. In the training phase, we jointly train three prompt learning tasks with shared argument representation. In the testing phase, we only take the DRR output with fused features as the final IDRR decision. Experiments with the same conditions have shown that the proposed TEPrompt outperforms the ConnPrompt. This can be attributed to the promoted decision features and language models benefited from joint-training of auxiliary tasks.
TEPrompt: Task Enlightenment Prompt Learning for Implicit Discourse Relation Recognition
[ { "figure_caption": "Figure 1 :1Figure 1: An example of implicit discourse relation annotation with manually inserted connective.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Illustration of our TEPrompt framework. It contains three modules of the prompt templatize, answer prediction and verbalizer for the main prompt task (DRR) and two auxiliary prompt tasks (SSC and ACP).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Illustration of our TEPrompt template, which is a concatenation of the three task templates.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Comparison of auxiliary prompt effections.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualization of the predicted answer words and relation sense for the DRR Prompt and TEPrompt.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Answer space of the DRR prompt and the connection to the top-level class discourse relation sense labels in the PDTB corpus.", "figure_data": "Relation Sense Answer wordsComparisonsimilarly, but, however, althoughContingencyfor, if, because, soExpansioninstead, by, thereby, specifically, andTemporalsimultaneously, previously, then", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Table 2 presents the dataset statistics. Statistics of implicit discourse relation instances in PDTB 3.0 with four top-level relation senses.", "figure_data": "RelationTrain Dev. TestExpansion8645748643Comparison 1937190154Contingency 5916579529Temporal1447136148Total17945 1653 1474", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison of overall results on the PDTB.", "figure_data": "", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results of ablation study on the PDTB corpus.", "figure_data": "", "figure_id": "tab_6", "figure_label": "5", "figure_type": "table" } ]
Wei Xiang; Chao Liang; Bang Wang
[ { "authors": "Jifan Chen; Qi Zhang; Pengfei Liu; Xuanjing Huang", "journal": "", "ref_id": "b0", "title": "Discourse relations detection via a mixed generative-discriminative framework", "year": "2016" }, { "authors": "Jifan Chen; Qi Zhang; Pengfei Liu; Xipeng Qiu; Xuanjing Huang", "journal": "", "ref_id": "b1", "title": "Implicit discourse relation detection via a deep architecture with gated relevance network", "year": "2016" }, { "authors": "Ronan Collobert; Jason Weston", "journal": "", "ref_id": "b2", "title": "A unified architecture for natural language processing: Deep neural networks with multitask learning", "year": "2008" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "", "ref_id": "b3", "title": "BERT: pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Ning Ding; Yulin Chen; Xu Han; Guangwei Xu; Pengjun Xie; Hai-Tao Zheng; Zhiyuan Liu; Juanzi Li; Hong-Gee Kim", "journal": "", "ref_id": "b4", "title": "Prompt-learning for fine-grained entity typing", "year": "2021" }, { "authors": "Fengyu Guo; Ruifang He; Jianwu Dang; Jian Wang", "journal": "", "ref_id": "b5", "title": "Working memory-driven neural networks with a novel knowledge enhancement paradigm for implicit discourse relation recognition", "year": "2020" }, { "authors": "Francisco Guzmán; Shafiq Joty; Lluís Màrquez; Preslav Nakov", "journal": "Baltimore", "ref_id": "b6", "title": "Using discourse structure improves machine translation evaluation", "year": "2014" }, { "authors": "Jou Yin; Sadao Huang; Kurohashi", "journal": "", "ref_id": "b7", "title": "Extractive summarization considering discourse and coreference relations based on heterogeneous graph", "year": "2021" }, { "authors": "Rodolphe Jenatton; Nicolas L Roux; Antoine Bordes; Guillaume R Obozinski", "journal": "", "ref_id": "b8", "title": "A latent factor model for highly multi-relational data", "year": "2012" }, { "authors": "Yangfeng Ji; Jacob Eisenstein", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b9", "title": "One vector is not enough: Entity-augmented distributed semantics for discourse relations", "year": "2015" }, { "authors": "Congcong Jiang; Tieyun Qian; Zhuang Chen; Kejian Tang; Shaohui Zhan; Tao Zhan; ; ", "journal": "", "ref_id": "b10", "title": "Generating pseudo connectives with mlms for implicit discourse relation recognition", "year": "2021" }, { "authors": "Feng Jiang; Yaxin Fan; Xiaomin Chu; Peifeng Li; Qiaoming Zhu", "journal": "", "ref_id": "b11", "title": "Not just classification: Recognizing implicit discourse relation on joint modeling of classification and generation", "year": "2021" }, { "authors": "Xiao Li; Yu Hong; Huibin Ruan; Zhen Huang", "journal": "", "ref_id": "b12", "title": "Using a penalty-based loss re-estimation method to improve implicit discourse relation classification", "year": "2020" }, { "authors": "Maria Liakata; Simon Dobnik; Shyamasree Saha; Colin Batchelor; Dietrich Rebholz Schuhmann", "journal": "", "ref_id": "b13", "title": "A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task", "year": "2013" }, { "authors": "Pengfei Liu; Weizhe Yuan; Jinlan Fu; Zhengbao Jiang; Hiroaki Hayashi; Graham Neubig", "journal": "", "ref_id": "b14", "title": "Pretrain, prompt, and predict: A systematic survey of prompting methods in natural language processing", "year": "2021" }, { "authors": "Xin Liu; Jiefu Ou; Yangqiu Song; Xin Jiang", "journal": "", "ref_id": "b15", "title": "On the importance of word and sentence representation learning in implicit discourse relation classification", "year": "2020" }, { "authors": "Yang Liu; Sujian Li", "journal": "", "ref_id": "b16", "title": "Recognizing implicit discourse relations via repeated reading: Neural networks with multi-level attention", "year": "2016" }, { "authors": "Yinhan Liu; Myle Ott; Naman Goyal; Jingfei Du; Mandar Joshi; Danqi Chen; Omer Levy; Mike Lewis; Luke Zettlemoyer; Veselin Stoyanov", "journal": "", "ref_id": "b17", "title": "Roberta: A robustly optimized BERT pretraining approach", "year": "2019" }, { "authors": "Ilya Loshchilov; Frank Hutter", "journal": "", "ref_id": "b18", "title": "Decoupled weight decay regularization", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "Journal of Machine Learning Research", "ref_id": "b19", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "year": "2020" }, { "authors": "Huibin Ruan; Yu Hong; Yang Xu; Zhen Huang; Guodong Zhou; Min Zhang", "journal": "", "ref_id": "b20", "title": "Interactivelypropagative attention learning for implicit discourse relation recognition", "year": "2020" }, { "authors": "Attapol Rutherford; Vera Demberg; Nianwen Xue", "journal": "", "ref_id": "b21", "title": "A systematic study of neural discourse models for implicit discourse relation", "year": "2017" }, { "authors": "Ronald Seoh; Ian Birle; Mrinal Tak; Haw-Shiuan Chang; Brian Pinette; Alfred Hough", "journal": "", "ref_id": "b22", "title": "Open aspect target sentiment classification with natural language prompts", "year": "2021" }, { "authors": "Chengyu Wang; Jianing Wang; Minghui Qiu; Jun Huang; Ming Gao", "journal": "", "ref_id": "b23", "title": "Transprompt: Towards an automatic transferable prompting framework for few-shot text classification", "year": "2021" }, { "authors": "Bonnie Webber; Rashmi Prasad; Alan Lee; Aravind Joshi", "journal": "Philadelphia, University of Pennsylvania", "ref_id": "b24", "title": "The penn discourse treebank 3.0 annotation manual", "year": "2019" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Joe Davison; Sam Shleifer; Clara Patrick Von Platen; Yacine Ma; Julien Jernite; Canwen Plu; Teven Xu; Sylvain Le Scao; Mariama Gugger; Quentin Drame; Alexander M Lhoest; Rush", "journal": "", "ref_id": "b25", "title": "Transformers: State-of-the-art natural language processing", "year": "2020" }, { "authors": "Wei Xiang; Bang Wang", "journal": "ACM Computing Surveys", "ref_id": "b26", "title": "A survey of implicit discourse relation recognition", "year": "2023" }, { "authors": "Wei Xiang; Bang Wang; Lu Dai; Yijun Mo; ; ", "journal": "", "ref_id": "b27", "title": "Encoding and fusing semantic connection and linguistic evidence for implicit discourse relation recognition", "year": "2022" }, { "authors": "Wei Xiang; Zhenglin Wang; Lu Dai; Bang Wang", "journal": "", "ref_id": "b28", "title": "ConnPrompt: Connective-cloze prompt learning for implicit discourse relation recognition", "year": "2022" }, { "authors": "Biao Zhang; Jinsong Su; Deyi Xiong; Yaojie Lu; Hong Duan; Junfeng Yao", "journal": "", "ref_id": "b29", "title": "Shallow convolutional neural network for implicit discourse relation recognition", "year": "2015" }, { "authors": "Yingxue Zhang; Fandong Meng; Li Peng; Jian Ping; Jie Zhou", "journal": "", "ref_id": "b30", "title": "Context tracking network: Graph-based context modeling for implicit discourse relation recognition", "year": "2021" }, { "authors": "Peng Zhou; Wei Shi; Jun Tian; Zhenyu Qi; Bingchen Li; Hongwei Hao; Bo Xu", "journal": "", "ref_id": "b31", "title": "Attention-based bidirectional long short-term memory networks for relation classification", "year": "2016" } ]
[ { "formula_coordinates": [ 4, 98.8, 690.01, 190.34, 32.29 ], "formula_id": "formula_0", "formula_text": "g c = sigmoid(W c h SSP c + U c h CEP c ), (1) hc = g c h SSP c + (1 -g c ) h CEP c ,(2)" }, { "formula_coordinates": [ 4, 101.74, 734.52, 159.18, 12.58 ], "formula_id": "formula_1", "formula_text": "W c ∈ R d h ×d h , U c ∈ R d h ×d h are" }, { "formula_coordinates": [ 4, 332.71, 321.59, 191.7, 32.81 ], "formula_id": "formula_2", "formula_text": "g m = sigmoid(W m h DRP m + U m hc ), (3) hm = g m h DRP m + (1 -g m ) hc ,(4)" }, { "formula_coordinates": [ 4, 306.14, 364.63, 220.07, 25.31 ], "formula_id": "formula_3", "formula_text": "W m ∈ R d h ×d h , U m ∈ R d h ×d h are learn- able parameters." }, { "formula_coordinates": [ 4, 338.86, 470.76, 152.83, 10.81 ], "formula_id": "formula_4", "formula_text": "P D ([MASK] DRP = v d ∈ V | T (x))." }, { "formula_coordinates": [ 5, 108.81, 252.91, 180.32, 29.15 ], "formula_id": "formula_5", "formula_text": "P (v ∈ V | T (x)) = e pv i n j=1 e pv j .(6)" }, { "formula_coordinates": [ 5, 83.11, 468.34, 201.78, 33.98 ], "formula_id": "formula_6", "formula_text": "J(θ) = - 1 K K k=1 y (k) log(ŷ (k) ) + λ θ 2 , (7" }, { "formula_coordinates": [ 5, 284.89, 480.32, 4.24, 9.46 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 129.81, 623.81, 159.32, 10.77 ], "formula_id": "formula_8", "formula_text": "L = L d + βL s + γL c ,(8)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Input images", "publication_ref": [], "table_ref": [], "text": "Segmentation results " }, { "figure_ref": [], "heading": "Groundtruth", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "INTRODUCTION", "publication_ref": [ "b3", "b14", "b36", "b37", "b38", "b22", "b20", "b14", "b38", "b36", "b37", "b23", "b3" ], "table_ref": [], "text": "In recent years, semantic segmentation has achieved impressive performance by using deep neural networks. However, conventional semantic segmentation models generally have a fixed output space. Therefore, when encountering new categories, they need to be re-trained from scratch to update their segmentation capability. Moreover, these models require large-scale pixel-level labeled data, which are expensive and laborious to obtain. These issues limit their applicability in open-ended real-world scenarios. In this context, Cermelli et al. [4] proposed the Incremental Few-shot Semantic Segmentation (IFSS) task. It aims to effectively extend a semantic segmentation model to new categories using a few labeled samples, while maintaining its segmentation capability on previously learned old ones. In this way, the extendibility and flexibility of the model can be improved, which is critical for many real-world applications, such as autonomous driving and human-machine interaction.\nMore specifically, in the IFSS task, a base set with relatively more training samples is first provided to initialize the learnable parameters of a semantic segmentation model. Then, a few pixel-level annotated training samples of novel categories are given, helping incrementally expand the segmentation capability of the model to the encountered novel ones. However, the IFSS model is prone to fall into the semantic-aliasing issue due to data imbalance between base and novel classes. As shown in Figure 1, the semantic confusion between the base class \"dog\" and the encountered novel category \"cat\" misleads the model to draw the incorrect segmentation results, making the model performance unsatisfactory. Recently, semantic information has been successfully introduced in the few-shot classification task [15,[37][38][39], aiming to make feature embeddings more representative, e.g., GloVe [23] or word2vec [21] was employed in [15,39] to provide prior semantic information while [37,38] additionally considered the semantic guidance of CLIP [24].\nInspired by these methods, we propose to suppress the semanticaliasing issue in IFSS by fully considering the guidance of visual semantics. Therefore, we propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method in this paper, which is shown in Figure 2. On one hand, we propose to conduct Semantic Relation Alignment (SRA) in the base step, aiming to semantically align base class representations in latent semantic space. Therefore, the embeddings of base classes are constrained to have relatively low semantic correlations to categories that are different from them. Moreover, the cross-entropy loss is employed during this process to measure discrepancy between segmentation results and groundtruth label maps. As a result, the model is trained to segment base classes, while being aware of their semantic information. Based on the aligned base classes, Semantic-Guided Adaptation (SGA) is employed to incrementally adapt the model to novel classes. It aims to ensure affinities between visual and semantic embeddings of novel categories, thereby making the feature representations be consistent with their semantic information. By considering the semantic information of both the base and the novel classes, the semantic-aliasing issue can be alleviated. We evaluate our method on the public semantic segmentation datasets PASCAL VOC 2012 and COCO, following the cross validation used in [4]. On both these datasets, our method presents competitive performance.\nAll-in-all, the contributions of this paper can be summarized below:\n• In this paper, we propose to suppress the semantic-aliasing issue in IFSS by fully considering the guidance of semantic information, thereby making segmentation results more accurate. To realize this goal, we accordingly propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method. • We propose to conduct Semantic Relation Alignment (SRA) in the base step, aiming to semantically align the representations of base categories. Therefore, the base class embeddings are guided to have relatively low semantic correlations to categories that are different from them. • Based on the aligned base classes, we propose to conduct Semantic-Guided Adaptation (SGA) during the incremental learning stage, guiding the embeddings of novel classes to be consistent with their semantic information. In this way, the semantic aliasing between the base and the encountered novel categories can be alleviated." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [], "table_ref": [], "text": "In this section, we review methods that are relevant to our research. We first briefly introduce typical methods of semantic segmentation, few-shot learning, and incremental learning in section 2.1, section 2.2, and section 2.3. Then, we review related incremental few-shot semantic segmentation methods in section 2.4, and introduce their differences to our work." }, { "figure_ref": [], "heading": "Semantic Segmentation", "publication_ref": [ "b18", "b18", "b40", "b4", "b5", "b4", "b5", "b18", "b39", "b12", "b29", "b35", "b41", "b6", "b31" ], "table_ref": [], "text": "Semantic segmentation, a pixel-level image recognition technique, has achieved remarkable progress in recent years with development of deep learning. [19] is a typical deep-learning-based semantic segmentation method that uses the fully convolutional layer to realize efficient end-to-end dense predications for input images.\nInspired by [19], many semantic segmentation models have been proposed. Zhao et al. [41] further introduced the pyramid pooling module, aiming to fully aggregate global context information of visual scenes. Chen et al. [5,6] proposed to aggregate multi-scale context information using the atrous convolution, so as to make segmentation results more accurate. Based on [5,6,19], Zhang et al. [40] learned an inherent dictionary to aggregate semantic context information of a whole dataset, which help the model understand visual scenes in a more global way. Huang et al. [13] enhanced a semantic segmentation model with the proposed criss-cross attention layer. Therefore, sufficient context information is aggregated for each pixel, while the model is maintained with high efficiency.\nRecently, the methods [30,36,42] have successfully built semantic segmentation models upon the transformer [7,32], thereby further boosting visual representations of input images." }, { "figure_ref": [], "heading": "Few-shot Learning", "publication_ref": [ "b32", "b28", "b25", "b14", "b38", "b22", "b20", "b36", "b37", "b23" ], "table_ref": [], "text": "Few-shot learning aims to quickly transfer models to novel unseen categories according to only one or a few training instances, thereby reducing expenses cost on data preparation. Currently, few-shot learning methods are mainly based on metric learning, aiming at learning an effective metric classifier from given fewshot training instances. For example, Vinyals et al. [33] proposed a matching network that classifies query samples by measuring instance-wise consine similarity between queries and supports. Snell et al. [29] advanced the matching network by further introducing prototypical representations, thereby constructing global category representations for support samples. Santoro et al. [26] proposed a memory-augmented neural network that utilizes stored memory to make query categorization more accurate. Li et al. [15] and Zhang et al. [39] proposed to additionally consider semantic attributes encoded by GloVe [23] or word2vec [21], so as to further improve visual representations of input images. Besides, Xu et al. [37] and Yang et al. [38] proposed to further exploit the semantic guidance from CLIP [24], as they found semantic information encoded by CLIP is more effective in learning representative feature embeddings of visual scenes." }, { "figure_ref": [], "heading": "Incremental Learning", "publication_ref": [ "b11", "b15", "b24", "b1", "b13", "b1", "b13", "b33", "b17" ], "table_ref": [], "text": "Incremental learning aims to effectively transfer a model to new categories, while maintaining its previously learned old knowledge as much as possible. Knowledge distillation [12] has shown its advantages in overcoming a catastrophic forgetting problem [16]. Aiming to incorporate the knowledge distillation with the data-replay strategy, Rebuffi et al. [25] introduced the exemplar-based knowledge distillation at the cost of extra small storage expenses. Castro et al. [2] and Kang et al. [14] pointed out that it is necessary to achieve a good balance between old class knowledge maintenance and new class adaptation. Therefore, the cross-distillation loss and the balanced finetune strategy were utilized in [2], while [14] employed the adaptive feature consolidation strategy to restrict the representation drift of critical old class feature embeddings. Recently, Wang et al. [34] advanced the incremental learning model with the gradient boosting strategy, so as to guide the model to effectively learn its residuals to the target one. Liu et al. [18] further enhanced the data-replay strategy by designing the reinforced memory management mechanism. It dynamically adjusts the stored memory information in each incremental step, thereby helping to overcome the sub-optimal memory allocation problem." }, { "figure_ref": [], "heading": "Incremental Few-shot Semantic Segmentation", "publication_ref": [ "b3", "b3", "b26", "b3" ], "table_ref": [], "text": "Incremental Few-shot Semantic Segmentation (IFSS), proposed by Cermelli et al. [4], aims at enduing semantic segmentation models with the capability of few-shot incremental learning, thereby making them more suitable to be deployed in open-ended real-world applications. Aiming to address this task, Cermelli et al. [4] proposed the prototype-based knowledge distillation. It relieves the catastrophic forgetting issue by constraining the invariance of old class segmentation scores. Moreover, the overfitting to novel categories is suppressed by boosting the consistency between old and updated models. Shi et al. [27] proposed to build hyper-class feature representations, thereby helping to relieve the representation drift during the incremental learning. Furthermore, they adopted a different evaluation protocol than the one employed in [4]. Despite the success achieved by these methods, the guidance of visual semantics is ignored in them, which has been proven to play an important role in low-data tasks. Therefore, different from these works, in this paper, we study on how to exploit prior semantic information in IFSS to make segmentation results more accurate." }, { "figure_ref": [], "heading": "METHODOLOGY", "publication_ref": [], "table_ref": [], "text": "We elaborate on our proposed method in this section. We first give the preliminaries in Section 3.1. Then, the details about our Semantic-guided Relation Alignment and Adaptation (SRAA) are provided in Section 3.2." }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "The semantic space of the IFSS model is expanded over time. We define C 𝑡 as the categories encountered at the step 𝑡. Therefore, after learning in this step, the semantic space of the model is expanded to S 𝑡 = S 𝑡 -1 C 𝑡 , where S 𝑡 -1 = 𝑡 -1 𝑖=0 C 𝑖 denotes the semantic space learned after the step 𝑡 -1. In each step, the dataset\nD 𝑡 = 𝑿 𝑡 𝑛 , 𝒀 𝑡 𝑛 𝑁 𝑡\n𝑛=1 is provided to update learnable parameters, in which 𝑿 𝑡 𝑛 denotes the 𝑛-𝑡ℎ training image and 𝒀 𝑡 𝑛 is the label map of 𝑿 𝑡 𝑛 . In the IFSS task, the base dataset D 0 is provided in the base step (i.e., 𝑡 = 0) to initialize the parameters of the model, which contains relatively more training samples. After the base step, the dataset D 𝑡 is only in the few-shot setting, i.e., each category contains one or a few labeled training instances, which satisfies the condition 𝑁 𝑡 << 𝑁 0 for ∀𝑡 >= 1. For brevity, in this paper, the categories given in the base step are called base categories, while the categories encountered in the incremental learning stage are termed novel categories. In the step 𝑡, the model only has access to the dataset D 𝑡 , and the datasets in the previous steps are unavailable." }, { "figure_ref": [ "fig_2" ], "heading": "Semantic-guided Relation Alignment and Adaptation", "publication_ref": [], "table_ref": [], "text": "As described in Figure 2, our method consists of the two components that are Semantic Relation Alignment (SRA) and Semantic-Guided Adaptation (SGA). These two components are incorporated together to help the model be aware of semantic information of base and novel categories, thereby alleviating the semantic-aliasing issue between them. We elaborate on these two components in the following." }, { "figure_ref": [], "heading": "Semantic Relation", "publication_ref": [], "table_ref": [], "text": "Alignment. The goal of our SRA is to semantically align base classes in latent semantic space and guide the model to generate semantic-consistent visual representations. We first extract the visual embeddings 𝑭 0\n𝑛 𝑁 b 𝑛=1 from the input images 𝑿 0 𝑛 𝑁 b 𝑛=1 ⊂ D 0 using the visual encoder 𝑓 v (•|𝚯 0 v ), 𝑭 0 𝑛 𝑁 b 𝑛=1 = 𝑓 v ( 𝑿 0 𝑛 𝑁 b 𝑛=1 |𝚯 0 v )(1)\nwhere 𝚯 0 v indicates the learnable parameters of the visual encoder in the base step, and 𝑁 b denotes the number of images in a mini-batch. Meanwhile, the semantic encoder \n𝑓 s (•|𝚯 0 s ) encodes the semantic vectors 𝒔 0 𝑐 |C b | 𝑐=1 of the categories C b involved in the inputs, 𝒔 0 𝑐 |C b | 𝑐=1 = 𝑓 s (C b |𝚯 0 s )(2)\n𝒈 0 𝑐 = 𝑁 b 𝑛=1 𝐻 𝑖=1 𝑊 𝑗=1 (𝑭 0 𝑛, [𝑖,𝑗 ] * 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ] ) 𝑁 b 𝑛=1 𝐻 𝑖=1 𝑊 𝑗=1 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ](3)\n𝑠.𝑡 . 𝑰 0,𝑐 𝑛 = 𝒀 0 𝑛 == 𝑐 where 𝑭 0 𝑛 ∈ R 𝐻 ×𝑊 ×𝐷 denotes the feature map encoded by the visual encoder 𝑓 v (•|𝚯 0 v ), and 𝑰 0,𝑐 𝑛 ∈ R 𝐻 ×𝑊 indicates the binary mask about the category 𝑐. Of note, if the pixel at the position [𝑖, 𝑗] of 𝑿 0 𝑛 belongs to the category 𝑐, 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ] = 1; otherwise, 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ] = 0. Aiming to align base class features with their semantics, the relation alignment loss L align is employed. It jointly considers the correlations between visual and semantic embeddings that are paired and unpaired,\nL align = |C b | ∑︁ 𝑐 1 =1 |C b | ∑︁ 𝑐 2 =1,𝑐 2 ≠𝑐 1 𝒈 0 𝑐 1 * 𝒔 0 𝑐 2 |C b | × (|C b | -1) Unpaired - |C b | ∑︁ 𝑐=1 𝒈 0 𝑐 * 𝒔 0 𝑐 |C b | Paired .(4)\nThe paired visual-semantic embeddings indicate that the visual vector 𝒈 0 𝑐 and the semantic vector 𝒔 0 𝑐 belong to the same class, and thus 𝒈 0 𝑐 should be aligned to match 𝒔 0 𝑐 in latent space. On the contrary, if the visual embeddings 𝒈 0 𝑐 1 and the semantic embeddings 𝒔 0 𝑐 2 (𝑐 2 ≠ 𝑐 1 ) are unpaired, the correlations between them should be suppressed to ensure representation discrimination between categories. We minimize the relation alignment loss L align w.r.t. the learnable parameters of the visual encoder. Thereby, the model is guided to generate semantic-consistent visual representations, and the base classes embeddings are aligned with their semantic information.\nMoreover, the segmentation results Ȳ 0\n𝑛 ∈ R 𝐻 ×𝑊 × |C 0 | 𝑁 b 𝑛=1\nwith respect to the semantic space C 0 are drawn by using the base class\nprototypical classifier P 0 = 𝒑 0 𝑐 |C 0 | 𝑐=1 , which is shown below: Ȳ 0 𝑛, [𝑖,𝑗,𝑐 ] = 𝑃 (𝑐 |𝑿 0 𝑛, [𝑖,𝑗 ] , P 0 , 𝚯 0 v ) (5) = exp(𝑆𝑖𝑚(𝑭 0 𝑛, [𝑖,𝑗 ] , 𝒑 0 𝑐 )) 𝑐 ′ ∈C 0 exp(𝑆𝑖𝑚(𝑭 0 𝑛, [𝑖,𝑗 ] , 𝒑 0 𝑐 ′ ))\n.\nIn the above equation, 𝑃 (𝑐 |𝑿 0 𝑛, [𝑖,𝑗 ] , P 0 , 𝚯 0 v ) indicates the probability that the pixel 𝑿 0 𝑛, [𝑖,𝑗 ] is inferred as the category 𝑐 according to P 0 and 𝚯 0 v . 𝑆𝑖𝑚(•, •) is a similarity metric function, which aims to measures consine similarity between feature embeddings. The cross-entropy loss L ce is used to measure the discrepancy between the segmentation results and the groundtruth labels of the inputs,\nL ce = 1 𝑁 b 𝑁 b ∑︁ 𝑛=1 𝐶𝐸 ( Ȳ 0 𝑛 , 𝒀 0 𝑛 ).(6)\nBy jointly minimizing the relation alignment loss L align and the cross-entropy loss L ce during the training process, the model learns to segment base categories while being aware of their semantic information.\n3.2.2 Semantic-Guided Adaptation. After the base step, the model is incrementally extended to novel classes. We hope the model can also be aware of the semantic information of encountered novel categories. Therefore, we propose to ensure affinities between visual and semantic embeddings of encountered novel ones. Taking the step 𝑡 as an example, we first extract the visual embeddings from the images given in the few-shot dataset D 𝑡 using the visual encoder\n𝑓 v (•|𝚯 𝑡 v ), 𝑭 𝑡 𝑛 𝑁 𝑡 𝑛=1 = 𝑓 v ( 𝑿 𝑡 𝑛 𝑁 𝑡 𝑛=1 |𝚯 𝑡 v ).(7)\nIn the equation, 𝚯 𝑡 v indicates the parameters of the visual encoder in the step 𝑡. Meanwhile, the semantic encoder 𝑓 s (•|𝚯 𝑡 s ) encodes the semantic embeddings of the encountered new categories C 𝑡 ,\n𝒔 𝑡 𝑐 |C 𝑡 | 𝑐=1 = 𝑓 s (C 𝑡 |𝚯 𝑡 s ).(8)\nAfterwards, these semantic vectors are used to imprint the weights of the semantic prototypes P s,𝑡 = 𝒑 indicates the affinity between the visual features at the position [𝑖, 𝑗] and the semantic embeddings of the category 𝑐 ∈ C 𝑡 . The dense visual-semantic affinities reflect the relation between the visual embeddings and the semantics of the encountered novel classes.\nMoreover, the prototypes\nP 𝑡 -1 = 𝒑 𝑡 -1 𝑖 | 𝑡 -1 𝑗 =0 C 𝑗 | 𝑖=1\nlearned in the previous steps are utilized to compute the affinities of the current feature maps to the old categories Ã𝑡\n𝑛 ∈ R 𝐻 ×𝑊 × | 𝑡 -1 𝑗 =0 C 𝑗 | 𝑁 𝑡 𝑛=1 , Ã𝑡 𝑛, [𝑖,𝑗,𝑐 ] = 𝑭 𝑡 𝑛, [𝑖,𝑗 ] * 𝒑 𝑡 -1 𝑐 |𝑭 𝑡 𝑛, [𝑖,𝑗 ] | * |𝒑 𝑡 -1 𝑐 | , 𝑠.𝑡 ., 0 < 𝑐 <= | 𝑡 -1 𝑗=0 C 𝑗 |.(10)\nThe affinity maps Ã𝑡 𝑛 and Ā𝑡 𝑛 are concatenated together for each sample\n𝑨 𝑡 𝑛 = Ã𝑡 𝑛 ⊕ Ā𝑡 𝑛 ,(11)\nthereby producing\n𝑨 𝑡 𝑛 ∈ R 𝐻 ×𝑊 × | 𝑡 𝑗 =0 C 𝑗 | 𝑁 𝑡 𝑛=1 .\nWe use the cross entropy to constrain the correctness of the affinity maps\n𝑨 𝑡 𝑛 𝑁 𝑡 𝑛=1 L aff = 1 𝑁 𝑡 𝑁 𝑡 ∑︁ 𝑛=1 𝐶𝐸 (𝑨 𝑡 𝑛 , 𝒀 𝑡 𝑛 ),(12)\nso as to guide the visual embeddings of the novel class images to have high correlations to their visual semantics while suppressing the affinities to the old classes. As a result, the feature embeddings of the novel classes can be consistent with their semantic information. Moreover, knowledge distillation is adopted to suppress the overfitting to encountered novel categories\nL kd = 1 𝑁 𝑡 𝑁 𝑡 ∑︁ 𝑛=1 𝐶𝐸 (𝑨 𝑡 𝑛 , 𝑨 𝑡 -1 𝑛 ),(13)\nwhere 𝑨 𝑡 -1 𝑛 denotes the affinities drawn by the model after being trained in the step 𝑡 -1. The joint minimization of L aff and L kd w.r.t. 𝚯 𝑡 s , P 𝑡 -1 , and P s,𝑡 guides the model to be aware of the visual semantics of the encountered novel categories. Meanwhile, the prototypes P 𝑡 -1 and P s,𝑡 are optimized to be consistent to reflect the relationships between images and categories, so as to help accurately segment out the objects that belong to the encountered categories from images. After learning in the step 𝑡, the prototypes are updates: P 𝑡 ← P𝑡-1 Ps,𝑡 , where P𝑡-1 and Ps,𝑡 indicate the prototypes P 𝑡 -1 and P s,𝑡 after being optimized in the current step. The updated prototypes and visual encoder are employed to draw segmentation results for all the encountered classes, as same as the process shown in Eq. 5." }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "Experiments are provided in this section to validate our proposed method. We first introduce the datasets in Section 4.1 and give our implementation details in Section 4.2. Then, we report the main experimental results in Section 4.3, and conduct the ablation study in Section 4.4. " }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b7", "b9", "b0", "b16", "b3", "b3" ], "table_ref": [ "tab_1", "tab_2" ], "text": "We evaluate the proposed method on the two public semantic segmentation datasets that are PASCAL VOC 2012 [8,10] and COCO [1,17]. The PASCAL VOC 2012 dataset consists of 10582 training images and 1449 test images, which collected from 20 different categories. Following the previous work [4], we divide these 20 categories into four folds and each fold includes five categories, which is summarized in Table 1. In addition, on the COCO dataset, 80 categories are used to evaluate the performance of the model, including about 110k training samples and 5k test samples. As presented in Table 2, the 80 categories of this dataset are split into four folds as well, which is the same as [4] does. For the cross validation on both these datasets, we use the categories of three folds to build the base set, while the categories of the rest one fold are used for testing." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b3", "b36", "b37", "b36", "b37", "b3", "b10" ], "table_ref": [], "text": "Our codes are implemented using PyTorch and run on the tesla V100 GPU card. In the experiments, the SGD optimizer is adopted to optimize the learnable parameters of our model based on the poly learning rate. For the experiments on the PASCAL VOC 2012 and the COCO dataset, the training details are slightly different. Specifically, on the PASCAL VOC 2012 dataset, we set the number of the epochs as 30 on the base step and 400 during the incremental learning. Also, in each phase, the initial learning rate of the optimizer is set as 0.01. On the COCO dataset, we train the model on the base set for 50 epochs with the initial poly learning rate 0.02. Moreover, during the incremental learning stage, the epochs are set as 400, and the initial learning rate is set as 0.01. Following the previous work [4], we evaluate our method in both the single few-shot step setting and the multiple few-shot step setting based on the cross validation protocol.\nTable 3: The experimental results on the PASCAL VOC 2012 dataset. In the table, \"FT\" indicates directly finetuning the model on novel classes using the cross-entropy loss, and \"HM\" indicates the harmonic mean of the mIoU on base and novel classes. Also, the first-place and the second-place result in each column are highlighted in bold font and underscore respectively.\n(a) The results under the single few-shot step setting. The single few-shot step setting indicates that all novel categories are given at once in an incremental step, while the multiple fewshot step setting means novel categories are progressively given in multiple steps. We employ the mean Intersection-over-Union (mIoU) metric in our experiments to evaluate the performance of the method. Besides, we build our semantic encoder using CLIP, due to its powerful capability in encoding semantic information [37,38]. Following the previous methods [37,38], we freeze the parameters of the semantic encoder during the training process, i.e., 𝚯 𝑡 s = 𝚯 0 s for ∀𝑡 >= 1. Meanwhile, as same as [4], we build our visual encoder by using resnet101 [11]." }, { "figure_ref": [ "fig_3" ], "heading": "Main Results", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "The results of our method on the PASCAL VOC 2012 and the COCO dataset are summarized in Table 3 and Table 4 respectively. According to these results, we have the following observations. On the PASCAL VOC 2012 dataset, our method achieves higher mIoU on both base and novel categories than that of FT, RT, AMP, SPN, and PIFS under the single few-shot step setting. Despite the performance of MIB, ILT, LWF, DWI, and WI on base categories is better than that of ours in some cases, our method obviously achieves higher mIoU on novel categories. For example, on the 2-shot task, the novel class mIoU of our method is 14.7%, 4.3%, 8.5%, 7.6%, and 8.2% higher than that of MIB, ILT, LWF, DWI, and WI respectively. In the meantime, our method shows its superiority on the PAS-CAL VOC 2012 dataset under the multiple few-shot step setting as well. For example, on all the 1-shot, the 2-shot, and the 5-shot task, the novel class mIoU of PIFS is lower than that of our proposed method. Similar results can also be found on the experiments of the COCO dataset. For example, in the single few-shot step setting, the performance of our method is better than that of PIFS and AMP on both base and novel categories. Although the base class mIoU of our method is lower than that of the several compared methods, it consistently shows higher mIoU on encountered novel categories, e.g., our method's novel class mIoU is 5.8%, 4.9%, 5.5%, 2.6%, 3.5%, 1.8%, and 2.4% higher than that of MIB, ILT, LWF, SPN, RT, DWI, and WI on the 1-shot task. Moreover, under the multiple few-shot step setting, the novel class mIoU of our proposed method is higher than that of all the compared methods in the table. For example, our method's novel class mIoU is 0.9%, 2.1%, and 1.4% higher than that of the second-place method PIFS on the 1-shot, the 2-shot, and the 5-shot task. In Figure 3, we give our step-by-step" }, { "figure_ref": [], "heading": "Few-shot training samples", "publication_ref": [], "table_ref": [], "text": "Test samples Results on the SFS setting Results on the MFS setting Groundtruth\nStep 1\nStep 2\nStep 3\nStep 4\nStep 5" }, { "figure_ref": [], "heading": "bus car chair cat cow", "publication_ref": [], "table_ref": [], "text": "Step 1 achieve the promising semantic segmentation results. Moreover, when encountering new classes, it can still maintain high effectiveness in segmenting the categories learned in the previous few-shot learning steps." }, { "figure_ref": [ "fig_5", "fig_7" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In this subsection, we first study the influence of SGA and SRA on accuracy in Figure 4. \"w/o SGA\" indicates that the semantic guidance is not considered during the adaptation procedure. Thus, the prototypes about novel categories are imprinted by the mean visual embeddings of given samples. \"w/o SRA\" indicates that SRA is not employed, and base class embeddings are not aligned with their semantics. On one hand, the cooperation of our SRA and SGA (i.e., \"Full\") can achieve higher mIoU than that of the baseline model (\"w/o SGA and SRA\") on both base and novel classes under the two different settings, which demonstrates that the appropriate use of prior semantic information can make segmentation results more accurate. On the other hand, the removal of SGA or SRA (i.e., \"w/o SGA\" or \"w/o SRA\") leads to an obvious performance drop, thereby validating the importance of these two components. The results also indicate that semantic information should be considered in Figure 7: The visualized analysis for the influence of our SGA. In the figure, 𝑨 dog , 𝑨 cat , 𝑨 bus , or 𝑨 train denotes the affinity map of an image to the category \"dog\", \"cat\", \"bus\", or \"train\".\n𝑨 dog 𝑨 cat or 𝑨 bus 𝑨 train indicates the ratio map between 𝑨 dog and 𝑨 cat or 𝑨 bus and 𝑨 train both the base and the incremental learning stage. Otherwise, the training inconsistency between phases will reduce segmentation accuracy.\nThen, in Figure 5, we visualize the values of the paired and the unpaired term in the relation alignment loss L align during the training process. The results indicate that L align can be optimized stably. On one hand, with the epoch increases, the paired term is maximized progressively, thereby constraining that visual and semantic embeddings belonging to the same category have relatively high correlations. On the other hand, the minimization of the unpaired term suppresses the similarity between visual and semantic embeddings that are unpaired. As a result, the visual embeddings belonging to the same class are guided to have high semantic correlations, while the semantic correlations of different categories are limited. We also visualize the mean affinities between the visual and the semantic embeddings that are aligned by our method. The results in Figure 6 validate the effectiveness of our SRA again, e.g., SRA can obviously rectify visual embeddings to better match their semantic information.\nThe analysis for the influence of our SGA is provided in Figure 7. As can seen from the figure, without SGA, the sample about \"dog\" is incorrectly segmented as the class \"cat\" due to the incorrect affinity information, e.g., the affinity ratios 𝑨 dog 𝑨 cat have low values in the target regions. In contrast, the use of SGA can obviously boost the affinities to the target class, while suppressing the affinities to \"cat\". In this way, the segmentation results can be more accurate. For the instance \"bus\", the affinity ratios 𝑨 bus 𝑨 train show low values for a part of the target regions when not employing our SGA. Moreover, in the background areas, the affinity ratios have the incorrect high values. By leveraging the guidance of visual semantics, our method can rectify these incorrect affinities, thereby making segmentation results more accurate. Finally, in Figure 8, we also provide the qualitative analysis for the influence of our SRAA method on final segmentation results. The experimental results consistently validate the superiority of our method as well." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose to alleviate the semantic-aliasing issue in IFSS by conducting Semantic-guided Relation Alignment and Adaptation (SRAA). On one hand, we introduce Semantic Relation Alignment (SRA) in the base step, aiming to semantically align the representations of base categories and guide the model to generate semantic-consistent feature representations. On the other hand, we employ Semantic-Guided Adaptation (SGA) to incrementally adapt the model to novel classes. It ensures the visual-semantic affinities of encountered novel categories, so as to make their feature embeddings be consistent with the corresponding semantic information. By considering the semantic information of both the base and the novel categories, the semantic-aliasing issue can be relieved. Currently, it is still very challenging to incrementally achieve accurate segmentation results for objects with complex and varied boundaries in IFSS. In the future, we plan to overcome this problem by fully considering the fine-grained information of local features." } ]
Incremental few-shot semantic segmentation (IFSS) aims to incrementally extend a semantic segmentation model to novel classes according to only a few pixel-level annotated data, while preserving its segmentation capability on previously learned base categories. This task faces a severe semantic-aliasing issue between base and novel classes due to data imbalance, which makes segmentation results unsatisfactory. To alleviate this issue, we propose the Semantic-guided Relation Alignment and Adaptation (SRAA) method that fully considers the guidance of prior semantic information. Specifically, we first conduct Semantic Relation Alignment (SRA) in the base step, so as to semantically align base class representations to their semantics. As a result, the embeddings of base classes are constrained to have relatively low semantic correlations to categories that are different from them. Afterwards, based on the semantically aligned base categories, Semantic-Guided Adaptation (SGA) is employed during the incremental learning stage. It aims to ensure affinities between visual and semantic embeddings of encountered novel categories, thereby making the feature representations be consistent with their semantic information. In this way, the semantic-aliasing issue can be suppressed. We evaluate our model on the PASCAL VOC 2012 and the COCO dataset. The experimental results on both these two datasets exhibit its competitive performance, which demonstrates the superiority of our method.
Advancing Incremental Few-shot Semantic Segmentation via Semantic-guided Relation Alignment and Adaptation
[ { "figure_caption": "Figure 1 :1Figure 1: The typical examples for the semantic-aliasing issue in IFSS, which are obtained from the PASCAL VOC 2012 dataset on the 1-shot task. \"A→B\" indicates that regions belonging to A are incorrectly segmented as B.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "in which 𝚯 0 s represents the parameters of the semantic encoder, and 𝒔 0 𝑐 denotes the encoded semantic vector about the class 𝑐 ∈ C b . After that, the global visual representations 𝒈 0 𝑐 |C b | 𝑐=1 are aggregated for each of the categories C b ,", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: The illustration for our Semantic-guided Relation Alignment and Adaptation (SRAA) method. SRA aims to semantically align the representations of base classes in latent semantic space, and SGA aims at ensuring the visual-semantic affinities of encountered novel categories. Also, \"𝑓 s (•|•)\" and \"𝑓 v (•|•)\" represent a semantic encoder and a visual encoder respectively.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The visualization for the step-by-step segmentation results of our method on both the Single Few-shot Step (SFS) setting and the Multiple Few-shot Step (MFS) setting according to a labeled sample per category. The model is progressively extended to the novel classes in the MFS setting. In the SFS setting, the novel classes are given at once in an incremental step.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :Figure 5 :45Figure 4: The ablation study for our method on both the Single Few-shot Step (SFS) setting and the Multiple Few-shot Step (MFS) setting, which is conducted on the 1-shot task of the PASCAL VOC 2012 dataset.", "figure_data": "", "figure_id": "fig_4", "figure_label": "45", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure6: The mean affinities between the visual and the semantic embeddings with or without being aligned by our SRA, which is conducted on the COCO dataset. The values in the diagonal are the affinities between the visual and the semantic embeddings that belong to the same class, while the others are the affinities between them that are unpaired.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure8: The visualized analysis for the influence of our method on alleviating the semantic-aliasing issue in both the \"base→novel\" and the \"novel→base\" scenario, which is conducted on the PASCAL VOC 2012 dataset under the 1shot setting. Notice that \"A→B\" indicates regions belonging to A are incorrectly segmented as B.", "figure_data": "", "figure_id": "fig_7", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "𝑐=1 , which are used to guide the finetune process on the novel categories. Specifically, we first calculate the affinities Ā𝑡 𝑛 ∈ R 𝐻 ×𝑊 ×|C 𝑡 | 𝑁 𝑡 𝑛=1 between the visual embeddings of the given images and the semantics of the novel classes according to Eq. 9,Ā𝑡𝑛, [𝑖,𝑗,𝑐 ] =", "figure_data": "𝑭 𝑡 𝑛, [𝑖,𝑗 ] * 𝒑 s,𝑡 𝑐 |𝑭 𝑡 𝑛, [𝑖,𝑗 ] | * |𝒑 s,𝑡 𝑐 |, 𝑠.𝑡 ., 0 < 𝑐 <= |C 𝑡 |(9)where Ā𝑡 𝑛 denotes the dense visual-semantic affinities about thesample 𝑿 𝑡 𝑛 , and Ā𝑡 𝑛, [𝑖,𝑗,𝑐 ]", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The dataset split on the PASCAL VOC 2012 dataset.", "figure_data": "SplitCategories5-0aeroplane, bicycle, bird, boat, bottle5-1bus, car, cat, chair, cow5-2table, dog, horse, motorbike, person5-3plant, sheep, sofa, train, tv-monitor", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "The dataset split on the COCO dataset.", "figure_data": "SplitCategoriesperson, airplane, boat, parking meter, dog, elephant, refrigerator,20-0backpack, suitcase, sports ball, skateboard, wine glass, spoon,sandwich, hot dog, chair, dining table, mouse, microwave, scissorsbicycle, bus, traffic light, bench, horse, bear, umbrella, frisbee,20-1kite, surfboard, cup, bowl, orange, pizza,couch, toilet, remote,oven, book, teddy bearcar, train, fire hydrant, bird, sheep, zebra, handbag, skis,20-2baseball bat, tennis racket, fork, banana, broccoli, donut,potted plant, tv, keyboard,toaster, clock, hair driermotorcycle, truck, stop sign, cat, cow, giraffe, tie, snowboard,20-3baseball glove, bottle, knife, apple, carrot, cake, bed, laptop,cell phone, sink, vase, toothbrush", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "The results under the multiple few-shot step setting.", "figure_data": "1-shot2-shot5-shot(b) 1-shot2-shot5-shotmIoU (%)mIoU (%)mIoU (%)mIoU (%)mIoU (%)mIoU (%)Method base novel HM base novel HM base novel HMMethod base novel HM base novel HM base novel HMFT58.39.716.7 59.1 19.7 29.5 55.8 29.6 38.7FT47.23.97.2 53.54.48.1 58.77.713.6WI [22] 62.7 15.5 24.8 63.3 19.2 29.5 63.3 21.7 32.3WI [22] 66.6 16.1 25.9 66.6 19.8 30.5 66.6 21.9 33.0DWI [9] 64.3 15.4 24.8 64.8 19.8 30.4 64.9 23.5 34.5DWI [9] 67.2 16.3 26.2 67.5 21.6 32.7 67.6 25.4 36.9RT [31] 59.1 12.1 20.1 60.9 21.6 31.9 60.4 27.5 37.8RT [31] 49.25.810.4 36.04.98.6 45.1 10.0 16.4AMP [28] 57.5 16.7 25.8 54.4 18.8 27.9 51.9 18.9 27.7AMP [28] 58.6 14.5 23.2 58.4 16.3 25.5 57.1 17.2 26.4SPN [35] 59.8 16.3 25.6 60.8 26.3 36.7 58.4 33.4 42.5SPN [35] 49.88.113.9 56.4 10.4 17.6 61.6 16.3 25.8LWF [16] 61.5 10.7 18.2 63.6 18.9 29.2 59.7 30.9 40.8LWF [16] 42.13.36.2 51.63.97.3 59.87.513.4ILT [20] 64.3 13.6 22.5 64.2 23.1 34.0 61.4 32.0 42.1ILT [20] 43.73.36.1 52.24.48.1 59.07.913.9MIB [3] 61.05.29.7 63.5 12.7 21.1 65.0 28.1 39.3MIB [3] 43.92.64.9 51.92.14.0 60.95.810.5PIFS [4] 60.9 18.6 28.4 60.5 26.4 36.8 60.0 33.4 42.8PIFS [4] 64.1 16.9 26.7 65.2 23.7 34.8 64.5 27.5 38.6Ours65.2 19.1 29.5 62.7 27.4 38.1 63.8 36.7 46.6Ours66.4 18.8 29.3 65.1 26.4 37.6 64.3 28.7 39.7", "figure_id": "tab_3", "figure_label": "", "figure_type": "table" }, { "figure_caption": "The experimental results on the COCO dataset.", "figure_data": "(b) The results under the multiple few-shot step setting.1-shot2-shot5-shot1-shot2-shot5-shotmIoU (%)mIoU (%)mIoU (%)mIoU (%)mIoU (%)mIoU (%)Method base novel HM base novel HM base novel HMMethod base novel HM base novel HM base novel HMFT41.24.17.5 41.57.312.4 41.6 12.3 19.0FT38.54.88.5 40.36.811.6 39.5 11.5 17.8WI [22] 43.86.911.9 44.27.913.5 43.68.714.6WI [22] 46.3 8.314.1 46.59.315.5 46.3 10.3 16.9DWI [9] 44.57.512.8 45.09.415.6 44.9 12.1 19.1DWI [9] 46.29.215.3 46.5 11.4 18.3 46.6 14.5 22.1RT [31] 46.2 5.810.2 46.7 8.814.8 46.9 13.7 21.2RT [31] 38.45.29.2 43.8 10.1 16.4 44.1 16.0 23.5AMP [28] 37.57.412.4 35.78.814.2 34.6 11.0 16.7AMP [28] 36.67.913.0 36.09.214.7 33.2 11.0 16.5SPN [35] 43.56.711.7 43.7 10.2 16.5 43.7 15.6 22.9SPN [35] 40.38.714.3 41.7 12.5 19.2 41.4 18.2 25.3LWF [16] 43.93.87.0 44.37.112.3 44.6 12.9 20.1LWF [16] 41.04.17.5 42.76.511.3 42.3 12.6 19.4ILT [20] 46.2 4.48.0 46.36.511.5 47.0 11.0 17.8ILT [20] 43.76.210.9 47.1 10.0 16.5 45.3 15.3 22.9MIB [3] 43.83.56.5 44.46.010.6 44.7 11.9 18.8MIB [3] 40.43.15.8 42.75.29.3 43.8 11.5 18.2PIFS [4] 40.88.213.7 40.9 11.1 17.5 42.8 15.7 23.0PIFS [4] 40.4 10.4 16.5 40.1 13.1 19.8 41.1 18.3 25.3Ours41.2 9.3 15.2 42.1 12.7 19.5 42.6 17.1 24.4Ours40.7 11.3 17.7 40.5 15.2 22.1 41.0 19.7 26.6", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" } ]
Yuan Zhou; Xin Chen; Yanrong Guo; Shijie Hao; Richang Hong; Qi Tian
[ { "authors": "Holger Caesar; Jasper Uijlings; Vittorio Ferrari", "journal": "", "ref_id": "b0", "title": "Coco-stuff: Thing and stuff classes in context", "year": "2018" }, { "authors": "Manuel J Francisco M Castro; Nicolás Marín-Jiménez; Cordelia Guil; Karteek Schmid; Alahari", "journal": "", "ref_id": "b1", "title": "End-to-end incremental learning", "year": "2018" }, { "authors": "Fabio Cermelli; Massimiliano Mancini; Samuel Rota Bulo; Elisa Ricci; Barbara Caputo", "journal": "", "ref_id": "b2", "title": "Modeling the background for incremental learning in semantic segmentation", "year": "2020" }, { "authors": "Fabio Cermelli; Massimiliano Mancini; Yongqin Xian; Zeynep Akata; Barbara Caputo", "journal": "", "ref_id": "b3", "title": "Prototype-based Incremental Few-Shot Semantic Segmentation", "year": "2021" }, { "authors": "Liang-Chieh Chen; George Papandreou; Iasonas Kokkinos; Kevin Murphy; Alan L Yuille", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b4", "title": "Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs", "year": "2017" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b5", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b6", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Mark Everingham; Ali Sm; Luc Eslami; Van Gool; K I Christopher; John Williams; Andrew Winn; Zisserman", "journal": "International Journal of Computer Vision", "ref_id": "b7", "title": "The pascal visual object classes challenge: A retrospective", "year": "2015" }, { "authors": "Spyros Gidaris; Nikos Komodakis", "journal": "", "ref_id": "b8", "title": "Dynamic few-shot visual learning without forgetting", "year": "2018" }, { "authors": "Bharath Hariharan; Pablo Arbeláez; Lubomir Bourdev; Subhransu Maji; Jitendra Malik", "journal": "IEEE", "ref_id": "b9", "title": "Semantic contours from inverse detectors", "year": "2011" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b10", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b11", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Zilong Huang; Xinggang Wang; Lichao Huang; Chang Huang; Yunchao Wei; Wenyu Liu", "journal": "", "ref_id": "b12", "title": "Ccnet: Criss-cross attention for semantic segmentation", "year": "2019" }, { "authors": "Minsoo Kang; Jaeyoo Park; Bohyung Han", "journal": "", "ref_id": "b13", "title": "Class-incremental learning by knowledge distillation with adaptive feature consolidation", "year": "2022" }, { "authors": "Aoxue Li; Weiran Huang; Xu Lan; Jiashi Feng; Zhenguo Li; Liwei Wang", "journal": "", "ref_id": "b14", "title": "Boosting few-shot learning with adaptive margin loss", "year": "2020" }, { "authors": "Zhizhong Li; Derek Hoiem", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b15", "title": "Learning without forgetting", "year": "2017" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; James Hays; Pietro Perona; Deva Ramanan; Piotr Dollár; C Lawrence; Zitnick ", "journal": "Springer", "ref_id": "b16", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Yaoyao Liu; Bernt Schiele; Qianru Sun", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b17", "title": "RMM: Reinforced memory management for class-incremental learning", "year": "2021" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b18", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "Umberto Michieli; Pietro Zanuttigh", "journal": "", "ref_id": "b19", "title": "Incremental learning techniques for semantic segmentation", "year": "2019" }, { "authors": "Tomas Mikolov; Kai Chen; Greg Corrado; Jeffrey Dean", "journal": "", "ref_id": "b20", "title": "Efficient estimation of word representations in vector space", "year": "2013" }, { "authors": "Alex Nichol; Joshua Achiam; John Schulman", "journal": "", "ref_id": "b21", "title": "On first-order metalearning algorithms", "year": "2018" }, { "authors": "Jeffrey Pennington; Richard Socher; Christopher D Manning", "journal": "", "ref_id": "b22", "title": "Glove: Global vectors for word representation", "year": "2014" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark", "journal": "PMLR", "ref_id": "b23", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": " Sylvestre-Alvise; Alexander Rebuffi; Georg Kolesnikov; Christoph H Sperl; Lampert", "journal": "", "ref_id": "b24", "title": "icarl: Incremental classifier and representation learning", "year": "2001" }, { "authors": "Adam Santoro; Sergey Bartunov; Matthew Botvinick; Daan Wierstra; Timothy Lillicrap", "journal": "PMLR", "ref_id": "b25", "title": "Meta-learning with memory-augmented neural networks", "year": "2016" }, { "authors": "Guangchen Shi; Yirui Wu; Jun Liu; Shaohua Wan; Wenhai Wang; Tong Lu", "journal": "", "ref_id": "b26", "title": "Incremental few-shot semantic segmentation via embedding adaptiveupdate and hyper-class representation", "year": "2022" }, { "authors": "Mennatullah Siam; Boris Oreshkin; Martin Jagersand", "journal": "", "ref_id": "b27", "title": "Adaptive masked proxies for few-shot segmentation", "year": "2019" }, { "authors": "Jake Snell; Kevin Swersky; Richard Zemel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Prototypical networks for few-shot learning", "year": "2017" }, { "authors": "Robin Strudel; Ricardo Garcia; Ivan Laptev; Cordelia Schmid", "journal": "", "ref_id": "b29", "title": "Segmenter: Transformer for semantic segmentation", "year": "2021" }, { "authors": "Yonglong Tian; Yue Wang; Dilip Krishnan; Joshua B Tenenbaum; Phillip Isola", "journal": "Springer", "ref_id": "b30", "title": "Rethinking few-shot image classification: a good embedding is all you need?", "year": "2020" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b31", "title": "Attention is all you need", "year": "2017" }, { "authors": "Oriol Vinyals; Charles Blundell; Timothy Lillicrap; Daan Wierstra", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b32", "title": "Matching networks for one shot learning", "year": "2016" }, { "authors": " Fu-Yun; Da-Wei Wang; Han-Jia Zhou; De-Chuan Ye; Zhan", "journal": "Springer", "ref_id": "b33", "title": "Foster: Feature boosting and compression for class-incremental learning", "year": "2022" }, { "authors": "Yongqin Xian; Subhabrata Choudhury; Yang He; Bernt Schiele; Zeynep Akata", "journal": "", "ref_id": "b34", "title": "Semantic projection network for zero-and few-label semantic segmentation", "year": "2019" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b35", "title": "SegFormer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Jingyi Xu; Hieu Le", "journal": "", "ref_id": "b36", "title": "Generating representative samples for few-shot classification", "year": "2022" }, { "authors": "Fengyuan Yang; Ruiping Wang; Xilin Chen", "journal": "", "ref_id": "b37", "title": "Semantic Guided Latent Parts Embedding for Few-Shot Learning", "year": "2023" }, { "authors": "Baoquan Zhang; Xutao Li; Yunming Ye; Zhichao Huang; Lisai Zhang", "journal": "", "ref_id": "b38", "title": "Prototype completion with primitive knowledge for few-shot learning", "year": "2021" }, { "authors": "Hang Zhang; Kristin Dana; Jianping Shi; Zhongyue Zhang; Xiaogang Wang; Ambrish Tyagi; Amit Agrawal", "journal": "", "ref_id": "b39", "title": "Context encoding for semantic segmentation", "year": "2018" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b40", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Sixiao Zheng; Jiachen Lu; Hengshuang Zhao; Xiatian Zhu; Zekun Luo; Yabiao Wang; Yanwei Fu; Jianfeng Feng; Tao Xiang; Philip Hs Torr", "journal": "", "ref_id": "b41", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021-03-10" } ]
[ { "formula_coordinates": [ 3, 275.48, 85.57, 82.77, 623.84 ], "formula_id": "formula_0", "formula_text": "D 𝑡 = 𝑿 𝑡 𝑛 , 𝒀 𝑡 𝑛 𝑁 𝑡" }, { "formula_coordinates": [ 3, 322.37, 376.08, 235.84, 48.41 ], "formula_id": "formula_1", "formula_text": "𝑛 𝑁 b 𝑛=1 from the input images 𝑿 0 𝑛 𝑁 b 𝑛=1 ⊂ D 0 using the visual encoder 𝑓 v (•|𝚯 0 v ), 𝑭 0 𝑛 𝑁 b 𝑛=1 = 𝑓 v ( 𝑿 0 𝑛 𝑁 b 𝑛=1 |𝚯 0 v )(1)" }, { "formula_coordinates": [ 3, 317.73, 454.93, 240.47, 48.14 ], "formula_id": "formula_2", "formula_text": "𝑓 s (•|𝚯 0 s ) encodes the semantic vectors 𝒔 0 𝑐 |C b | 𝑐=1 of the categories C b involved in the inputs, 𝒔 0 𝑐 |C b | 𝑐=1 = 𝑓 s (C b |𝚯 0 s )(2)" }, { "formula_coordinates": [ 3, 365.46, 566.52, 192.74, 32.2 ], "formula_id": "formula_3", "formula_text": "𝒈 0 𝑐 = 𝑁 b 𝑛=1 𝐻 𝑖=1 𝑊 𝑗=1 (𝑭 0 𝑛, [𝑖,𝑗 ] * 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ] ) 𝑁 b 𝑛=1 𝐻 𝑖=1 𝑊 𝑗=1 𝑰 0,𝑐 𝑛, [𝑖,𝑗 ](3)" }, { "formula_coordinates": [ 4, 74.23, 389.14, 219.81, 47.68 ], "formula_id": "formula_4", "formula_text": "L align = |C b | ∑︁ 𝑐 1 =1 |C b | ∑︁ 𝑐 2 =1,𝑐 2 ≠𝑐 1 𝒈 0 𝑐 1 * 𝒔 0 𝑐 2 |C b | × (|C b | -1) Unpaired - |C b | ∑︁ 𝑐=1 𝒈 0 𝑐 * 𝒔 0 𝑐 |C b | Paired .(4)" }, { "formula_coordinates": [ 4, 201.08, 562.04, 73.84, 13.03 ], "formula_id": "formula_5", "formula_text": "𝑛 ∈ R 𝐻 ×𝑊 × |C 0 | 𝑁 b 𝑛=1" }, { "formula_coordinates": [ 4, 53.8, 586.66, 240.25, 66.93 ], "formula_id": "formula_6", "formula_text": "prototypical classifier P 0 = 𝒑 0 𝑐 |C 0 | 𝑐=1 , which is shown below: Ȳ 0 𝑛, [𝑖,𝑗,𝑐 ] = 𝑃 (𝑐 |𝑿 0 𝑛, [𝑖,𝑗 ] , P 0 , 𝚯 0 v ) (5) = exp(𝑆𝑖𝑚(𝑭 0 𝑛, [𝑖,𝑗 ] , 𝒑 0 𝑐 )) 𝑐 ′ ∈C 0 exp(𝑆𝑖𝑚(𝑭 0 𝑛, [𝑖,𝑗 ] , 𝒑 0 𝑐 ′ ))" }, { "formula_coordinates": [ 4, 390.97, 402.74, 167.23, 25.29 ], "formula_id": "formula_7", "formula_text": "L ce = 1 𝑁 b 𝑁 b ∑︁ 𝑛=1 𝐶𝐸 ( Ȳ 0 𝑛 , 𝒀 0 𝑛 ).(6)" }, { "formula_coordinates": [ 4, 318.4, 562.45, 239.8, 25.7 ], "formula_id": "formula_8", "formula_text": "𝑓 v (•|𝚯 𝑡 v ), 𝑭 𝑡 𝑛 𝑁 𝑡 𝑛=1 = 𝑓 v ( 𝑿 𝑡 𝑛 𝑁 𝑡 𝑛=1 |𝚯 𝑡 v ).(7)" }, { "formula_coordinates": [ 4, 403.39, 631.92, 154.81, 13.96 ], "formula_id": "formula_9", "formula_text": "𝒔 𝑡 𝑐 |C 𝑡 | 𝑐=1 = 𝑓 s (C 𝑡 |𝚯 𝑡 s ).(8)" }, { "formula_coordinates": [ 5, 157.37, 201.15, 83.58, 16.55 ], "formula_id": "formula_10", "formula_text": "P 𝑡 -1 = 𝒑 𝑡 -1 𝑖 | 𝑡 -1 𝑗 =0 C 𝑗 | 𝑖=1" }, { "formula_coordinates": [ 5, 75.13, 227.87, 218.91, 49.01 ], "formula_id": "formula_11", "formula_text": "𝑛 ∈ R 𝐻 ×𝑊 × | 𝑡 -1 𝑗 =0 C 𝑗 | 𝑁 𝑡 𝑛=1 , Ã𝑡 𝑛, [𝑖,𝑗,𝑐 ] = 𝑭 𝑡 𝑛, [𝑖,𝑗 ] * 𝒑 𝑡 -1 𝑐 |𝑭 𝑡 𝑛, [𝑖,𝑗 ] | * |𝒑 𝑡 -1 𝑐 | , 𝑠.𝑡 ., 0 < 𝑐 <= | 𝑡 -1 𝑗=0 C 𝑗 |.(10)" }, { "formula_coordinates": [ 5, 145.62, 305.49, 148.43, 10.25 ], "formula_id": "formula_12", "formula_text": "𝑨 𝑡 𝑛 = Ã𝑡 𝑛 ⊕ Ā𝑡 𝑛 ,(11)" }, { "formula_coordinates": [ 5, 128.11, 319.61, 102.64, 13.04 ], "formula_id": "formula_13", "formula_text": "𝑨 𝑡 𝑛 ∈ R 𝐻 ×𝑊 × | 𝑡 𝑗 =0 C 𝑗 | 𝑁 𝑡 𝑛=1 ." }, { "formula_coordinates": [ 5, 125.73, 335.21, 168.32, 44.31 ], "formula_id": "formula_14", "formula_text": "𝑨 𝑡 𝑛 𝑁 𝑡 𝑛=1 L aff = 1 𝑁 𝑡 𝑁 𝑡 ∑︁ 𝑛=1 𝐶𝐸 (𝑨 𝑡 𝑛 , 𝒀 𝑡 𝑛 ),(12)" }, { "formula_coordinates": [ 5, 121.64, 456.24, 172.4, 25.07 ], "formula_id": "formula_15", "formula_text": "L kd = 1 𝑁 𝑡 𝑁 𝑡 ∑︁ 𝑛=1 𝐶𝐸 (𝑨 𝑡 𝑛 , 𝑨 𝑡 -1 𝑛 ),(13)" } ]
2023-06-12
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b18", "b31", "b16", "b30", "b44", "b49", "b54", "b15", "b41", "b43", "b16", "b17", "b45", "b73", "b43", "b0", "b60", "b0", "b60", "b4", "b45", "b73", "b3", "b24" ], "table_ref": [], "text": "Automated video production is experiencing a surge in demand across various industries, including media, gaming, film, and television [19,32]. This increased demand has propelled video generation research to the forefront of deep generative modeling, leading to rapid advancements in the field [17,31,45,50,55]. In recent years, diffusion models [16] have demonstrated remarkable success in generating visually appealing images in open-domains [42,44]. Notably, some commercial applications have leveraged these advanced techniques to create engaging and imaginative pictures, such as using the text of \"A Chinese girl's wedding, 1980s, China,\" or \"Albert Einstein eating vegetables, cow in the background.\" Building upon such success, in this paper, we take one step further and aim to extend their capabilities to high-quality text-to-video generation.\nAs is widely known, the development of open-domain text-to-video models poses grand challenges, due to the limited availability of large-scale text-video paired data and the complexity of constructing space-time models from scratch. To solve the challenges, current approaches are primarily built on pretrained image generation models. These approaches typically adopt space-time separable architectures, where spatial operations are inherited from the image generation model [17,18]. To further incorporate temporal modeling, various strategies have been employed, including pseudo-3D modules [46,74], serial 2D and 1D blocks [44], and parameter-free techniques like temporal shift [1] or tailored spatiotemporal attention [61]. However, these approaches overlook the crucial interplay between time and space for visually engaging text-to-video generation. On one hand, parameter-free approaches [1,61] rely on manually designed rules that fail to capture the intrinsic nature of videos and often lead to the generation of unnatural motions. On the other hand, learnable 2D+1D modules and blocks [5,46,74] primarily focus on temporal modeling, either directly feeding temporal features to spatial features, or combining them through simplistic element-wise additions. This limited interactivity usually results in temporal distortions and discrepancies between the input texts and the generated videos, which hinders the overall quality and coherence of the generated content.\nTo address the above issues, we take one step further in this paper which highlights the complementary nature of both spatial and temporal features in videos. Specifically, we propose a novel Swapped Spatiotemporal Cross-Attention (Swap-CA) for text-to-video generation. Instead of solely relying on separable 2D+1D self-attention [4] that replaces computationally expensive 3D self-attention as shown in Fig. 1 (a) and (c), we aim to further enhance the interaction between spatial and temporal features. While 3D window self-attention [25] reduces the computational cost and incorporates both modalities, such work treats space and time dimensions indiscriminately, which largely limits its ability to capture complex spatiotemporal patterns, especially in generation tasks. Compared with existing works, our swap attention mechanism facilitates bidirectional guidance between spatial and temporal features by considering one feature as the query and the other as the key/value. To ensure the reciprocity of information flow, we swap the role of the \"query\" in adjacent layers.\nBy deeply interplaying spatial and temporal features through the proposed swap attention, we present a holistic VideoFactory framework for text-to-video generation. In particular, we adopt the latent diffusion framework and design a spatiotemporal U-Net for 3D noise prediction. To unlock the full potential of the proposed model and fulfill high-quality video generation, we propose to construct a large-scale video generation dataset, named HD-VG-130M. This dataset consists of 130 million text-video pairs from open-domains, encompassing high-definition, widescreen, and watermark-free characters. Additionally, our spatial super-resolution model can effectively upsample videos to a resolution of 1376 × 768, thus ensuring engaging visual experience. We conduct comprehensive experiments and show that our approach outperforms existing methods in terms of both quantitative and qualitative comparisons. In summary, our paper makes the following significant contributions:\n• We reveal the significance of learning joint spatial and temporal features for video generation, and introduce a novel swapped spatiotemporal cross-attention mechanism to reinforce both space and time interactions. • To facilitate training, we curate a comprehensive video dataset comprising the largest 130 million text-video pairs to-date, which can support high-quality video generation with high-definition, widescreen, and watermark-free characters.\nBy effectively enforcing the mutual learning of spatial and temporal representations, our approach achieves outstanding visual quality in text-to-video generation tasks, while ensuring precisely semantic alignment between the input text and the generated videos." }, { "figure_ref": [], "heading": "Related Works", "publication_ref": [ "b29", "b40", "b62", "b71", "b26", "b9", "b55", "b39", "b6", "b5", "b33", "b38", "b43", "b42", "b27", "b65", "b2", "b41", "b30", "b34", "b56", "b11", "b21", "b34", "b68", "b14", "b17", "b7", "b14", "b53", "b45", "b0", "b4", "b8", "b12", "b13", "b19", "b28", "b59", "b60", "b69", "b73", "b73", "b0", "b69" ], "table_ref": [], "text": "Text-to-Image Generation. Generating realistic images from corresponding descriptions combines the challenging components of language modeling and image generation. Traditional text-to-image generation methods [30,41,63,72,27] are mainly based on GANs [10] and are only able to model simple scenes such as birds [56]. Later work extends the scope of text-to-image generation to open domains with better modeling techniques and training data on much larger scales. DALL•E [40] and CogView [7] leverage auto-regressive vision transformers with variational auto-encoders and jointly train on text and image tokens. In recent years, diffusion models have shown great ability in visual generation [6]. For text-to-image multi-modality generation, GLIDE [34], DALL•E 2 [39],\nand Imagen [44] leverage diffusion models to achieve impressive results. Based on these successes, some work further extends customization [43,28], image guidance [66], and precise control [3]. Despite advances in generation ability, diffusion models are computationally expensive for training and inference, especially on high resolutions. To reduce the cost, latent diffusion [42] conducts the diffusion process on a compressed latent space rather than the original pixel space. This paper further explores how to extend the high-efficient latent diffusion for video generation.\nText-to-Video Generation. Additional controls are often added to make the generated videos more responsive to demand [31,35,57,12], and this paper focuses on the controlling mode of texts. Early text-to-video generation models [22,35] mainly use convolutional GAN models with Recurrent Neural Networks (RNNs) to model temporal motions. Although complex architectures and auxiliary losses are introduced, GAN-based models cannot generate videos beyond simple scenes like moving digits and close-up actions. Recent works extend text-to-video to open domains with large-scale transformers [69] or diffusion models [15]. Considering the difficulty of high-dimensional video modeling and the scarcity of text-video datasets, training text-to-video generation from scratch is unaffordable. As a result, most works acquire knowledge from pretrained text-to-image models.\nCogVideo [18] inherits from a pretrained text-to-image model CogView2 [8]. Imagen Video [15] and Phenaki [54] adopt joint image-video training. Make-A-Video [46] learns motion on video data alone, eliminating the dependency on text-video data. To reduce the high cost of video generation, latent diffusion has been widely utilized for video generation [1,5,9,13,14,20,29,60,61,70,74]. MagicVideo [74] inserts a simple adaptor after the 2D convolution layer. Latent-Shift [1] adopts a parameter-free temporal shift module to exchange information across different frames. PDVM [70] projects the 3D video latent into three 2D image-like latent spaces. Although the research on textto-video generation is very active, existing research ignores the inter and inner correlation between spatial and temporal modules. In this paper, we revisit the design of text-driven video generation." }, { "figure_ref": [], "heading": "High-Definition Video Generation Dataset", "publication_ref": [ "b61", "b47", "b32", "b63", "b1", "b14", "b45", "b63", "b20" ], "table_ref": [], "text": "Datasets of diverse text-video pairs are the prerequisite for training open-domain text-to-video generation models. However, existing text-video datasets are always limited in either scale or quality, thus hindering the upper bound of high-quality video generation. Referring to Tab. 1, MSR-VTT [62] and UCF101 [48] only have 10K and 13K video clips respectively. Although large in scale, HowTo100M [33] is specified for instructional videos, which has limited diversity for open-domain generation tasks. Despite being appropriate in both scale and domain, the formats of textual annotations in HD-VILA-100M [64] are subtitle transcripts, which lack visual contents related descriptions for high-quality video generation. Additionally, the videos in HD-VILA-100M have complex scene transitions, which are disadvantageous for models to learn temporal correlations. WebVid-10M [2] has been used in some previous video generation works [15,46], considering its relatively large-scale (10M) and descriptive captions. Nevertheless, videos in WebVid-10M are of low resolution and have poor visual qualities with watermarks in the center.\nTo tackle the problems above and achieve high-quality video generation, we propose a large-scale text-video dataset, namely HD-VG-130M, including 130M text-video pairs from open-domain in high-definition (720p), widescreen and watermark-free formats. We first sample according to the video labels of HD-VILA-100M [64] to collect original high-definition videos from YouTube. As the original videos have complex scene transitions which are adverse for models to learn temporal correlations, we then detect and split scenes in these original videos using PySceneDetect1 , resulting in 130M single scene video clips. Finally, we caption video clips with BLIP-2 [21], in view of its " }, { "figure_ref": [], "heading": "High-Quality Text-to-Video Generation", "publication_ref": [ "b16", "b17" ], "table_ref": [], "text": "To enable spatiotemporal interaction, we design a diffusion model for high-quality video generation.\nSpatiotemporal Inter-Connection. To reduce computational costs and leverage pretrained image generation models, space-time separable architectures have gained popularity in text-to-video generation [17,18]. These architectures handle spatial operations independently on each frame, while temporal operations consider multiple frames for each spatial position. In the following, we refer to the features predicted by 2D/spatial modules in space-time separable networks as \"spatial features\", and \"temporal features\" vice versa. As discussed in Sec. 1, prior works have neglected the crucial interaction between spatial and temporal features. To tackle this limitation, we promote the mutual reinforcement of these features through a series of cross-attention operations." }, { "figure_ref": [ "fig_2" ], "heading": "Denote a basic operation CrossAttention", "publication_ref": [ "b4", "b41", "b10", "b35", "b24", "b51", "b37", "b1", "b41", "b37", "b50", "b67", "b23", "b36", "b22", "b66", "b57", "b72", "b9" ], "table_ref": [], "text": "(x, y) = softmax( QK T √ d ) • V , with Q = W (i) Q • x, K = W (i) K • y, V = W (i) V • y,(1)\nwhere\nW (i) Q , W(i)\nK , and\nW (i)\nV are learnable projection matrices in the i-th layer. The direction of cross-attention, specifically whether Q originates from spatial or temporal features, plays a decisive role in determining the impact of cross-attention. In general, spatial features tend to encompass a greater amount of contextual information, which can improve the alignment of temporal features with the input text. On the other hand, temporal features have a complete receptive field of the time series, which may enable spatial features to generate visual content more effectively. To leverage At the end of each U-Net block, we employ a swapped cross-attention scheme on 3D windows to facilitate a comprehensive integration of spatial and temporal features. In the case of two consecutive blocks, the first block employs temporal features to guide spatial features, while in the second block, their roles are reversed. This reciprocal arrangement ensures a balanced and mutually beneficial interaction between the spatial and temporal modalities throughout the model.\nboth aspects effectively, we propose a strategy of swapping the roles of Q and K, V in adjacent two blocks. This approach ensures that both temporal and spatial features receive sufficient information from the other modality, enabling a comprehensive and mutually beneficial interaction.\nGlobal attention greatly increases the computational costs in terms of memory and running time.\nTo improve efficiency, we conduct 3D window attention. Given a video feature in the shape of F × H × W and a 3D window size of F w × H w × W w , we organize the windows to process the feature in a non-overlapping manner, leading to ⌈ F Fw ⌉ × ⌈ H Hw ⌉ × ⌈ W Ww ⌉ distinct 3D windows. Within each window, we perform spatiotemporal cross-attention. By adopting the 3D window scheme, we effectively reduce computational costs without compromising performance.\nFollowing prior text-to-image arts [5,42], we incorporate 2× down/upsampling along the spatial dimension to establish a hierarchical structure. Furthermore, research [11,36] has pointed out that the temporal dimension is sensitive to compression. In light of these considerations, we do compress the temporal dimension and conduct shift windows [25], which advocates an inductive bias of locality. On the spatial dimension, we do not shift since the down/upsampling already introduces connections between neighboring non-overlapping 3D windows.\nTo this end, we propose a Swapped Spatiotemporal Cross-Attention (Swap-CA) in 3D windows. Let t l and s l represent the predictions of 2D and 1D modules. We utilize Multi-head Cross Attention (MCA) to compute their interactions by Swap-CA as\nsl = Proj l in ⊙ GN(s l ), tl = Proj l in ⊙ GN(t l ); h l = 3DW-MCA(LN(s l ), LN( tl )) + sl ; hl = FFN ⊙ LN(h l ) + h l ; z l = t l + s l + Swap-CA(s l , t l ) = t l + s l + Proj l out ( hl ),(2)\nwhere GN, Proj, LN, 3D Window-based Multi-head Cross-Attention (3DW-MCA) are learnable modules. By initializing the output projection Proj l-1 out by zero, we have z l = t l-1 + s l-1 , i.e., Swap-CA is skipped so that it is reduced to a basic addition operation. This allows us to initially train the diffusion model using addition operations, significantly speeding up the training process. Subsequently, we can switch to Swap-CA to enhance the model's performance.\nThen for the next spatial-temporal separable block, we apply shifted 3D window multi-head crossattention (3DSW-MCA) and interchange the roles of s and t, as\nh l+1 = 3DSW-MCA(LN( tl+1 ), LN(s l+1 )) + tl+1 .(3)\nIn all 3DSW-MCA, we shift the window along the temporal dimension by ⌈ Fw 2 ⌉ elements.\nTable 2: Ablation study on spatiotemporal interaction strategies. We report the FVD [52] and CLIPSIM [38] on 1K samples from the validation set of WebVid-10M [2]. The computational cost is evaluated on inputs of shape 4 × 16 × 32 × 32. Details can be found in the supplementary material.\nT and S represent spatial and temporal features, respectively. Overall Architecture. We adopt LDM [42] as the text-to-image backbone. We employ an autoencoder to compress the video into a down-sampled 3D latent space. Within this latent space, we perform diffusion optimization using an hourglass spatial-temporal separable U-Net model. Text features are extracted with a pretrained CLIP [38] model and inserted into the U-Net model through cross-attention on the spatial dimension.\nOur framework is illustrated in Fig. 3. To strike a balance between performance and efficiency, we exclusively apply Swap-CA at the end of each U-Net encoder and decoder block. In other positions, we employ a straightforward fusion technique using a 1×1×1 convolution to combine spatial and temporal features. To enhance the connectivity among temporal modules, we introduce skip connections that connect temporal modules separated by spatial down/upsampling modules. This strategy promotes stronger integration and information flow within the temporal dimension of the network architecture.\nSuper-Resolution Towards Higher Quality. To obtain visually satisfying results, we further perform Super-Resolution (SR) on the generated video. One key to improving SR performance is designing a degradation model that closely resembles the actual degradation process [51,68,24,37,23,67]. In our scenario, the generated video quality suffers from both the diffusion and auto-encoder processes. Therefore, we adopt the hybrid degradation model in Real-ESRGAN [58] to simulate possible quality degradation caused by the generated process. During training, an original video frame is downsampled and degraded using our model, and the SR network attempts to perform SR on the resulting low-resolution image. We adopt RCAN [73] with 8 residual blocks as our SR network.\nIt is trained with a vanilla GAN [10] to improve visual satisfaction. With a suitable degradation design, our SR network can further reduce possible artifacts and distortion in the frames, increase their resolution, and improve their visual quality." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b20", "b1", "b64", "b49", "b52", "b48", "b70", "b46", "b69", "b17", "b73", "b12", "b25", "b4", "b58", "b59", "b12", "b17", "b25", "b4", "b25", "b12" ], "table_ref": [], "text": "Our model predicts images at a resolution of 344×192 (with a latent space resolution of 43×24). Then a 4×upscaling is produced in our SR model, resulting in a final output resolution of 1376 × 768.\nOur model is trained with 32 NVIDIA V100 GPUs. We utilize our HD-VG-130M as training data to promote the generation visual qualities. Furthermore, considering that the textual captions in HD-VG-130M are annotated by BLIP-2 [21], which may have some discrepancies with human expressions, we adopt a joint training strategy with WebVid-10M [2] to ensure the model could generalize well to diverse humanity textual inputs. This approach allows us to benefit from the large-scale text-video pairs and the superior visual qualities of HD-VG-130M while maintaining the generalization ability to diverse textual inputs in real scenarios, enhancing the overall training process. More details can be found in the supplementary.\nTable 3: Text-to-video generation on UCF101.\nMethod Zero-shot FVD↓ VideoGPT [65] No 2880.6 MoCoGAN [50] No 2886.8 +StyleGAN2 [53] No 1821.4 MoCoGAN-HD [49] No 1729.6 DIGAN [71] No 1630.2 StyleGAN-V [47] No 1431.0 PVDM [70] No 343.6 CogVideo [18] Yes 701.6 MagicVideo [74] Yes 699.0 LVDM [13] Yes 641.8 ModelScope [26] Yes 639.9 Video LDM [5] Yes 550.6 Ours Yes 410.0\nTable 4: Text-to-video generation on MSR-VTT.\nMethod Zero-shot CLIPSIM↑ GODIVA [59] No 0.2402 NUWA [60] No 0.2439 LVDM [13] Yes 0.2381 CogVideo [18] Yes 0.2631 ModelScope [26] Yes 0.2795 Video LDM [5] Yes 0.2929 Ours Yes 0.3005\nTable 5: Text-to-video generation on WebVid.\nMethod FVD↓ CLIPSIM↑ ModelScope [26] 414.11 0.3000 LVDM [13] 455.53 0.2751 Ours 292.35 0.3070" }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Studies", "publication_ref": [ "b37", "b1" ], "table_ref": [], "text": "Spatiotemporal Inter-Connection. We first evaluate the design of our swapped cross-attention mechanism. As shown in Tab. 2, using temporal as Q generally leads to better CLIP similarity (CLIPSIM) [38], revealing a better text-video alignment. The reason might be that language crossattention only exists in spatial modules. Thus, using spatial features to guide temporal ones implicitly enhance semantic guidance. Reversely, using spatial as Q leads to significantly better FVD, revealing better video quality. The reason might be that the spatial features can better perceive the overall video by using temporal features as guidance. This experiment demonstrates the benefits of introducing cross-attention, as well as the different acts of spatial and temporal features. Combining these two aspects, we propose to swap the roles of x and y every two blocks. In this way, both the temporal and spatial features can get sufficient information from the other modality, leading to improved FVD and CLIPSIM scores. 3D window attention not only does not decrease the performance but also greatly reduces the computational cost.\nHigh-Definition Video Generation Dataset. As shown in Tab. 6, we evaluate the effect of our HD-VG-130M. After adding HD-VG-130M in training, the result on the validation set of WebVid-10M [2] has been improved by 45.74 in FVD, which verifies the superior quality of our HD-VG-130M\nfor training text conditioned video generation model. The visual comparison can also be found in Fig. 4. The visual qualities are greatly improved with the help of our high-quality text-video dataset, especially the watermark on the generated video is eliminated. " }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b1", "b47", "b61", "b47", "b16", "b45", "b47", "b61", "b61" ], "table_ref": [], "text": "To fully evaluate the generation performance of our VideoFactory, we conduct automatic evaluations on three different datasets, WebVid-10M [2] (Val) same as the domain of part of our training data, as well as UCF101 [48] and MSR-VTT [62] in zero-shot setting.\nAutomatic Evaluation on UCF101. As mentioned in Sec. 3, the textual annotations in UCF101 [48] are class labels. We first follow [17,46] which achieves the best compared with other methods both in zero-shot setting and beats most of the methods which have tuned on UCF101 [48]. The results verify that our proposed VideoFactory could generate more coherent and realistic videos.\nAutomatic Evaluation on MSR-VTT. As shown in Tab. 4, we also evaluate the CLIPSIM on the widely used video generation benchmark MSR-VTT [62]. We randomly choose one prompt per example from MSR-VTT [62] to generate 2990 videos in total. Although in a zero-shot setting, our method achieves the best compared to other methods with an average CLIPSIM score of 0.3005, which suggests the semantic alignment between the generated videos and the input text.\nAutomatic Evaluation on WebVid-10M (Val). Referring to Tab. 5, we randomly extract 5K textvideo pairs from WebVid-10M which are exclusive from the training data to form a validation set and conduct evaluations on it. Our method achieves an FVD of 292.35 and a CLIPSIM of 0.3070, significantly surpassing the existing methods ModelScope and LVDM. The results demonstrate the superiority of our approach.\nHuman Evaluation. To overcome the limitation of existing metrics, and evaluate the performance from the aspect of humans, we conduct a user study to compare our VideoFactory with four stateof-the-arts. Specifically, we choose two models (i.e., ModelScope and LVDM) which have released their codes and pretrained models, and two methods (i.e., Make-A-Video and Imagen Video) which only show some samples on their websites. In each case, each participant will be given two samples of the same text from our method and one competitor, and is asked to compare the two samples in terms of the video quality and text-video correlation and give an overall preference. We demonstrate the results in Tab. 7, and we also report the number of parameter ratios for fair comparisons." }, { "figure_ref": [ "fig_4", "fig_5" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "In Fig. 5, we show the text-to-video comparison results against Make-A-Video, Imagen Video, and Video LDM. The prompts and generated results are collected from their official project website. \"iron man is walking on the street, high resolution.\"\n\"Superhero in red cape and mask is dancing in bedroom at home having fun enjoying music and leisure time. superman, lifestyle and apartment concept.\"\n\"Coffee pours into a glass.\"\n\"Honey bees on flower pollination macro shot\" Besides, we demonstrate more generated samples of our method in Fig. 6. Video demos can be found in our supplementary." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we propose a high-quality open-domain video generation framework namely Video-Factory, which produces high-definition (1376×768), widescreen (16:9) videos without watermarks. We revisit the spatial and temporal modeling in video generation, and present a novel swapped cross-attention mechanism which enables spatial and temporal information alternately to attend to each other. Furthermore, we propose a widescreen, watermark-free, high-definition HD-VG-130M dataset, with 130 million open-domain text-video pairs to unlock the power of our model as much as possible. Experiments confirm the high spatial quality, temporal consistency, and fitness to the text of synthesized videos from our VideoFactory, proving it the new benchmark of text-to-video generation." } ]
We present VideoFactory, an innovative framework for generating highquality open-domain videos. VideoFactory excels in producing high-definition (1376×768), widescreen (16:9) videos without watermarks, creating an engaging user experience. Generating videos guided by text instructions poses significant challenges, such as modeling the complex relationship between space and time, and the lack of large-scale text-video paired data. Previous approaches extend pretrained text-to-image generation models by adding temporal 1D convolution/attention modules for video generation. However, these approaches overlook the importance of jointly modeling space and time, inevitably leading to temporal distortions and misalignment between texts and videos. In this paper, we propose a novel approach that strengthens the interaction between spatial and temporal perceptions. In particular, we utilize a swapped cross-attention mechanism in 3D windows that alternates the "query" role between spatial and temporal blocks, enabling mutual reinforcement for each other. To fully unlock model capabilities for high-quality video generation, we curate a large-scale video dataset called HD-VG-130M. This dataset comprises 130 million text-video pairs from the open-domain, ensuring high-definition, widescreen and watermark-free characters. Objective metrics and user studies demonstrate the superiority of our approach in terms of per-frame quality, temporal correlation, and text-video alignment, with clear margins.
VideoFactory: Swap Attention in Spatiotemporal Diffusions for Text-to-Video Generation
[ { "figure_caption": "Figure 1 :1Figure 1: The paradigm of Swapped Spatiotemporal Cross-Attention (Swap-CA) in comparison with existing video attention schemes. Instead of only conducting self-attention in (a)-(c), we perform cross-attention between spatial and temporal modules in a U-Net, which encourages more spatiotemporal mutual reinforcement.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Statistics of video categories, clip durations, and caption word lengths in HD-VG-130M. HD-VG-130M covers a wide range of video categories.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: An illustration of our video diffusion model incorporating Swapped Spatiotemporal Cross-Attention (Swap-CA).At the end of each U-Net block, we employ a swapped cross-attention scheme on 3D windows to facilitate a comprehensive integration of spatial and temporal features. In the case of two consecutive blocks, the first block employs temporal features to guide spatial features, while in the second block, their roles are reversed. This reciprocal arrangement ensures a balanced and mutually beneficial interaction between the spatial and temporal modalities throughout the model.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Text-to-video generation effects w/o and w/ HD-VG-130M for training.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure5: Subjection text-to-video generation results compared with Imagen Video, Make-A-Video, and Video-LDM (Cases above are collected from their public project websites). which achieves the best compared with other methods both in zero-shot setting and beats most of the methods which have tuned on UCF101[48]. The results verify that our proposed VideoFactory could generate more coherent and realistic videos.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Generated samples of our VideoFactory. We can observe high-quality generated results with clear motion, rich detail, and well semantic alignment.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparison of different video datasets. Existing text-video datasets are always limited in either scale or quality, while our HD-VG-130M includes 130M text-video pairs from open-domain in high-definition, widescreen and watermark-free formats.", "figure_data": "DatasetVideo clips ResolutionDomainTextWatermark-freeMSR-VTT [62]10K240popencaption✓UCF101 [48]13K240phuman action class label✓HowTo100M [33]136M240pinstructionalsubtitle✓HD-VILA-100M [64]103M720popensubtitle✓WebVid-10M [2]10M360popencaption✗HD-VG-130M (Ours)130M720popencaption✓9LGHR&DWHJRULHV&OLS'XUDWLRQV&DSWLRQ/HQJWKV7UDYHO9HKLFOHV $QLPDWLRQ (QWHUWDLQPHQW 6FLHQFH +RZWR $QLPDOV 6SRUWVaV aV aV aV !V aVa a !2WKHUV&DWHJRULHV", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Effect of training on different datasets.", "figure_data": "Training DataFVD ↓WebVid-10M [2] 475.09WebVid-10M [2] 429.75 + HD-VG-130M", "figure_id": "tab_3", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "User Preference. The number indicates the percentage of humans that prefer our method over the compared method. We also show the ratio of the network parameter v.s. Ours.", "figure_data": "SampleMethodParam Ratio Video Quality Text-Video OverallPretrained ModelModelScope [26] LVDM [13]0.90× 0.57×0.8875 0.91550.8575 0.85550.9300 0.9370Open WebsiteMake-A-Video [46] Imagen Video [15]4.76× 7.97×0.5417 0.42910.4958 0.25820.5417 0.3818", "figure_id": "tab_4", "figure_label": "7", "figure_type": "table" } ]
Wenjing Wang; Huan Yang; Zixi Tuo; Huiguo He; Junchen Zhu; Jianlong Fu; Jiaying Liu
[ { "authors": "Jie An; Songyang Zhang; Harry Yang; Sonal Gupta; Jia-Bin Huang; Jiebo Luo; Xi Yin", "journal": "", "ref_id": "b0", "title": "Latent-Shift: Latent diffusion with temporal shift for efficient text-to-video generation", "year": "2023" }, { "authors": "Max Bain; Arsha Nagrani; Gül Varol; Andrew Zisserman", "journal": "", "ref_id": "b1", "title": "Frozen in time: A joint video and image encoder for end-to-end retrieval", "year": "2021" }, { "authors": "Yogesh Balaji; Seungjun Nah; Xun Huang; Arash Vahdat; Jiaming Song; Karsten Kreis; Miika Aittala; Timo Aila; Samuli Laine; Bryan Catanzaro; Tero Karras; Ming-Yu Liu", "journal": "", "ref_id": "b2", "title": "eDiff-I: Text-to-image diffusion models with an ensemble of expert denoisers", "year": "2022" }, { "authors": "Gedas Bertasius; Heng Wang; Lorenzo Torresani", "journal": "ICML", "ref_id": "b3", "title": "Is space-time attention all you need for video understanding", "year": "2021" }, { "authors": "Andreas Blattmann; Robin Rombach; Huan Ling; Tim Dockhorn; Seung Wook Kim; Sanja Fidler; Karsten Kreis", "journal": "", "ref_id": "b4", "title": "Align your latents: High-resolution video synthesis with latent diffusion models", "year": "2023" }, { "authors": "Prafulla Dhariwal; Alexander Nichol", "journal": "", "ref_id": "b5", "title": "Diffusion models beat GANs on image synthesis", "year": "2021" }, { "authors": "Ming Ding; Zhuoyi Yang; Wenyi Hong; Wendi Zheng; Chang Zhou; Da Yin; Junyang Lin; Xu Zou; Zhou Shao; Hongxia Yang; Jie Tang", "journal": "", "ref_id": "b6", "title": "CogView: Mastering text-to-image generation via transformers", "year": "2021" }, { "authors": "Ming Ding; Wendi Zheng; Wenyi Hong; Jie Tang", "journal": "", "ref_id": "b7", "title": "CogView2: Faster and better text-toimage generation via hierarchical transformers", "year": "2022" }, { "authors": "Patrick Esser; Johnathan Chiu; Parmida Atighehchian; Jonathan Granskog; Anastasis Germanidis", "journal": "", "ref_id": "b8", "title": "Structure and content-guided video synthesis with diffusion models", "year": "2023" }, { "authors": "Ian J Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron C Courville; Yoshua Bengio", "journal": "", "ref_id": "b9", "title": "Generative adversarial nets", "year": "2014" }, { "authors": "Amirhossein Habibian; Jakub M Ties Van Rozendaal; Taco Tomczak; Cohen", "journal": "", "ref_id": "b10", "title": "Video compression with rate-distortion autoencoders", "year": "2019" }, { "authors": "Tiankai Hang; Huan Yang; Bei Liu; Jianlong Fu; Xin Geng; Baining Guo", "journal": "", "ref_id": "b11", "title": "Language-guided face animation by recurrent stylegan-based generator", "year": "2022" }, { "authors": "Yingqing He; Tianyu Yang; Yong Zhang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b12", "title": "Latent video diffusion models for high-fidelity long video generation", "year": "2022" }, { "authors": "Yingqing He; Tianyu Yang; Yong Zhang; Ying Shan; Qifeng Chen", "journal": "", "ref_id": "b13", "title": "Latent video diffusion models for high-fidelity video generation with arbitrary lengths", "year": "2022" }, { "authors": "Jonathan Ho; William Chan; Chitwan Saharia; Jay Whang; Ruiqi Gao; Alexey A Gritsenko; P Diederik; Ben Kingma; Mohammad Poole; David J Norouzi; Tim Fleet; Salimans", "journal": "", "ref_id": "b14", "title": "Imagen Video: High definition video generation with diffusion models", "year": "2022" }, { "authors": "Jonathan Ho; Ajay Jain; Pieter Abbeel", "journal": "NeurIPS", "ref_id": "b15", "title": "Denoising diffusion probabilistic models", "year": "2020" }, { "authors": "Jonathan Ho; Tim Salimans; Alexey A Gritsenko; William Chan; Mohammad Norouzi; David J ", "journal": "", "ref_id": "b16", "title": "Fleet. Video diffusion models", "year": "2022" }, { "authors": "Wenyi Hong; Ming Ding; Wendi Zheng; Xinghan Liu; Jie Tang", "journal": "", "ref_id": "b17", "title": "CogVideo: Large-scale pretraining for text-to-video generation via transformers", "year": "2022" }, { "authors": "J Bhautik; Kristen Joshi; David Stewart; Shapiro", "journal": "", "ref_id": "b18", "title": "Bringing impressionism to life with neural style transfer in Come Swim", "year": "2017" }, { "authors": "Levon Khachatryan; Andranik Movsisyan; Vahram Tadevosyan; Roberto Henschel; Zhangyang Wang; Shant Navasardyan; Humphrey Shi", "journal": "", "ref_id": "b19", "title": "Text2Video-Zero: Text-to-image diffusion models are zero-shot video generators", "year": "2023" }, { "authors": "Junnan Li; Dongxu Li; Silvio Savarese; Steven Hoi", "journal": "", "ref_id": "b20", "title": "BLIP-2: Bootstrapping language-image pre-training with frozen image encoders and large language models", "year": "2023" }, { "authors": "Yitong Li; Martin Renqiang Min; Dinghan Shen; David E Carlson; Lawrence Carin", "journal": "", "ref_id": "b21", "title": "Video generation from text", "year": "2018" }, { "authors": "Chengxu Liu; Huan Yang; Jianlong Fu; Xueming Qian", "journal": "", "ref_id": "b22", "title": "Learning trajectory-aware transformer for video super-resolution", "year": "2022" }, { "authors": "Chengxu Liu; Huan Yang; Jianlong Fu; Xueming Qian", "journal": "", "ref_id": "b23", "title": "TTVFI: learning trajectory-aware transformer for video frame interpolation", "year": "2022" }, { "authors": "Ze Liu; Jia Ning; Yue Cao; Yixuan Wei; Zheng Zhang; Stephen Lin; Han Hu", "journal": "", "ref_id": "b24", "title": "Video swin transformer", "year": "2022" }, { "authors": "Zhengxiong Luo; Dayou Chen; Yingya Zhang; Yan Huang; Liang Wang; Yujun Shen; Deli Zhao; Jingren Zhou; Tieniu Tan", "journal": "", "ref_id": "b25", "title": "VideoFusion: Decomposed diffusion models for high-quality video generation", "year": "2023" }, { "authors": "Yiyang Ma; Huan Yang; Bei Liu; Jianlong Fu; Jiaying Liu", "journal": "", "ref_id": "b26", "title": "AI illustrator: Translating raw descriptions into images by prompt-based cross-modal generation", "year": "2022" }, { "authors": "Yiyang Ma; Huan Yang; Wenjing Wang; Jianlong Fu; Jiaying Liu", "journal": "", "ref_id": "b27", "title": "Unified multi-modal latent diffusion for joint subject and text conditional image generation", "year": "2023" }, { "authors": "Yue Ma; Yingqing He; Xiaodong Cun; Xintao Wang; Ying Shan; Xiu Li; Qifeng Chen", "journal": "", "ref_id": "b28", "title": "Follow your pose: Pose-guided text-to-video generation using pose-free videos", "year": "2023" }, { "authors": "Elman Mansimov; Emilio Parisotto; Jimmy Lei; Ruslan Ba; Salakhutdinov", "journal": "", "ref_id": "b29", "title": "Generating images from captions with attention", "year": "2016" }, { "authors": "Michaël Mathieu; Camille Couprie; Yann Lecun", "journal": "", "ref_id": "b30", "title": "Deep multi-scale video prediction beyond mean square error", "year": "2016" }, { "authors": "Willi Menapace; Stéphane Lathuilière; Sergey Tulyakov; Aliaksandr Siarohin; Elisa Ricci", "journal": "", "ref_id": "b31", "title": "Playable video generation", "year": "2021" }, { "authors": "Antoine Miech; Dimitri Zhukov; Jean-Baptiste Alayrac; Makarand Tapaswi; Ivan Laptev; Josef Sivic", "journal": "", "ref_id": "b32", "title": "HowTo100M: Learning a text-video embedding by watching hundred million narrated video clips", "year": "2019" }, { "authors": "Alex Nichol; Prafulla Dhariwal; Aditya Ramesh; Pranav Shyam; Pamela Mishkin; Bob Mcgrew; Ilya Sutskever; Mark Chen", "journal": "", "ref_id": "b33", "title": "GLIDE: Towards photorealistic image generation and editing with text-guided diffusion models", "year": "2021" }, { "authors": "Yingwei Pan; Zhaofan Qiu; Ting Yao; Houqiang Li; Tao Mei", "journal": "", "ref_id": "b34", "title": "To create what you tell: Generating videos from captions", "year": "2017" }, { "authors": "Jorge Pessoa; Helena Aidos; Pedro Tomás; A T Mário; Figueiredo", "journal": "IEEE SiPS", "ref_id": "b35", "title": "End-to-end learning of video compression using spatio-temporal autoencoders", "year": "2020" }, { "authors": "Zhongwei Qiu; Huan Yang; Jianlong Fu; Dongmei Fu", "journal": "", "ref_id": "b36", "title": "Learning spatiotemporal frequencytransformer for compressed video super-resolution", "year": "2022" }, { "authors": "Alec Radford; Jong Wook Kim; Chris Hallacy; Aditya Ramesh; Gabriel Goh; Sandhini Agarwal; Girish Sastry; Amanda Askell; Pamela Mishkin; Jack Clark; Gretchen Krueger; Ilya Sutskever", "journal": "", "ref_id": "b37", "title": "Learning transferable visual models from natural language supervision", "year": "2021" }, { "authors": "Aditya Ramesh; Prafulla Dhariwal; Alex Nichol; Casey Chu; Mark Chen", "journal": "", "ref_id": "b38", "title": "Hierarchical text-conditional image generation with CLIP latents", "year": "2022" }, { "authors": "Aditya Ramesh; Mikhail Pavlov; Gabriel Goh; Scott Gray; Chelsea Voss; Alec Radford; Mark Chen; Ilya Sutskever", "journal": "", "ref_id": "b39", "title": "Zero-shot text-to-image generation", "year": "2021" }, { "authors": "Scott E Reed; Zeynep Akata; Xinchen Yan; Lajanugen Logeswaran; Bernt Schiele; Honglak Lee", "journal": "", "ref_id": "b40", "title": "Generative adversarial text to image synthesis", "year": "2016" }, { "authors": "Robin Rombach; Andreas Blattmann; Dominik Lorenz; Patrick Esser; Björn Ommer", "journal": "", "ref_id": "b41", "title": "High-resolution image synthesis with latent diffusion models", "year": "2022" }, { "authors": "Nataniel Ruiz; Yuanzhen Li; Varun Jampani; Yael Pritch; Michael Rubinstein; Kfir Aberman", "journal": "", "ref_id": "b42", "title": "DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation", "year": "2023" }, { "authors": "Chitwan Saharia; William Chan; Saurabh Saxena; Lala Li; Jay Whang; Emily L Denton; Seyed Kamyar; Seyed Ghasemipour; Raphael Gontijo Lopes; Burcu Karagol Ayan; Tim Salimans; Jonathan Ho; David J Fleet; Mohammad Norouzi", "journal": "", "ref_id": "b43", "title": "Photorealistic text-to-image diffusion models with deep language understanding", "year": "2022" }, { "authors": "Masaki Saito; Eiichi Matsumoto; Shunta Saito", "journal": "", "ref_id": "b44", "title": "Temporal generative adversarial nets with singular value clipping", "year": "2017" }, { "authors": "Uriel Singer; Adam Polyak; Thomas Hayes; Xi Yin; Jie An; Songyang Zhang; Qiyuan Hu; Harry Yang; Oron Ashual; Oran Gafni", "journal": "", "ref_id": "b45", "title": "Make-A-Video: Text-to-video generation without text-video data", "year": "2022" }, { "authors": "Ivan Skorokhodov; Sergey Tulyakov; Mohamed Elhoseiny", "journal": "", "ref_id": "b46", "title": "StyleGAN-V: A continuous video generator with the price, image quality and perks of stylegan2", "year": "2022" }, { "authors": "Khurram Soomro; Mubarak Amir Roshan Zamir; Shah", "journal": "", "ref_id": "b47", "title": "UCF101: A dataset of 101 human actions classes from videos in the wild", "year": "2012" }, { "authors": "Yu Tian; Jian Ren; Menglei Chai; Kyle Olszewski; Xi Peng; Dimitris N Metaxas; Sergey Tulyakov", "journal": "", "ref_id": "b48", "title": "A good image generator is what you need for high-resolution video synthesis", "year": "2021" }, { "authors": "Sergey Tulyakov; Ming-Yu Liu; Xiaodong Yang; Jan Kautz", "journal": "", "ref_id": "b49", "title": "MoCoGAN: Decomposing motion and content for video generation", "year": "2018" }, { "authors": "Zixi Tuo; Huan Yang; Jianlong Fu; Yujie Dun; Xueming Qian", "journal": "", "ref_id": "b50", "title": "Learning data-driven vector-quantized degradation model for animation video super-resolution", "year": "2023" }, { "authors": "Thomas Unterthiner; Karol Sjoerd Van Steenkiste; Raphaël Kurach; Marcin Marinier; Sylvain Michalski; Gelly", "journal": "", "ref_id": "b51", "title": "Towards accurate generative models of video: A new metric & challenges", "year": "2018" }, { "authors": "Yuri Viazovetskyi; Vladimir Ivashkin; Evgeny Kashin", "journal": "Springer", "ref_id": "b52", "title": "StyleGAN2 distillation for feedforward image manipulation", "year": "2020" }, { "authors": "Ruben Villegas; Mohammad Babaeizadeh; Pieter-Jan Kindermans; Hernan Moraldo; Han Zhang; Mohammad Taghi Saffar; Santiago Castro; Julius Kunze; Dumitru Erhan", "journal": "", "ref_id": "b53", "title": "Phenaki: Variable length video generation from open domain textual description", "year": "2022" }, { "authors": "Carl Vondrick; Hamed Pirsiavash; Antonio Torralba", "journal": "", "ref_id": "b54", "title": "Generating videos with scene dynamics", "year": "2016" }, { "authors": "Catherine Wah; Steve Branson; Peter Welinder; Pietro Perona; Serge Belongie", "journal": "", "ref_id": "b55", "title": "The Caltech-UCSD", "year": "2011" }, { "authors": " Ting-Chun; Ming-Yu Wang; Jun-Yan Liu; Guilin Zhu; Andrew Liu; Jan Tao; Bryan Kautz; Catanzaro", "journal": "", "ref_id": "b56", "title": "Video-to-video synthesis", "year": "2018" }, { "authors": "Xintao Wang; Liangbin Xie; Chao Dong; Ying Shan", "journal": "", "ref_id": "b57", "title": "Real-ESRGAN: Training real-world blind super-resolution with pure synthetic data", "year": "2021-10" }, { "authors": "Chenfei Wu; Lun Huang; Qianxi Zhang; Binyang Li; Lei Ji; Fan Yang; Guillermo Sapiro; Nan Duan", "journal": "", "ref_id": "b58", "title": "GODIVA: Generating open-domain videos from natural descriptions", "year": "2021" }, { "authors": "Chenfei Wu; Jian Liang; Lei Ji; Fan Yang; Yuejian Fang; Daxin Jiang; Nan Duan", "journal": "", "ref_id": "b59", "title": "Nüwa: Visual synthesis pre-training for neural visual world creation", "year": "2022" }, { "authors": "Jay Zhangjie Wu; Yixiao Ge; Xintao Wang; Stan Weixian Lei; Yuchao Gu; Wynne Hsu; Ying Shan; Xiaohu Qie; Mike Zheng Shou", "journal": "", "ref_id": "b60", "title": "Tune-a-video: One-shot tuning of image diffusion models for text-to-video generation", "year": "2022" }, { "authors": "Jun Xu; Tao Mei; Ting Yao; Yong Rui", "journal": "", "ref_id": "b61", "title": "MSR-VTT: A large video description dataset for bridging video and language", "year": "2016" }, { "authors": "Tao Xu; Pengchuan Zhang; Qiuyuan Huang; Han Zhang; Zhe Gan; Xiaolei Huang; Xiaodong He", "journal": "", "ref_id": "b62", "title": "AttnGAN: Fine-grained text to image generation with attentional generative adversarial networks", "year": "2018" }, { "authors": "Tiankai Hongwei Xue; Yanhong Hang; Yuchong Zeng; Bei Sun; Huan Liu; Jianlong Yang; Baining Fu; Guo", "journal": "", "ref_id": "b63", "title": "Advancing high-resolution video-language representation with large-scale video transcriptions", "year": "2022" }, { "authors": "Wilson Yan; Yunzhi Zhang; Pieter Abbeel; Aravind Srinivas", "journal": "", "ref_id": "b64", "title": "VideoGPT: Video generation using vq-vae and transformers", "year": "2021" }, { "authors": "Binxin Yang; Shuyang Gu; Bo Zhang; Ting Zhang; Xuejin Chen; Xiaoyan Sun; Dong Chen; Fang Wen", "journal": "", "ref_id": "b65", "title": "Paint by example: Exemplar-based image editing with diffusion models", "year": "2023" }, { "authors": "Fuzhi Yang; Huan Yang; Jianlong Fu; Hongtao Lu; Baining Guo", "journal": "", "ref_id": "b66", "title": "Learning texture transformer network for image super-resolution", "year": "2020" }, { "authors": "Fuzhi Yang; Huan Yang; Yanhong Zeng; Jianlong Fu; Hongtao Lu", "journal": "", "ref_id": "b67", "title": "Degradation-guided meta-restoration network for blind super-resolution", "year": "2022" }, { "authors": "Lijun Yu; Yong Cheng; Kihyuk Sohn; José Lezama; Han Zhang; Huiwen Chang; Alexander G Hauptmann; Ming-Hsuan Yang; Yuan Hao; Irfan Essa; Lu Jiang", "journal": "", "ref_id": "b68", "title": "MAGVIT: masked generative video transformer", "year": "2022" }, { "authors": "Sihyun Yu; Kihyuk Sohn; Subin Kim; Jinwoo Shin", "journal": "", "ref_id": "b69", "title": "Video probabilistic diffusion models in projected latent space", "year": "2023" }, { "authors": "Sihyun Yu; Jihoon Tack; Sangwoo Mo; Hyunsu Kim; Junho Kim; Jung-Woo Ha; Jinwoo Shin", "journal": "", "ref_id": "b70", "title": "Generating videos with dynamics-aware implicit generative adversarial networks", "year": "2022" }, { "authors": "Han Zhang; Tao Xu; Hongsheng Li", "journal": "", "ref_id": "b71", "title": "StackGAN: Text to photo-realistic image synthesis with stacked generative adversarial networks", "year": "2017" }, { "authors": "Yulun Zhang; Kunpeng Li; Kai Li; Lichen Wang; Bineng Zhong; Yun Fu", "journal": "", "ref_id": "b72", "title": "Image superresolution using very deep residual channel attention networks", "year": "2018" }, { "authors": "Daquan Zhou; Weimin Wang; Hanshu Yan; Weiwei Lv; Yizhe Zhu; Jiashi Feng", "journal": "", "ref_id": "b73", "title": "MagicVideo: Efficient video generation with latent diffusion models", "year": "2022" } ]
[ { "formula_coordinates": [ 4, 213.08, 606.84, 291.59, 38.99 ], "formula_id": "formula_0", "formula_text": "(x, y) = softmax( QK T √ d ) • V , with Q = W (i) Q • x, K = W (i) K • y, V = W (i) V • y,(1)" }, { "formula_coordinates": [ 4, 135.52, 654.34, 45.91, 14.22 ], "formula_id": "formula_1", "formula_text": "W (i) Q , W(i)" }, { "formula_coordinates": [ 4, 205.37, 654.34, 19.84, 11.87 ], "formula_id": "formula_2", "formula_text": "W (i)" }, { "formula_coordinates": [ 5, 192.67, 530.73, 312, 61.88 ], "formula_id": "formula_3", "formula_text": "sl = Proj l in ⊙ GN(s l ), tl = Proj l in ⊙ GN(t l ); h l = 3DW-MCA(LN(s l ), LN( tl )) + sl ; hl = FFN ⊙ LN(h l ) + h l ; z l = t l + s l + Swap-CA(s l , t l ) = t l + s l + Proj l out ( hl ),(2)" }, { "formula_coordinates": [ 5, 203.44, 690.91, 301.23, 11.03 ], "formula_id": "formula_4", "formula_text": "h l+1 = 3DSW-MCA(LN( tl+1 ), LN(s l+1 )) + tl+1 .(3)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b28", "b29", "b12", "b7", "b29", "b30", "b0", "b7" ], "table_ref": [], "text": "In recent years, Generative Adversarial Networks (GANs) [21] have been significantly developed and achieve remarkable performance, which can generate images with both high-resolution and high-quality [29,30]. Moreover, numerous studies [4, 13,36] have illustrated that the latent space learned by these models encodes a broad range of meaningful semantics, enabling the manipulation of synthesized images. Consequently, comprehending and investigating a well-trained GAN model constitutes a crucial and active research domain.\nTo edit real-world images, a widely adopted approach involves a two-stage process comprising GAN inversion and latent code editing. GAN inversion is the initial step, where an image is inverted into the latent space of a GAN model to find the corresponding latent code. Following this, the latent code is edited in a semantically meaningful manner to obtain a new code that is used to generate the edited output image. The existing works [38,6,17] primarily focus on GAN inversion, aiming to reconstruct an image from the latent code that closely resembles the input image. When performing GAN inversion, the objective is not just to faithfully reconstruct the input image, but also to enable effective image editing in the subsequent stages. However, previous studies [43] have highlighted the existence of a trade-off between image reconstruction and editing. This trade-off is known to depend primarily on the embedding space where the input image is mapped to. For instance, StyleGAN [30,31] contains two popular embedding spaces, namely the native StyleGAN W space and the extended W + space. In general, inverting an image to the W space yields excellent editability. However, it has been shown to be infeasible for faithfully reconstructing the input image. Conversely, the W + space enables more accurate reconstructions, but it is associated with limited editing capabilities.\nAs for GAN inversion, the most recent works [39,17,6] adopt two-stage strategy to achieve high reconstruction accuracy. These methods first use off-the-shelf methods [1,38] to determine an approximate latent code. Then, they augment the latent space to include the given image by slightly altering the generator. PTI [39] directly fine-tune the generator via hundreds of optimization steps, which is time-consuming. HyperInverter [17] leverages the hypernetworks to predict the residual weights of the generator, which reduces processing time but sacrifices reconstruction accuracy. Moreover, existing methods often results in unpredictable changes to appearance or identity when editing pose, due to the lack of modeling the 3D structure.\nIn this work, we address these limitations to achieve multi-view-consistent image editing while obtaining robust reconstruction accuracy and high inference speed. Our method leverages recently developed 3D-aware GAN, i.e. EG3D [10], as generator. Since EG3D is designed with the StyleGAN backbone from the ground up, it inherits the well-studied properties of the StyleGAN latent space. Similar to previous work [39,17,6], our method also adopts a two-stage strategy. In the first stage, we invert the input image to an editable latent code using off-the-shelf inversion techniques. In addition, the auxiliary network is proposed to refine the generator parameters with the given image as input. The auxiliary network consists of two part: one predicts offsets for the weights of the convolutional layers which can recover the lost details, the other predicts offsets for sampling positions of volume rendering which can rectify structural errors. In the second stage, we perform meta-learning to adapt the auxiliary network to the input image, then the final reconstructed image is synthesized via the updated auxiliary network. Different from PTI which directly fine-tune the generator, our meta-auxiliary network can adapt to a new image in few steps which significantly reduces processing time and maintains comparable performance.\nThe main contributions can be summarized as follows:\n• We present a 3D-aware GAN framework for GAN inversion and image editing. A novel auxiliary network is proposed to update the parameters of a pre-trained generator and rectify volume rendering process.\n• We are the first to incorporate meta-learning into GAN inversion. With meta-learning strategy, the auxiliary network can quickly adapt to unseen images.\n• Experimental results demonstrate the superior performance of our method compared to the existing works." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b2", "b18", "b39", "b3", "b40", "b48", "b0", "b1", "b27", "b7", "b44", "b26", "b47", "b31", "b6", "b19", "b17", "b41", "b10" ], "table_ref": [], "text": "Latent Space Manipulation The latent space of a well-trained GAN generator encapsulates numerous interpretable semantic directions, which can be utilized for image editing. Hence, multiple methods have been proposed for discovering semantic latent directions through different levels of supervision. Some methods [3,19,40] leverage semantic labels for full supervision, necessitating pre-trained attribute classifiers and being limited to known attributes.\nOther methods [24,41,44,46] adopt principal component analysis or contrastive learning to explore unique editing directions in an unsupervised manner. However, in order to apply latent manipulation to real images, GAN inversion should be first performed. GAN Inversion GAN inversion [49] is the process of locating a latent code that can be passed to the generator to reconstruct a given image. Existing methods can be roughly categorized into three groups: optimization-based, encoderbased and two-stage methods. Optimization-based methods [1,2,14,28] directly optimize the latent code to minimize the reconstruction error for a given image, which are timeconsuming but achieve high accuracy. Encoder-based methods [5,37,38,45,27] train an encoder over a large number of samples to learn a mapping from an image to its latent representation. Such methods are efficient during inference but inferior in reconstruction quality to optimization-based method. Some two-stage methods [48,8] combine both above approaches, which first encode images to initial latent codes and then optimize the latent codes. Instead of optimizing the latent codes, other two-stage methods [39,6,17] turn to fine-tune the generator. HyperStyle [6] and Hyper-Inverter [17] utilize an additional hypernetwork to refine the generator weights with respect to the given image. PTI [39] directly adopt backpropagation for weight optimization, which achieves the best reconstruction performance but requires a substantially longer time. In comparison to PTI, our method adopts meta learning to accelerate optimization process and improve reconstruction quality.\nMeta-Learning To achieve test-time adaptation without greatly increasing the cost of computation, meta-learning has been proved to be effective. Meta-learning is originally proposed in few-shot classification, which aims to learn prior knowledge across tasks [32,7,20]. Among the meta-learning systems, MAML [18] has greatly enjoyed the attention for its simplicity and generalizability. Recently, several works [35,42,11,12] 3. Method" }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [ "b0", "b1", "b27", "b39" ], "table_ref": [], "text": "GAN inversion task aims to optimize a latent code ω that can be passed to the pre-trained generator to reconstruct a given image x:\nω = arg min ω L(x, G(ω; θ G ))(1)\nwhere G(ω; θ G ) is the image reconstructed by a pre-trained generator G with parameter θ G , using the latent code ω. L is the loss objective. Solving Eq.2 via optimization-based methods [1,2,14,28] typically requires hundreds of iterations, which takes several minutes per image. To improve the performance, encoder-based methods introduce an encoder E to predict latent code as ω = E(x), which only takes a few seconds for inference. Then, a latent manipulation f can be applied over the inverted latent code ω to obtain an edited image as G(f (ω); θ G ). In practice, the commonly used latent code manipulation method is Inter-FaceGAN [40], and the manipulation f can be formulated as f (ω) = ω + αn, where α is magnitude constant and n represents semantic direction of a specific facial attribute.\nApart from finding more accurate latent code ω, recent works [39,17,6] turn to inject new identities into the wellbehaved latent space of generator. Given a target image, they first utilize existing methods to find an initial latent code ωinit leading to an approximate reconstruction. Then, either an optimization process or a hypernetwork is adopted to adapt the generator parameters to the specific image:\nθG = arg min θ G L(x, G(ω init ; θ G ))(2)\nwhere θG represents the adapted generator parameters. The final reconstruction image is obtained by utilizing the initial latent and adapted parameters as ŷ = G(ω init ; θG )." }, { "figure_ref": [ "fig_0" ], "heading": "Overview", "publication_ref": [ "b26" ], "table_ref": [], "text": "Our proposed method is designed to efficiently modify the parameters of the generator in order to adapt to unseen images, as illustrated in Fig. 2. The input of our method includes an image x, a generator G with parameters θ G , and an initial inverted latent code ωinit , which is obtained using an off-the-shelf encoder [27].\nTo achieve our goal of minimizing the objective defined in Eq. 2, we introduce an auxiliary network Aux that is responsible for predicting a new set of parameters θG for generator G. The predicted parameters are given by θG = Aux(x; θ Aux ), where θ Aux are parameters of the auxiliary network. However, the auxiliary network only takes the target image x as input without the initial output G(ω init ; θ G ), which is insufficient to infer the desired modifications. Therefore, to assist the auxiliary network, we introduce meta-learning strategy to fine-tune the auxiliary network for test-time adaptation.\nTo balance the trade-off between reconstruction and editability, it is crucial that the initial latent code resides in a well-behaved region of the latent space. To achieve this, we utilize the encoder to invert the input image into the W space, and the encoder is kept fixed during the training process. It will be demonstrated that although making adjustments around the initial latent code, the same editing methods as those used with the original generator can be applied.\nIn practice, rather than directly predicting the generator parameters, our auxiliary network predicts a set of offsets with respect to the original parameters. In addition, to balance between fidelity and efficiency, we perform metalearning with a small number of iterations (5 in this work)." }, { "figure_ref": [ "fig_2", "fig_2" ], "heading": "Model Architecture", "publication_ref": [], "table_ref": [], "text": "The goal of auxiliary network is to provide additional information which can not be recovered by the latent code. The detailed architecture of the auxiliary network is shown in Fig. 3, which can be divided into two modules.\nThe 2D-related module is to provide the missing texture information. We first employ a ResNet-101 [25] to extract intermediate features from the input image. Inspired by HyperStyle [6], a set of hypernetworks is used to predict a set of residual weights. We then use the residual weights to update the corresponding convolutional layers in the generator. Assume that the pre-trained generator has N convolutional layers with weights θ conv = (θ 1 , . . . , θ N ). We therefore propose to use several small hypernetworks H j to predict the residual weights ∆θ j of each convolutional layer, where j ∈ {1, . . . , N }. Finally, the updated generator has the convolutional weights as θconv = (θ 1 + ∆θ 1 , . . . , θ N + ∆θ N ).\nThe 3D-related module aims to rectify unaligned structure. The pre-trained generator leverage the tri-plane feature representation, which consists three axis-aligned 2D feature Evaluate ∇ θ Aux L(x i , G(x i ; θ Aux )); 6 Compute adapted parameters using ;\n7 θ Aux i ← θ Aux -α∇ θ Aux L(x i , G(x i ; θ Aux )) 8 end 9 θ Aux ← θ Aux -β∇ θ Aux xi L(x i , G(x i ; θ Aux i ))\n10 end planes. The feature of any 3D point p ∈ R 3 is queried by projecting x onto the planes, retrieving three feature vectors via bilinear interpolation, and aggregating the vectors by summation, i.e.,\nF (p) = F xy (p) + F yz (p) + F xz (p)\nwhere F ij : R 3 → R C is a function mapping 3D coordinates to features on the ij plane via projection and interpolation. The subsequent volume rendering will infer the corresponding feature F (p) with sampling point p.\nInspired by [9], we introduce a deformation function D : R 3 → R 3 to rectify coordinate p. Then, the corresponding feature of p is formulated as\nF (p) = (F xy • D)(p) + (F yz • D)(p) + (F xz • D)(p) (3)\nIn practice, we construct a similar offset tri-plane structure to realize the deformation function D. As shown in Fig. 3, we take the highest feature from 2D-related module as input followed by several StyleGAN blocks, which is modulated by the viewpoint of the input image. In this sense, the 3D-related module can reconstruct 3D features from input image. Therefore, the structural offsets can be inferred by comparing 3D features from input image and generator, which constitutes an offset tri-plane structure. Then, the deformation D can be written as\n∆p = D xy (p) + D yz (p) + D xz (p)(4)\nwhere D ij : R 3 → R 3 is a function mapping 3D coordinates to offsets and the deformed coordinate can be written as D(p) = p + ∆p. Consequently, the adapted generator parameters θG consist of both adapted weights θconv and sampling point deformation D." }, { "figure_ref": [], "heading": "Meta-Learning Strategy", "publication_ref": [ "b17" ], "table_ref": [], "text": "The results obtained from the above architecture is suboptimal since it only exploits the external data and does not take advantage of internal information from test images. Therefore, we introduce meta-learning strategy to learn model parameters to facilitate test-time adaptation. For a test image, our auxiliary network is update and adapted to the specific test image. It is necessary to be noted that since the pre-trained generator contains diverse facial information, directly applying meta-learning strategy to the generator's parameters would damage the prior information.\nIn this paper, we adopt the model-agnostic meta-learning (MAML) [18] approach. MAML can find a good initialization of the parameters that are sensitive to changes in task, so that small update can make large improvements. The entire training procedure is listed in Alg. 1. For one gradient update, new adapted parameters is\nθ Aux i = θ Aux -α∇ θ Aux L(x, G(x; θ Aux )) (5\n)\nwhere α is the task-level learning rate. In training stage, the model parameters θ Aux are optimized to achieve minimal test error. Concretely, the meta-objective is arg min\nθ Aux xi∼p(x) L(x i , G(x i ; θ Aux i ))(6)\nMeta-learning optimization is performed using Eq. 6, which is to learn the knowledge across task. Any gradientbased optimization can be used for meta-learning training. For stochastic gradient descents, the parameter update rule is expressed as\nθ Aux = θ Aux -β∇ θ Aux xi∼p(x) L(x i , G(x i ; θ Aux i )) (7)\nwhere β is the meta-learning rate. The above process is the training phase of meta-learning, which optimizes the auxiliary network so that it learns how to adapt to unseen samples. During the inference phase, only task-level update shown in Eq. 5 is performed." }, { "figure_ref": [], "heading": "Training Loss", "publication_ref": [ "b14" ], "table_ref": [], "text": "Similar to previous methods, our training is guided by an image-space reconstruction objective. Particularly, the final loss objective is defined as:\nL = L 2 + λ lpips L LP IP S + λ id L id + λ adv L adv (8\n)\nwhere λ lpips , λ id , λ adv are constants. The pixel-wise loss L 2 is defined as:\nL 2 (x, ŷ) = x -ŷ 2 (9)\nThe LPIPS [47] loss utilize the perceptual feature extractor P to learn perceptual similarities, which is defined as:\nL LP IP S (x, ŷ) = P (x) -P (ŷ) 2 (10)\nThe identity loss uses the pre-trained ArcFace [15] network R to measure cosine similarity:\nL id (x, ŷ) = 1 -R(x) -R(ŷ)(11)\nwhere • denotes cosine similarity. And the adversarial loss L adv is constructed following StyleGAN." }, { "figure_ref": [], "heading": "Method", "publication_ref": [ "b7", "b26" ], "table_ref": [], "text": "MSE↓ LPIPS↓ FID↓ Time(s)↓ pSp [38] 0.0250 0.1500 25.00 1.72 StyleTransformer [27] " }, { "figure_ref": [], "heading": "Experiment", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experimental Settings", "publication_ref": [ "b29", "b28", "b7", "b26", "b29" ], "table_ref": [], "text": "Datasets and Baselines We conduct the experiments on the FFHQ [30] as training dataset, and the CelebA-HQ [29] as testing dataset. We compare our results with the stateof-the-art encoder-based methods pSp [38] and StyleTransformer [27]. For two-stage methods, we choose PTI [39] and HyperInverter [17] to compare with our results. It is worth noted that pSp and StyleTransformer employ W + space, while PTI and HyperInverter choose W space. Specially, the above methods are originally implemented on StyleGAN2 [30]. As EG3D [10] is designed with Style-GAN2 backbone from the ground up, the above methods can be directly employed with EG3D as generator. Therefore, for fair comparisons, we use the official configurations to fine-tune these methods on EG3D. Implementation Details In our experiments, the pretrained EG3D generators being used are obtained directly from EG3D [10] repository. To stabilize the meta-learning scheme, we divide the whole training procedure into two phase. In the first phase, we train the auxiliary network directly and obtain approximate parameters. In the second phase, we use the above parameter as initialization and train with meta-learning. For constants in the loss objective, we set λ lpips = 0.8, λ id = 0.1 and λ adv = 0.005." }, { "figure_ref": [], "heading": "Reconstruction Results", "publication_ref": [ "b25" ], "table_ref": [], "text": "Quantitative Results We use several metrics to measure the reconstruction quality of our method compared with existing, including pixel-wise MSE, perceptual LPIPS [47] and FID [26] as shown in Tab. 1. Compared with encoderbased methods (pSp and StyleTransformer), our method is able to reconstruct out-of-domain visual details, such as clothing and hairstyles. Although two-stage method PTI also utilizes optimization technique and achieve accurate reconstructions, PTI requires hundreds of optimization steps and comes with a high computational cost. On the contrary, our method can obtain comparable performance in few iteration steps, which demonstrates the effectiveness of metalearning. In addition, the encoder-based two-phase method HyperInverter can not fully utilize the information to update the generator, while our method significantly outperform HyperInverter using optimization. " }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We visualize the reconstruction results in Fig. 4. Compared with optimization-based method PTI, our method can achieve visually comparable results with an inference time several orders of magnitude faster. Further, compared with encoder-based method, our method better preserve texture details and better capture the input identity, such as hairstyles and clothing." }, { "figure_ref": [], "heading": "Editing Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Quantitative Results", "publication_ref": [ "b39" ], "table_ref": [], "text": "We first perform a quantitative evaluation for editing ability. Following previous works [39,17], we present an experiment to test the effect of editing operator with the same editing magnitude on the latent code inverted by different inversion methods. We opt for age and gender for two editing directions in this experiment. Given the latent code ω, we apply the editing operator to obtain the new latent code as ω edit = ω + α * n where α is the editing magnitude and n is the semantic direction learned by InterFaceGAN [40]. To quantitatively evaluate the editing ability, we measure the amount of age change for age edit and gender edit when applying the same α on each baseline. The result are shown in Tab. 2. Since our method works on highly editable W space, the editability of our method outperform single-stage encoder-based method. In addition, our method achieve comparable performance compared with other W space methods." }, { "figure_ref": [ "fig_5" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "We demonstrate the qualitative results for editing in Fig. 5. As our reconstruction is more robust than encoder-based methods, it allows better editing results. As can be seen, our work can perform reasonable edits while still preserving faithfully non-editing. W + space methods (pSp and StyleTransformer) invert the input image into poorly-behaved latent regions. Therefore, their editing is less meaningful and introduces significant artifacts. In contrast, our method produces significant editing effects with fewer artifact, which demonstrates the superior editing ability in the well-behaved W space. Specially, we also compare our method with previous works in the supplementary material, which utilize StyleGAN as generator. It is obvious that StyleGAN struggle to synthesize images from different viewing points, while our 3D-based GAN editing method can easily deal with different poses." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "Is meta-auxiliary network required?\nThe row (2) in Tab. 3 demonstrates the importance of our meta-auxiliary network. To be specific, meta-auxiliary network aims to adapt the parameters of the generator to the given image, then experiment of row (2) directly fine-tune the parameters. As can be seen, the meta-auxiliary network can significantly accelerate optimization process and promote reconstruction performance. of the meta-learning strategy as row (3) in Tab. 3. In this experiment, we directly train the auxiliary network without the subsequently meta-learning process. During inference, we fine-tune the parameters of the auxiliary network. The results listed in row ( 1) and ( 3) show that our full version with meta-learning surpasses the simplified ones.\nHow important are the modules in Auxiliary network?\nIn the framework of our auxiliary network, we design two domain-specific module: 2D-related module and 3D-related module. We separately investigate the effectiveness of the two modules in Tab. 3. In row (4), we remove 2D-related module which mainly provide missing texture details. The result shows that the missing details significantly degrade reconstruction performance. In addition, we remove 3Drelated module which mainly rectify structural misalignment in row (5). As can be seen, the 3D information also promote reconstruction result but not as important as 2Drelated module." }, { "figure_ref": [], "heading": "Additional Study", "publication_ref": [], "table_ref": [], "text": "Generator Adaptation Although our method builds on 3D-GAN generator, the meta-auxiliary network is also effective in previous 2D-GAN generator, e.g., StyleGAN. In order to demonstrate the superior of the proposed method, we take StyleGAN as generator and compared with previous 2D-specific inversion methods. Specially, we only preserve 2D-related module in our auxiliary network, while adopting the same meta-learning strategy. The detailed experiments are listed in the supplementary material.\nEditing Profile Images Unlike 2D-GAN generator, 3D-GAN generator adopts 3D representation and contains information of 3D solid but not texture from a specific viewing point. However, GAN inversion normally takes a single image as input, which lacks comprehensive 3D information. The above issue would lead to unexpected results when editing profile images, as shown in the first row of Fig. 6. To deal with profile images, a simple but effective way is designing a loss function taking 3D structure into consideration. In this work, we have tried Flip Loss. Assume the input profile image is I P , then we can obtain its horizontal flip image I f P . Obviously, the image I f P is similar to the real image from the symmetrical viewing point. In addition, the 3D-GAN generator can easily obtain the inversion result from the corresponding symmetrical viewing Figure 6: Results of editing profile images. In the first row, we adopts Equ. 8 as loss function. In the second row, we add an additional Flip Loss, as listed in Equ. 12. point, denoted as Îf P . Then, we can use I f P and Îf P to construct a new loss function which take another viewing point into count. Considering I f P may not be the same as the real image, the widely adopted L 2 loss is not suitable. In this work, we directly use LPIPS loss and GAN loss, which is the same as loss in Equ. 8. Then the overall loss function would be\nL f = λ 1 L lpips (I f P , Îf P ) + λ 2 L adv (I f P , Îf P )(12)\nwhere λ 1 adn λ 2 is constants. The overall loss function is L all = L + L f . With the Flip Loss, the reconstruction result is shown in the second row of Fig. 6. As we can seen, the Flip Loss introduce additional constraint on 3D structure and significantly promote editing performance of profile image." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we introduce meta-auxiliary network, a novel framework for GAN inversion. We leverage metalearning training strategy to accelerate optimization process, which achieve optimization-level reconstructions at encoder-like inference times. The proposed method successfully reduce performance and efficiency gap between encoder-based method and optimization-based method. In addition, we introduce 3D-GAN generator into GAN inversion, which can preserve identity consistency when rotate viewing point. Our method can edit facial attributes and rotate pose simultaneously, which remains a difficult issue for 2D-GAN inversion. Specially, the proposed method performs well both in 2D-GAN and 3D-GAN, demonstrating excellent generalization ability. In summary, we believe this approach to be an essential step towards interactive and semantic in-the-wild image editing and may open the door for many intriguing real-world scenarios." } ]
Figure 1: Image Inversion and editing results of our model on CelebA-HQ dataset. From the left to right, we show results on inversion and attribute editing from different viewpoints. From the top to down, we show editing results on different facial attribute. The proposed method use an auxiliary network to enhance the ability of a pre-trained 3D GAN, which allows it to not only predict accurate inversion results, but also provides flexible editing results disentangled from viewpoint.
Meta-Auxiliary Network for 3D GAN Inversion
[ { "figure_caption": "Figure 2 :2Figure2: Given an image x, we begin with an approximate latent code ωinit predicted by an off-the-shelf encoder. With the image x as input, our auxiliary network predicted a set of offsets ∆θ, which are used to modulate both StyleGAN's parameters and volume rendering. To assist the auxiliary network, we propose a meta-learning training scheme for fast adaptation, which can update auxiliary network during inference. With the meta-learned auxiliary network, the generator G will be parameterized with new params θG and generated final output ŷ.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "extent the MAML scheme to low-level vision tasks. It allows the pre-trained model to be optimized in a way such that it can quickly adapt to any given image. The generalizability of its model-agnostic algorithm motivates us to integrate test-time adaptation into GAN inversion using MAML training scheme.Generative 3D-aware image synthesis Building on the success of 2D image-based GANs[30,31], recent efforts have focused on training 3D-aware multi-views con-sistent GANs from collections of single-view 2D images in an unsupervised manner. Achieving this challenging goal requires a combination of a neural scene representation and differentiable rendering algorithm. Recent work in this domain builds on representation using meshes [33], voxel grids[23], multiple planes [16, 10], or a combination of low-resolution voxel grids with 2D CNN-based layers[34, 22]. Among these methods, the current SOTA method EG3D [10] uses an tri-plane-based volume representation combined with volume rendering. In this study, we extend GAN inversion from 2D-based StyleGAN to 3D-based EG3D, which allows the generator to synthesize images in multiple views and apply latent space manipulation.", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Auxiliary network design. The network consists of 2D-related module and 3D-related module. The network first encode the input image into features. Then the 2Drelated module predicts the residual weights ∆θ for convolutional layers, while the 3D-related module predicts an offset tri-plane structure for coordinate deformation in volume rendering.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Algorithm 1 :1Meta learning algorithm Require: p(x): uniform distributions over images Require: α, β: step size hyper-parameters 1 Initialize parameters θ Aux ; 2 while not converged do 3 Sample a batch of images x i ∼ p(x); 4 foreach i do 5", "figure_data": "", "figure_id": "fig_3", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Qualitative reconstruction comparison results.", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative editing comparison results.", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "", "figure_data": "", "figure_id": "", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Quantitative results for image reconstruction.", "figure_data": "0.0249 0.1501 26.811.56HyperInverter[17]0.0206 0.1313 18.922.22PTI[39]0.0135 0.1011 18.48 96.23Ours0.0148 0.0718 14.574.11", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Quantitative evaluation of editable. We measure the amount of age edit and gender edit when applying the same editing magnitude α on each method.", "figure_data": "Our method successfully", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study.", "figure_data": "Is meta-learning required?We test the effectiveness", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Bangrui Jiang; Zhenhua Guo; Yujiu Yang
[ { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b0", "title": "Image2stylegan: How to embed images into the stylegan latent space", "year": "2019" }, { "authors": "Rameen Abdal; Yipeng Qin; Peter Wonka", "journal": "", "ref_id": "b1", "title": "Image2stylegan++: How to edit the embedded images", "year": "2020" }, { "authors": "Rameen Abdal; Peihao Zhu; J Niloy; Peter Mitra; Wonka", "journal": "ACM Transactions on Graphics (ToG)", "ref_id": "b2", "title": "Styleflow: Attribute-conditioned exploration of stylegangenerated images using conditional continuous normalizing flows", "year": "2021" }, { "authors": "Yuval Alaluf; Or Patashnik; Daniel Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b3", "title": "Only a matter of style: Age transformation using a style-based regression model", "year": "2021" }, { "authors": "Yuval Alaluf; Or Patashnik; Daniel Cohen-Or", "journal": "", "ref_id": "b4", "title": "Restyle: A residual-based stylegan encoder via iterative refinement", "year": "2021" }, { "authors": "Yuval Alaluf; Omer Tov; Ron Mokady; Rinon Gal; Amit Bermano", "journal": "", "ref_id": "b5", "title": "Hyperstyle: Stylegan inversion with hypernetworks for real image editing", "year": "2022" }, { "authors": "Sungyong Baik; Janghoon Choi; Heewon Kim; Dohee Cho; Jaesik Min; Kyoung Mu; Lee ", "journal": "", "ref_id": "b6", "title": "Meta-learning with taskadaptive loss function for few-shot learning", "year": "2021" }, { "authors": "David Bau; Jun-Yan Zhu; Jonas Wulff; William Peebles; Hendrik Strobelt; Bolei Zhou; Antonio Torralba", "journal": "", "ref_id": "b7", "title": "Seeing what a gan cannot generate", "year": "2019" }, { "authors": "Petr Alexander W Bergman; Yifan Kellnhofer; Eric R Wang; David B Chan; Gordon Lindell; Wetzstein", "journal": "", "ref_id": "b8", "title": "Generative neural articulated radiance fields", "year": "2022" }, { "authors": "Connor Z Eric R Chan; Matthew A Lin; Koki Chan; Boxiao Nagano; Shalini De Pan; Orazio Mello; Leonidas J Gallo; Jonathan Guibas; Sameh Tremblay; Khamis", "journal": "", "ref_id": "b9", "title": "Efficient geometry-aware 3d generative adversarial networks", "year": "2022" }, { "authors": "Zhixiang Chi; Yang Wang; Yuanhao Yu; Jin Tang", "journal": "", "ref_id": "b10", "title": "Testtime fast adaptation for dynamic scene deblurring via metaauxiliary learning", "year": "2021" }, { "authors": "Myungsub Choi; Janghoon Choi; Sungyong Baik; Tae ; Hyun Kim; Kyoung Mu; Lee ", "journal": "", "ref_id": "b11", "title": "Scene-adaptive video frame interpolation via meta-learning", "year": "2020" }, { "authors": "Edo Collins; Raja Bala; Bob Price; Sabine Susstrunk", "journal": "", "ref_id": "b12", "title": "Editing in style: Uncovering the local semantics of gans", "year": "2020" }, { "authors": "Antonia Creswell; Anil Anthony; Bharath ", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b13", "title": "Inverting the generator of a generative adversarial network", "year": "2018" }, { "authors": "Jiankang Deng; Jia Guo; Niannan Xue; Stefanos Zafeiriou", "journal": "", "ref_id": "b14", "title": "Arcface: Additive angular margin loss for deep face recognition", "year": "2019" }, { "authors": "Yu Deng; Jiaolong Yang; Jianfeng Xiang; Xin Tong", "journal": "", "ref_id": "b15", "title": "Gram: Generative radiance manifolds for 3d-aware image generation", "year": "2022" }, { "authors": "Anh Tan M Dinh; Rang Tuan Tran; Binh-Son Nguyen; Hua", "journal": "", "ref_id": "b16", "title": "Hyperinverter: Improving stylegan inversion via hypernetwork", "year": "2022" }, { "authors": "Chelsea Finn; Pieter Abbeel; Sergey Levine", "journal": "PMLR", "ref_id": "b17", "title": "Modelagnostic meta-learning for fast adaptation of deep networks", "year": "2017" }, { "authors": "Lore Goetschalckx; Alex Andonian; Aude Oliva; Phillip Isola", "journal": "", "ref_id": "b18", "title": "Ganalyze: Toward visual definitions of cognitive image properties", "year": "2019" }, { "authors": "Micah Goldblum; Steven Reich; Liam Fowl; Renkun Ni; Valeriia Cherepanova; Tom Goldstein", "journal": "PMLR", "ref_id": "b19", "title": "Unraveling metalearning: Understanding feature representations for few-shot tasks", "year": "2020" }, { "authors": "Ian Goodfellow; Jean Pouget-Abadie; Mehdi Mirza; Bing Xu; David Warde-Farley; Sherjil Ozair; Aaron Courville; Yoshua Bengio", "journal": "Communications of the ACM", "ref_id": "b20", "title": "Generative adversarial networks", "year": "2020" }, { "authors": "Jiatao Gu; Lingjie Liu; Peng Wang; Christian Theobalt", "journal": "", "ref_id": "b21", "title": "Stylenerf: A style-based 3d aware generator for highresolution image synthesis", "year": "2021" }, { "authors": "Zekun Hao; Arun Mallya; Serge Belongie; Ming-Yu Liu", "journal": "", "ref_id": "b22", "title": "Gancraft: Unsupervised 3d neural rendering of minecraft worlds", "year": "2021" }, { "authors": "Erik Härkönen; Aaron Hertzmann; Jaakko Lehtinen; Sylvain Paris", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b23", "title": "Ganspace: Discovering interpretable gan controls", "year": "2020" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "IEEE", "ref_id": "b24", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Martin Heusel; Hubert Ramsauer; Thomas Unterthiner; Bernhard Nessler; Sepp Hochreiter", "journal": "Advances in neural information processing systems", "ref_id": "b25", "title": "Gans trained by a two time-scale update rule converge to a local nash equilibrium", "year": "2017" }, { "authors": "Xueqi Hu; Qiusheng Huang; Zhengyi Shi; Siyuan Li; Changxin Gao; Li Sun; Qingli Li", "journal": "", "ref_id": "b26", "title": "Style transformer for image inversion and editing", "year": "2022" }, { "authors": "Kyoungkook Kang; Seongtae Kim; Sunghyun Cho", "journal": "", "ref_id": "b27", "title": "Gan inversion for out-of-range images with geometric transformations", "year": "2021" }, { "authors": "Tero Karras; Timo Aila; Samuli Laine; Jaakko Lehtinen", "journal": "", "ref_id": "b28", "title": "Progressive growing of gans for improved quality, stability, and variation", "year": "2017" }, { "authors": "Tero Karras; Samuli Laine; Timo Aila", "journal": "", "ref_id": "b29", "title": "A style-based generator architecture for generative adversarial networks", "year": "2019" }, { "authors": "Tero Karras; Samuli Laine; Miika Aittala; Janne Hellsten; Jaakko Lehtinen; Timo Aila", "journal": "", "ref_id": "b30", "title": "Analyzing and improving the image quality of stylegan", "year": "2020" }, { "authors": "Kwonjoon Lee; Subhransu Maji; Avinash Ravichandran; Stefano Soatto", "journal": "", "ref_id": "b31", "title": "Meta-learning with differentiable convex optimization", "year": "2019" }, { "authors": "Yiyi Liao; Katja Schwarz; Lars Mescheder; Andreas Geiger", "journal": "", "ref_id": "b32", "title": "Towards unsupervised learning of generative models for 3d controllable image synthesis", "year": "2020" }, { "authors": "Michael Niemeyer; Andreas Geiger", "journal": "", "ref_id": "b33", "title": "Giraffe: Representing scenes as compositional generative neural feature fields", "year": "2021" }, { "authors": "Seobin Park; Jinsu Yoo; Donghyeon Cho; Jiwon Kim; Tae Hyun; Kim ", "journal": "Springer", "ref_id": "b34", "title": "Fast adaptation to super-resolution networks via meta-learning", "year": "2020" }, { "authors": "Or Patashnik; Zongze Wu; Eli Shechtman; Daniel Cohen-Or; Dani Lischinski", "journal": "", "ref_id": "b35", "title": "Styleclip: Text-driven manipulation of stylegan imagery", "year": "2021" }, { "authors": "Stanislav Pidhorskyi; Donald A Adjeroh; Gianfranco Doretto", "journal": "", "ref_id": "b36", "title": "Adversarial latent autoencoders", "year": "2020" }, { "authors": "Elad Richardson; Yuval Alaluf; Or Patashnik; Yotam Nitzan; Yaniv Azar; Stav Shapiro; Daniel Cohen-Or", "journal": "", "ref_id": "b37", "title": "Encoding in style: a stylegan encoder for image-to-image translation", "year": "2021" }, { "authors": "Daniel Roich; Ron Mokady; H Amit; Daniel Bermano; Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b38", "title": "Pivotal tuning for latent-based editing of real images", "year": "2022" }, { "authors": "Yujun Shen; Jinjin Gu; Xiaoou Tang; Bolei Zhou", "journal": "", "ref_id": "b39", "title": "Interpreting the latent space of gans for semantic face editing", "year": "2020" }, { "authors": "Yujun Shen; Bolei Zhou", "journal": "", "ref_id": "b40", "title": "Closed-form factorization of latent semantics in gans", "year": "2021" }, { "authors": "Jae Woong Soh; Sunwoo Cho; Nam Ik Cho", "journal": "", "ref_id": "b41", "title": "Metatransfer learning for zero-shot super-resolution", "year": "2020" }, { "authors": "Omer Tov; Yuval Alaluf; Yotam Nitzan; Or Patashnik; Daniel Cohen-Or", "journal": "ACM Transactions on Graphics (TOG)", "ref_id": "b42", "title": "Designing an encoder for stylegan image manipulation", "year": "2021" }, { "authors": "Andrey Voynov; Artem Babenko", "journal": "PMLR", "ref_id": "b43", "title": "Unsupervised discovery of interpretable directions in the gan latent space", "year": "2020" }, { "authors": "Tengfei Wang; Yong Zhang; Yanbo Fan; Jue Wang; Qifeng Chen", "journal": "", "ref_id": "b44", "title": "High-fidelity gan inversion for image attribute editing", "year": "2022" }, { "authors": "Enis Oguz Kaan Yüksel; Ezgi Simsar; Pinar Gülperi Er; Yanardag", "journal": "", "ref_id": "b45", "title": "Latentclr: A contrastive learning approach for unsupervised discovery of interpretable directions", "year": "2021" }, { "authors": "Richard Zhang; Phillip Isola; Alexei A Efros; Eli Shechtman; Oliver Wang", "journal": "", "ref_id": "b46", "title": "The unreasonable effectiveness of deep features as a perceptual metric", "year": "2018" }, { "authors": "Jiapeng Zhu; Yujun Shen; Deli Zhao; Bolei Zhou", "journal": "Springer", "ref_id": "b47", "title": "Indomain gan inversion for real image editing", "year": "2020" }, { "authors": "Jun-Yan Zhu; Philipp Krähenbühl; Eli Shechtman; Alexei A Efros", "journal": "Springer", "ref_id": "b48", "title": "Generative visual manipulation on the natural image manifold", "year": "2016" } ]
[ { "formula_coordinates": [ 3, 369.41, 547.36, 175.7, 18.14 ], "formula_id": "formula_0", "formula_text": "ω = arg min ω L(x, G(ω; θ G ))(1)" }, { "formula_coordinates": [ 4, 101.28, 641.84, 185.08, 19.78 ], "formula_id": "formula_1", "formula_text": "θG = arg min θ G L(x, G(ω init ; θ G ))(2)" }, { "formula_coordinates": [ 5, 54, 187.34, 217.42, 44.12 ], "formula_id": "formula_2", "formula_text": "7 θ Aux i ← θ Aux -α∇ θ Aux L(x i , G(x i ; θ Aux )) 8 end 9 θ Aux ← θ Aux -β∇ θ Aux xi L(x i , G(x i ; θ Aux i ))" }, { "formula_coordinates": [ 5, 132.08, 308.67, 154.29, 9.65 ], "formula_id": "formula_3", "formula_text": "F (p) = F xy (p) + F yz (p) + F xz (p)" }, { "formula_coordinates": [ 5, 55.09, 412.9, 231.27, 9.65 ], "formula_id": "formula_4", "formula_text": "F (p) = (F xy • D)(p) + (F yz • D)(p) + (F xz • D)(p) (3)" }, { "formula_coordinates": [ 5, 94.89, 561.58, 191.47, 9.65 ], "formula_id": "formula_5", "formula_text": "∆p = D xy (p) + D yz (p) + D xz (p)(4)" }, { "formula_coordinates": [ 5, 341.51, 228.5, 199.73, 12.69 ], "formula_id": "formula_6", "formula_text": "θ Aux i = θ Aux -α∇ θ Aux L(x, G(x; θ Aux )) (5" }, { "formula_coordinates": [ 5, 541.24, 230.89, 3.87, 8.64 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 5, 363.99, 286.32, 181.12, 22.6 ], "formula_id": "formula_8", "formula_text": "θ Aux xi∼p(x) L(x i , G(x i ; θ Aux i ))(6)" }, { "formula_coordinates": [ 5, 316.47, 381.07, 228.64, 22.6 ], "formula_id": "formula_9", "formula_text": "θ Aux = θ Aux -β∇ θ Aux xi∼p(x) L(x i , G(x i ; θ Aux i )) (7)" }, { "formula_coordinates": [ 5, 324.75, 536.23, 216.49, 9.65 ], "formula_id": "formula_10", "formula_text": "L = L 2 + λ lpips L LP IP S + λ id L id + λ adv L adv (8" }, { "formula_coordinates": [ 5, 541.24, 536.55, 3.87, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 5, 384.35, 582.51, 160.77, 9.65 ], "formula_id": "formula_12", "formula_text": "L 2 (x, ŷ) = x -ŷ 2 (9)" }, { "formula_coordinates": [ 5, 357.25, 628.79, 187.86, 9.65 ], "formula_id": "formula_13", "formula_text": "L LP IP S (x, ŷ) = P (x) -P (ŷ) 2 (10)" }, { "formula_coordinates": [ 5, 362.22, 675.08, 182.89, 9.65 ], "formula_id": "formula_14", "formula_text": "L id (x, ŷ) = 1 -R(x) -R(ŷ)(11)" }, { "formula_coordinates": [ 8, 333.04, 368.43, 212.07, 13.83 ], "formula_id": "formula_15", "formula_text": "L f = λ 1 L lpips (I f P , Îf P ) + λ 2 L adv (I f P , Îf P )(12)" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b18", "b29", "b29", "b9", "b29", "b6" ], "table_ref": [], "text": "A low-light image can be defined as an image captured in deficient illumination conditions that do not fully excite the detector. Hence, the output image is not even close to have ideal histogram distributions. In such conditions, a dedicated algorithm enhancing the image is needed to present a better image and to to help increasing the performance of consequent blocks which are trained or tuned under normal lightning conditions, such as object detection, etc.\nGlobal or local image histogram equalization techniques [4,36] are the first-thought candidates to solve the low-light image enhancement problem. However, they do not employ spatial information and work in pixel level that does not include surrounding content information. On the other hand, using deep neural network architectures, spatial information can be utilized and combined with color information in different scales. Therefore, deep neural architectures recently provide superior performance for low-light image enhance- ment problem as in most of other low-level and high-level vision problems.\nFor most of the applications, low-light image enhancement should be implemented in image signal processing(ISP) framework of a camera. ISP frameworks process the data in real-time, and include some certain blocks. Hence, it can be argued that the most crucial requirements for LLIE should be computational memory and complexity while not sacrificing from the visual quality. In recent years, there are some studies [19,30] that handles all the blocks of ISP blocks in a single network. However, it is not easy to deploy such a network of multi-million parameters in an edge device, i.e. surveillance camera. Therefore, it is crucial to have lightweight blocks to handle unusual conditions in ISP framework. The proposed solution is a good candidate for such cases to deploy in an edge device.\nInspired by several recent studies [3,30], we propose a feather-light network with carefully designed blocks to match the problem's nature at the hand instead of throwing all of the information into a huge network and hope to get the right output by some \"deep magic\". As with the previous studies, we model the LLI as an image generated as reflectance multiplied by illumination. For this purpose, first, we strive to achieve pixelwise scene illumination. Using such an approach, we achieve an equalized the image histogram while handling uneven illumination at the same time. Since the input signal has very low SNR, there exist inherent noise in the input and this becomes visible with the illumination adjustment, the noise and color inconsistencies should be taken care of by the subsequent blocks.\nTo solve these problems, we utilize the ideas presented in recent approaches [8, 10,30] that addresses these problems using channel and spatial attention blocks [7,25]. In simple terms these attention mechanisms enable selective feature extraction for image enhancement and denoising.\nWe propose a novel efficient neural network architecture named FLIGHT-Net is proposed for low-light image enhancement problem. It is shown that FLIGHT-Net gives outstanding results compared the state of the art. To the best of our knowledge, it is the lightest network that achieves great balance between run-time and performance among supervised learning LLIE methods." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b3", "b5", "b5", "b8", "b22", "b4", "b11", "b34", "b22", "b4", "b11", "b34", "b4", "b11", "b34", "b19", "b30", "b31" ], "table_ref": [], "text": "As in high-level vision tasks, the solutions based on deep neural architectures provide most successful results for image restoration tasks. Before deep learning era, traditional methods also give satisfactory results up to a point. Global and local histogram equalization methods [4,36] are most well-known solutions for the LLIE problem. Furthermore, Retinex theory increased the understanding of the problem from a more theoretical point of view. Following Retinex theory, in LIME [6], the overall solution is based on illumination map estimation (IME) approach. Although the IME approach is inspiring, being a traditional method LIME [6] failed to generalize well on different scenarios. On the other hand, IME block kept its existence on different deep learning approaches such as [3,29] and it is also a part of our proposed solution.\nStarting from the pioneer work [17], the deep learning based LLIE methods can be categorized according to general deep learning strategies. In other words, deep LLIE methods can be divided into four main categories as supervised learning [3,22,23,26,34], semi-supervised learning, zero-shot learning [5,12,35] and unsupervised learning. Although the number of other types of methods is also notable, the core part of the literature is formed by supervised and zero-shot learning based methods.\nIn supervised learning based methods, there are two main approaches. The first approach is to extract the en-hanced image by using a single end-to-end network [14,22]. The second and mostly utilized approach is to design the subnetworks according to the Retinex theory [11]. In these approaches, subnetworks are designed to reconstruct the illumination and refleftance parts of the enhanced image. In [23], two main blocks called Decom-Net and Enhance-Net are used to extract illumination and reflectance maps and then adjust the illumination according to the decomposed maps. Illumination adjustment is handled by pixelwise enhancement block in [3], while color correction is solved using a transformer block.\nIt is not easy to build a setup with the ground truth and low-light image pair required by the problem. Therefore, more recently, zero-shot learning methods [5,12,15,35] are proposed for LLIE problem. Zero-DCE and its extended version [5,12] solve the problem by predicting a set of highorder curves for a given image. In [35], a generative strategy is applied to decompose the illumination and reflectance components. After the decomposition, enhanced image is obtained by processing the illumination component. RUAS [15] utilizes again Retinex theory and neural architecture search strategy to determine the basic blocks called illumination estimation and noise removal module. Low-light enhancement problem is also elaborated as a sub-problem in some recent image restoration studies [20,31,32]." }, { "figure_ref": [], "heading": "Proposed Method", "publication_ref": [], "table_ref": [], "text": "LLIE can be posed as a reconstruction of the ideal image which is under the ideal light conditions. In this section, the motivation and formulation of FLIGHT-Net are introduced. Then the details of the network are presented." }, { "figure_ref": [], "heading": "Problem Formulation", "publication_ref": [ "b34" ], "table_ref": [], "text": "The proposed method and formulation are inspired by Retinex theory [11] and ISP framework [9]. As it is known, in the Retinex theory, an image consists of reflection and illumination via equation 1:\nI = R.L(1)\nwhere I, R, L donate image, reflectance and illumination. Retinex-based deep learning methods estimate an illumination map using low-light image [35]. Estimated illumination maps which is required to obtain a normal light image from a low light image, transform the image on a pixel-bypixel basis. Furthermore, considering the image acquisition from the camera, a series of transformations are applied to the linear raw RGB image, which makes obtaining normallight image from low-light image more challenging. The transformation can be expressed as in equation 2. where S denotes camera input and f I S P denotes the whole transformation function obtaining sRGB image from sensor input, i.e. raw data. The quality of an low-light image is affected by two main factors: low illumination and incamera noise. The sensor input in the sensor layer produces a Raw linear RGB image that is linearly related to the am-bient light. To obtain an image similar to a standard light image, the raw linear RGB image can be enhanced using an appropriate gain ratio, eliminating the first degradation of the low-light image. However, the second degradation, in-camera noise, increases linearly with the gain ratio and distorts the image.\nI = f I S P (S)(2)\nAnother crucial issue when working with low-light sRGB images is that raw image is processed through a series of non-linear operations such as white balance, gamma correction, noise reduction, contrast enhancement, and edge enhancement, hence it is no longer possible to obtain an normal-light sRGB image with the appropriate gain coefficient using the low-light sRGB image. Cui et.al [3] explains this by stating that the actual luminance degradation occurs in the raw-RGB space in the ISP framework and proposed the IAT network design, which is inspired and characterized the ISP process.\nInspired by Retinex theory [11], the ISP framework [9] and IAT network [3], a new deep learning network called FLIGHT-Net for low-light image enhancement is proposed. FLIGHT-Net is comprised of two primary network blocks: the Scene Dependent Illumination Adjustment (SDIA) block, and the Global ISP block (GISP). The SDIA block modifies the input in a pixel-wise manner, while the GISP block transforms the image globally. The formulation of the proposed method is shown in equation 3:\nN LI = f GI S P (f S D I A (LLI))(3)\nwhere LLI, N LI, f S D I A , f GI S P represent low-light image, normal-light image, SDIA and GISP blocks respectively. SDIA block is composed of two separate blocks: illumination map estimator and gain estimator. As stated earlier, a low-light image is an image in which the sensor receives less light than a normal-light image. The gain estimator predicts the required gain ratio necessary to facilitate image enhancement. However, an accurate gain ratio for LLIE is not possible due to the ISP applied when capturing the sRGB image. Therefore, the image is transformed using a suitable map (IM), which is the output of the illumination map estimator (IME) block to increase the effect of the gain value. As a result, the input image is mapped to the latent feature space by utilizing the SDIA.\nDuring the process of obtaining the normal-light image from the low-light image, in the SDIA, noise becomes more apparent. This is due to the inherent noise in the low-light image due to low PSNR and when low-light is illumination adjusted the noise is also becomes visible and deteriorates the visual quality. However, since the noise is visible along with the scene itself another network might be selectively diminish the noise while preserving or even enhancing the scene. Indeed our GISP block suppresses the noise while keeping structures intact through embodied attention mechanisms as shown 2c. We can consider GISP block as a transformation from a latent feature space to sRGB space. In other words, color correction, denoise and white balance operations of a typical ISP framework are mimicked via GISP block." }, { "figure_ref": [], "heading": "Network Framework", "publication_ref": [ "b32", "b23", "b17", "b26", "b15", "b15" ], "table_ref": [], "text": "FLIGHT-Net consists of two main network block as shown in Figure 2a. The first block that converts input locally is Scene Depended Illumination Adjustment (SDIA) Network Block, and other block Global ISP Network Block (GISP) that transform its input globally. SDIA include IME and GE block whose outputs are multiplied by the input image on a pixel-wise basis. On the other hand, the output of the GISP block is used directly as the network output.\nAs previously stated, the low-light image is characterized by insufficient light exposure onto the sensor, resulting in a dark and low dynamic range projection. Multiplying low-light image with a single gain coefficient can be considered as a naive way for obtaining a more pleasing image. However, this is usually not enough due to the non-linear effects introduced in the ISP framework such as local histogram enhancement where each pixel are subject to different illumination and hence need to be individually corrected, so, simply adjusting the gain coefficient alone is not sufficient. Since it may cause over-illumination in some parts, while leaving other areas in darkness. To address this issue more effectively, an IME block has been devised to estimate the illumination adjustment coefficients required for each pixel. Through this estimation process, the input image can be better prepared for gain adjustment block, ultimately leading to a more efficient utilization of the gain coefficient.\nThe IME block, illustrated in Figure 2b, includes CNN blocks for feature extraction subblocks and local gain coefficient. After feature extraction, the illumination map can be estimated effectively by multiplying map features and the local gain coefficient estimated with the help of LN by the features. At the end of the IME block, the sigmoid activation function is preferred because the IME block is designed to transform the image where the gain coefficient works efficiently.\nThe GE block in Figure 2b is used to estimate the appropriate gain coefficient depending on its input which might be captured varying light conditions. It consists of CNN blocks to extract features and a linear layer to estimate the required gain coefficient using these extracted features. After gain adjustment, the input image is converted to the latent feature, which is needed for GISP to deliver a normallight sRGB image on its output.\nThe GISP block is the second main block of FLIGHT-Net. As stated previously, the main purpose of this block is color correction and denoising. It consists of extended channel attention (ECA) block and dual path fine tune (DF) blocks. The ECA block is the extended version of the CA block in [33]. This block mainly strengthens the information like structures and patterns in the necessary channels while surpassing the unwanted information like noise. In the DF block, the extracted features in the channel are di-Method SNR-ALLIE [26] RetiNexNet [24] MBLLEN [18] DRBN [27] vided into two in order to transform features with different receptive field, different activation functions and kernel sizes in convolution operations. In addition, foremental information are carried forward and information at every stage is protected. The details of the GISP block are shown in Figure 2c.\nLast but not least we tried to reflect the current network design language to our design by using some of the suggestions for CNNs for reaching Transformer like performance as suggested in [16]. It is reported that CNN architectures can obtain the success of the transformers with a proper selection of some parameters. For instance, as it is known in classical CNN design, convolution kernel size 3x3 is mostly preferred. In the FLIGHT-Net, 5x5 and 7x7 kernel sizes are also preferred in the convolution operation, especially at the beginning of the blocks used for feature extraction. In conclusion, it seems that the results in [16] is indeed be beneficial for LLIE." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [ "b23", "b26", "b26", "b0" ], "table_ref": [], "text": "Training Image Validation Image LOL [24] 485 15 LOL-v2-Syn [27] 900 100 LOL-v2-Real [27] 689 100\nRellisur [1] 3610 LL 722 NL " }, { "figure_ref": [], "heading": "LL 43 NL", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b23", "b26", "b26", "b0", "b11", "b15" ], "table_ref": [], "text": "LOL-v1 [24], LOL-v2-real [27], LOL-v2-synthetic [27] and Rellisur [1] datasets are chosen for our experiment. The Method PSNR SSIM #P(M) AT(ms) Zero-DCE++ [12] During training, all experiments are performed in Tesla V100 GPU 16GB. ADAMW optimizer is selected in the training process as in [16]. Batch size is set to 16 and initial leaning rate is 8 × 10 -4 . The network is trained with initial learning rate for 6000 epoch and it reduced linearly 1/10th of it between 6000-12000 epoch. For fine tuning phase, previous scheduling is applied by starting from 4 × 10 -4 .\nThe loss function consists of two different components. Smooth L1 loss is preferred to ensure that the estimated image pixels are close to the normal-light images. The second loss is Multi-scale Structure Similarity Index Measure (MS-SSIM), which enforces the network to predict more visually pleasing image. The total loss function is calculated as:\nL T OT AL = α 1 * L L1 + α 2 * L M S-SSIM(4)\nα 1 and α 2 are hyperparameters for balancing the loss functions." }, { "figure_ref": [ "fig_5" ], "heading": "Comparative LLIE Results", "publication_ref": [ "b22", "b26", "b19", "b31", "b31", "b19", "b19" ], "table_ref": [], "text": "State-of-the-art LLIE [3,22,23,26,27,34] along with some image restoration networks [20,32] are selected for comparison. We report PSNR and SSIM results for LOL-v1, LOL-v2-real, LOL-v2-synthetic and Rellisur datasets For SSIM metric, again, we achieve best performance in three of four datasets with the help of MS-SSIM loss.\nComparative visual results on two images from LOL-v1 dataset for qualitative analysis are presented in Figure 5. In the first image, the effect of our SDIA block can be observed in very dark regions. The output of second image proves the success of our color correction and denoising blocks. The color distribution for this image is the best by far compared to LLIE methods and better than Mirnet-v2 [32]. Also, our method does not produce any artifact like in the result of [26] in third and fifth tassels from the left side.\nIn order to show the effectiveness of the proposed solution in computation, we compare our method with the state-of-the-art techniques. We utilize a mobile workstation which has a GPU of NVIDIA GeForce RTX 3070M 8GB. The total parameters and inference times of our method Although FLIGHT-Net has the lowest parameters with 25K compared to its main competitors [3,20,26], it has the best PSNR value with 24.96 and second-best SSIM value with 0.85 on LOL-v1 dataset. The best result for SSIM belongs to MAXIM [20] has 14.14M parameters which much more than the number of parameters of our method. IAT [3] can be considered as the main competitor of FLIGHT-Net when the number of parameters is taken into account since it is the only method with good performance among com-petitors with less than 100K parameters, however, its PSNR value is 23.38 which is much lower than the PSNR value of FLIGHT-Net. Zero-DCE has 10k parameters, but its performance is very far from state-of-art.\nDepending on our observation, the reason of slightly lower PSNR values in LOL-v2-real dataset is the small misalignment between ground truth and training images. This fact is also the reason of not reporting the performance values of IAT [3] since it follows a different strategy during training to eliminate this misalignment for LOL-v2-real dataset." }, { "figure_ref": [ "fig_6" ], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4", "tab_4" ], "text": "In order to demonstrate the effectiveness of our subblocks and loss functions, five ablation studies are performed on LOL-v1 dataset. The quantitative results of the study are presented in Table 7 and visual results are given in Figure 6.\nIn the first stage of the ablation study, SDIA and GISP blocks are trained separately. As expected, the SDIA branch improved the illumination but also shows the necessity of GISP block. In output image obtained with SDIA alone, we have still spatial noise and color inconsistency and therefore low PSNR and SSIM values. The GISP module handles these issues as mentioned earlier. We also experience the performance of GISP module alone, however, its performance is far behind the overall proposed network. As expected, the SDIA branch increases visibility and transforms the image into the latent feature space required for GISP to fit the target image. We also test the effects of smooth L1 and MS-SSIM loss functions. PSNR and SSIM values for different combinations of loss functions are reported in Table 7. Training with only smooth L1 or MS-SSIM loss functions are not enough to get the optimum results. PSNR values in the case of using smooth L1 and MS-SSIM loss functions are 22.51 and 22.94 respectively while PSNR value is 24.96 in the case of using both loss functions. As a result, FLIGHT-Net achieves state-of-the-art performance when trained with the combination of smooth L1 and MS-SSIM loss." }, { "figure_ref": [], "heading": "Blocks", "publication_ref": [], "table_ref": [], "text": "Smooth " }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we have proposed a lightweight yet effective framework for low-light image enhancement problem. Our solution has achieved a state-of-the-art performance for the problem on several datasets with one of the lightest network in the literature. We believe that carefully investigating the forward problem formulation and image signal processing framework and designing the blocks accordingly helps reducing the number of parameters and boosting performance rather than hoping a pure huge bulky DNN to extract and solve for the hidden relations among the features. Lastly, as a future work, we have plans to extend our approach for the processing of low-light videos by considering temporally and spatially varying light conditions." } ]
Low-light image enhancement (LLIE) is an ill-posed inverse problem due to the lack of knowledge of the desired image which is obtained under ideal illumination conditions. Low-light conditions give rise to two main issues: a suppressed image histogram and inconsistent relative color distributions with low signal-to-noise ratio. In order to address these problems, we propose a novel approach named FLIGHT-Net using a sequence of neural architecture blocks. The first block regulates illumination conditions through pixel-wise scene dependent illumination adjustment. The output image is produced in the output of the second block, which includes channel attention and denoising sub-blocks. Our highly efficient neural network architecture delivers state-of-the-art performance with only 25K parameters. The method's code, pretrained models and resulting images will be publicly available.
FLIGHT Mode On: A Feather-Light Network for Low-Light Image Enhancement
[ { "figure_caption": "Figure 1 .1Figure 1. Performance Comparison on LOL-v1 Dataset. The diameters of the circles are proportional to the number of model parameters.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2. FLIGHT-NET", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 3 .3Figure 3. Visual results for LOL-v2-real, LOL-v2-synthetic and Rellisur datasets.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 .4Figure 4. Performance Comparison in Tables1, 2, 3 and 4 respectively and their corresponding comparative plots are given in Figure4.1 Our method achieve best PSNR results except the LOL-v2-real dataset. For SSIM metric, again, we achieve best performance in three of four datasets with the help of MS-SSIM loss.Comparative visual results on two images from LOL-v1 dataset for qualitative analysis are presented in Figure5. In the first image, the effect of our SDIA block can be observed in very dark regions. The output of second image proves the success of our color correction and denoising blocks. The color distribution for this image is the best by far compared to LLIE methods and better than Mirnet-v2[32]. Also, our method does not produce any artifact like in the result of[26] in third and fifth tassels from the left side.In order to show the effectiveness of the proposed solution in computation, we compare our method with the state-of-the-art techniques. We utilize a mobile workstation which has a GPU of NVIDIA GeForce RTX 3070M 8GB. The total parameters and inference times of our method", "figure_data": "", "figure_id": "fig_4", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 .5Figure 5. Visual comparison of our method on LOL-v1 dataset", "figure_data": "", "figure_id": "fig_5", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 .6Figure 6. Visual results of ablation study.", "figure_data": "", "figure_id": "fig_6", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Comparative performance results for LOL-v1 datasetMethod SNR-ALLIE [26] Retinex [15] IPT [2] Sparse[29] Band[28] MIR-Net[31] LPNet[13] Ours", "figure_data": "KIND [34] MAXIM [20] IAT [3] Ours", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparative performance results for Rellisur dataset", "figure_data": "", "figure_id": "tab_1", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "", "figure_data": ". Datasets used in experiments *low-light (LL), normal-light(NL)", "figure_id": "tab_2", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Comparative comparison of the methods according to the number of parameters (#P) and average computation time (AT) on LOL-v1 dataset without using GT images. LOL-v1 dataset is the first real dataset for low-light image enhancement and is used to test many state-of-art network [3, 22, 23, 26, 27, 34]. The LOL-v2 dataset is the enhanced version of the LOL-v1 dataset. It contains two different training and validation image pairs, a real captured image and a synthetically acquired image. Finally, Rellisur dataset is the first multi-purpose dataset for the problems of low-light image enhancement and super-resolution. Details of the training and validation image pairs of the datasets used in the experiment are summarized in the Table5.", "figure_data": "14.83 0.531 0.011.02LLFlow [22]21.13 0.853 38.86339.69Mirnet-v2 [32]24.74 0.851 5.86309.68SNR-Aware [26]24.61 0.842 39.1234.67IAT [3]23.38 0.809 0.0925.50Ours24.96 0.850.02511.47", "figure_id": "tab_3", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Results of the ablation study on LOL-v1 dataset", "figure_data": "L1 MS-SSIM PSNR SSIMSDIA20.78 0.73GISP23.70 0.83SDIA+GISP22.51 0.80SDIA+GISP22.94 0.84SDIA+GISP24.96 0.85", "figure_id": "tab_4", "figure_label": "7", "figure_type": "table" } ]
Mustafa Özcan; Hamza Ergezer; Mustafa Ayazaoglu
[ { "authors": "Andreas Aakerberg; Kamal Nasrollahi; Thomas B Moeslund", "journal": "", "ref_id": "b0", "title": "Rellisur: A real low-light image super-resolution dataset", "year": "2021" }, { "authors": "Hanting Chen; Yunhe Wang; Tianyu Guo; Chang Xu; Yiping Deng; Zhenhua Liu; Siwei Ma; Chunjing Xu; Chao Xu; Wen Gao", "journal": "", "ref_id": "b1", "title": "Pre-trained image processing transformer", "year": "2021" }, { "authors": "Ziteng Cui; Kunchang Li; Lin Gu; Shenghan Su; Peng Gao; Zhengkai Jiang; Yu Qiao; Tatsuya Harada", "journal": "BMVC", "ref_id": "b2", "title": "You only need 90k parameters to adapt light: A light weight transformer for image enhancement and exposure correction", "year": "2022" }, { "authors": "C Rafael; Gonzalez", "journal": "Pearson Education", "ref_id": "b3", "title": "Digital image processing", "year": "2009" }, { "authors": "Chunle Guo; Chongyi Li; Jichang Guo; Chen Change Loy; Junhui Hou; Sam Kwong; Runmin Cong", "journal": "", "ref_id": "b4", "title": "Zero-reference deep curve estimation for low-light image enhancement", "year": "2020" }, { "authors": "Xiaojie Guo; Yu Li; Haibin Ling", "journal": "IEEE Transactions on image processing", "ref_id": "b5", "title": "Lime: Low-light image enhancement via illumination map estimation", "year": "2016" }, { "authors": "Jie Hu; Li Shen; Gang Sun", "journal": "", "ref_id": "b6", "title": "Squeeze-and-excitation networks", "year": "2018" }, { "authors": "Zhixiong Huang; Jinjiang Li; Zhen Hua; Linwei Fan", "journal": "IEEE Transactions on Instrumentation and Measurement", "ref_id": "b7", "title": "Underwater image enhancement via adaptive group attention-based multiscale cascade transformer", "year": "2022" }, { "authors": "Andrey Ignatov; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b8", "title": "Replacing mobile camera isp with a single deep learning model", "year": "2020" }, { "authors": "Zhiying Jiang; Zhuoxiao Li; Shuzhou Yang; Xin Fan; Risheng Liu", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b9", "title": "Target oriented perceptual adversarial fusion network for underwater image enhancement", "year": "2022" }, { "authors": "H Edwin; Land", "journal": "Proceedings of the national academy of sciences", "ref_id": "b10", "title": "An alternative technique for the computation of the designator in the retinex theory of color vision", "year": "1986" }, { "authors": "Chongyi Li; Chunle Guo Guo; Chen Change Loy", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b11", "title": "Learning to enhance low-light image via zero-reference deep curve estimation", "year": "2021" }, { "authors": "Jiaqian Li; Juncheng Li; Faming Fang; Fang Li; Guixu Zhang", "journal": "IEEE Transactions on Multimedia", "ref_id": "b12", "title": "Luminance-aware pyramid network for low-light image enhancement", "year": "2020" }, { "authors": "Seokjae Lim; Wonjun Kim", "journal": "IEEE Transactions on Multimedia", "ref_id": "b13", "title": "Dslr: Deep stacked laplacian restorer for low-light image enhancement", "year": "2020" }, { "authors": "Risheng Liu; Long Ma; Jiaao Zhang; Xin Fan; Zhongxuan Luo", "journal": "", "ref_id": "b14", "title": "Retinex-inspired unrolling with cooperative prior architecture search for low-light image enhancement", "year": "2021" }, { "authors": "Zhuang Liu; Hanzi Mao; Chao-Yuan Wu; Christoph Feichtenhofer; Trevor Darrell; Saining Xie", "journal": "", "ref_id": "b15", "title": "A convnet for the 2020s", "year": "2022" }, { "authors": "Kin Gwn; Lore ; Adedotun Akintayo; Soumik Sarkar", "journal": "Pattern Recognition", "ref_id": "b16", "title": "Llnet: A deep autoencoder approach to natural low-light image enhancement", "year": "2017" }, { "authors": "Feifan Lv; Feng Lu; Jianhua Wu; Chongsoon Lim", "journal": "BMVC", "ref_id": "b17", "title": "Mbllen: Low-light image/video enhancement using cnns", "year": "2018" }, { "authors": "Ardhendu Shekhar Tripathi; Martin Danelljan; Samarth Shukla; Radu Timofte; Luc Van Gool", "journal": "Springer", "ref_id": "b18", "title": "Transform your smartphone into a dslr camera: Learning the isp in the wild", "year": "2022" }, { "authors": "Zhengzhong Tu; Hossein Talebi; Han Zhang; Feng Yang; Peyman Milanfar; Alan Bovik; Yinxiao Li", "journal": "", "ref_id": "b19", "title": "Maxim: Multi-axis mlp for image processing", "year": "2022" }, { "authors": "Wenjing Wang; Chen Wei; Wenhan Yang; Jiaying Liu", "journal": "IEEE", "ref_id": "b20", "title": "Gladnet: Low-light enhancement network with global awareness", "year": "2018" }, { "authors": "Yufei Wang; Renjie Wan; Wenhan Yang; Haoliang Li; Lap-Pui Chau; Alex Kot", "journal": "", "ref_id": "b21", "title": "Low-light image enhancement with normalizing flow", "year": "2022" }, { "authors": "Chen Wei; Wenjing Wang; Wenhan Yang; Jiaying Liu", "journal": "BMVC", "ref_id": "b22", "title": "Deep retinex decomposition for low-light enhancement", "year": "2018" }, { "authors": "Chen Wei; Wenjing Wang; Wenhan Yang; Jiaying Liu", "journal": "", "ref_id": "b23", "title": "Deep retinex decomposition for low-light enhancement", "year": "2018" }, { "authors": "Sanghyun Woo; Jongchan Park; Joon-Young Lee; In So Kweon", "journal": "", "ref_id": "b24", "title": "Cbam: Convolutional block attention module", "year": "2018" }, { "authors": "Xiaogang Xu; Ruixing Wang; Chi-Wing Fu; Jiaya Jia", "journal": "", "ref_id": "b25", "title": "Snr-aware low-light image enhancement", "year": "2022" }, { "authors": "Wenhan Yang; Shiqi Wang; Yuming Fang; Yue Wang; Jiaying Liu", "journal": "", "ref_id": "b26", "title": "From fidelity to perceptual quality: A semisupervised approach for low-light image enhancement", "year": "2020" }, { "authors": "Wenhan Yang; Shiqi Wang; Yuming Fang; Yue Wang; Jiaying Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b27", "title": "Band representation-based semi-supervised lowlight image enhancement: Bridging the gap between signal fidelity and perceptual quality", "year": "2021" }, { "authors": "Wenhan Yang; Wenjing Wang; Haofeng Huang; Shiqi Wang; Jiaying Liu", "journal": "IEEE Transactions on Image Processing", "ref_id": "b28", "title": "Sparse gradient regularized deep retinex network for robust low-light image enhancement", "year": "2021" }, { "authors": "Aditya Syed Waqas Zamir; Salman Arora; Munawar Khan; Fahad Hayat; Ming-Hsuan Shahbaz Khan; Ling Yang; Shao", "journal": "", "ref_id": "b29", "title": "Cycleisp: Real image restoration via improved data synthesis", "year": "2020" }, { "authors": "Aditya Syed Waqas Zamir; Salman Arora; Munawar Khan; Fahad Hayat; Ming-Hsuan Shahbaz Khan; Ling Yang; Shao", "journal": "Springer", "ref_id": "b30", "title": "Learning enriched features for real image restoration and enhancement", "year": "2020" }, { "authors": "Aditya Syed Waqas Zamir; Salman Arora; Munawar Khan; Fahad Hayat; Ming-Hsuan Shahbaz Khan; Ling Yang; Shao", "journal": "IEEE Transactions on Pattern Analysis and Machine Intelligence", "ref_id": "b31", "title": "Learning enriched features for fast image restoration and enhancement", "year": "2022" }, { "authors": "Yulun Zhang; Kunpeng Li; Kai Li; Lichen Wang; Bineng Zhong; Yun Fu", "journal": "", "ref_id": "b32", "title": "Image super-resolution using very deep residual channel attention networks", "year": "2018" }, { "authors": "Yonghua Zhang; Jiawan Zhang; Xiaojie Guo", "journal": "", "ref_id": "b33", "title": "Kindling the darkness: A practical low-light image enhancer", "year": "2019" }, { "authors": "Zunjin Zhao; Bangshu Xiong; Lei Wang; Qiaofeng Ou; Lei Yu; Fa Kuang", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b34", "title": "Retinexdip: A unified deep framework for low-light image enhancement", "year": "2021" }, { "authors": "Karel Zuiderveld", "journal": "Graphics gems", "ref_id": "b35", "title": "Contrast limited adaptive histogram equalization", "year": "1994" } ]
[ { "formula_coordinates": [ 2, 409.17, 538.9, 135.94, 8.96 ], "formula_id": "formula_0", "formula_text": "I = R.L(1)" }, { "formula_coordinates": [ 2, 399.62, 700.61, 145.5, 9.65 ], "formula_id": "formula_1", "formula_text": "I = f I S P (S)(2)" }, { "formula_coordinates": [ 4, 105.22, 329.09, 181.14, 9.65 ], "formula_id": "formula_2", "formula_text": "N LI = f GI S P (f S D I A (LLI))(3)" }, { "formula_coordinates": [ 6, 83.57, 591.79, 202.79, 9.65 ], "formula_id": "formula_3", "formula_text": "L T OT AL = α 1 * L L1 + α 2 * L M S-SSIM(4)" } ]
10.18653/v1/N19-1423
2024-02-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b17", "b32", "b31", "b46", "b30", "b0", "b28", "b8", "b34", "b36", "b35", "b29", "b38", "b13", "b27", "b19", "b39", "b44", "b11", "b47", "b23", "b40", "b45", "b53", "b16", "b21" ], "table_ref": [], "text": "Frame Parsing is a powerful tool for real-world applications in that it enables deep grasp of the meaning of a textual statement and automatic extraction of complex semantic descriptions of situations, including events, and their relations with the entities involved. To this effect, Frame Parsing can effectively be used for Event Extraction, as the two tasks share the common goal of recognizing and classifying argument structures of a target predicate. However, training supervised models for Frame Parsing is a data-intensive task, which is why comprehensive linguistic resources are available in few languages, thereby limiting further research and application in downstream tasks. Furthermore, most existing corpora are created by targeting the annotation of one single frame (or event) class per sentence. While lexicographically motivated, this procedure makes the training of automatic models and their application to real-world scenarios more complicated and burdensome.\nThe contribution described in this paper is twofold: first, we present EventNet-ITA (EvN-ITA) a large-scale, multi-domain corpus annotated fulltext (see Section 4) with over 200 semantic frames of events (Fillmore and Baker, 2001) and 3,600 specific frame elements in Italian, also discussing the motivation behind its creation, the annotation guidelines and the covered domains; secondly, we introduce an efficient multi-label sequential approach for eventive Frame Parsing and evaluate it on the dataset. This work aims at providing the community with a solid, manually-curated corpus and a ready-to-use model for frame-based event extraction for Italian, thus filling an existing data gap. In fact, recent works in application fields like computational social science (Minnema et al., 2021(Minnema et al., , 2022) ) or historical NLP (Sprugnoli and Tonelli, 2017;Menini et al., 2023) showed how semantic frames can be used as a powerful textual analysis tool to investigate a wide range of societal and historical phenomena. The annotated dataset, along with its full documentation, is released to the community under open license (see Section 7). The envisioned application purpose of EvN-ITA is that of enabling accurate mining of events from large collections of documents, with focus on individual, social and, in a broad sense, historical phenomena. The paper is structured as follows: Section 2 discusses existing work in Frame Parsing and Event Extraction, with a subsection focused on Italian; Section 3 introduces our annotated corpus and describes the motivations and design decisions that guided its creation, while Section 4 focuses on the annotation procedure. In Section 5 we discuss the methodology for Frame Parsing, a transformerbased multi-label sequence labeling approach. In Section 6 we evaluate our methodology and discuss the results. Section 7 provides the reader with pointers for the dataset and model release, while Section 8 concludes the paper and highlights future directions of our work.\nThe development of systems able to recognize and classify event mentions and their argument structure in text has been a long-term effort in computational linguistics and a variety of methods has been employed for the task of Event Extraction (Ahn, 2006;Liao and Grishman, 2010;Chen et al., 2015;Nguyen et al., 2016;Orr et al., 2018;Nguyen and Nguyen, 2019;Lu et al., 2021;Paolini et al., 2021). Event Extraction is the task of recognizing and classifying event mentions and entities involved in the event from a textual statement and it has seen applications in a wide range of fields, like social media analysis (de Bruijn et al., 2019), biomedical NLP (Li et al., 2019;Huang et al., 2020;Ramponi et al., 2020), history and humanities (Segers et al., 2011;Cybulska and Vossen, 2011;Sprugnoli and Tonelli, 2019;Lai et al., 2021;Rovera et al., 2021), as well as literary text mining (Sims et al., 2019). Although benchmark datasets exist, like Automatic Content Extraction (ACE) (Walker et al., 2006) for Event Extraction or TAC-KBP (Ellis et al., 2015) for multiple event-related tasks, they exhibit limitations in terms of size and domain coverage. Also, while they are well suited for evaluation campaigns, they have not been designed for use in real-world application tasks. Moreover, most of these corpora only exist for English, with few extensions for other languages (Ji et al., 2016)." }, { "figure_ref": [], "heading": "Frame Parsing", "publication_ref": [ "b12", "b48", "b49", "b1", "b17", "b41", "b18", "b42" ], "table_ref": [], "text": "Frame Parsing (Das et al., 2014;Swayamdipta et al., 2017Swayamdipta et al., , 2018) ) consists in recognizing, in a textual expression, a word or set of words (the lexical unit) as the predicate evoking a given frame and isolating the text spans that evoke the semantic arguments (frame elements) related to that frame. Frames are conceptual structures describing prototypical situations and their realizations in text. The reference linguistic resource for Frame Parsing in English is FrameNet (FN) (Baker et al., 1998;Fillmore and Baker, 2001;Ruppenhofer et al., 2006). In this work, we use Frame Parsing for extracting event frames. While event extraction initiatives have been based on a variety of models, approaches and schemes, which are not always interoperable or comparable, the advantage of using Frame Parsing for Event Extraction is the availability of an established resource, based on a unified, grounded theoretical framework (Fillmore et al., 1976). EvN-ITA differs from FN in that the latter is based on lexicographic annotation (one target lexical unit per sentence), providing only a small subset of full-text annotated data (Ruppenhofer et al., 2016), whereas EvN-ITA has been annotated by design in a fulltext fashion (see Section 4). Also, it is important to point out that EvN-ITA is not meant to be a comprehensive Italian version of the popular English FN. Instead, in this work we adopt part of the FN schema but focus exclusively on event-denoting frames, aiming at providing a large, self-contained and robust tool for frame-based Event Extraction in Italian." }, { "figure_ref": [], "heading": "Italian Event Extraction and Frame Semantics", "publication_ref": [ "b3", "b7", "b6", "b43", "b46", "b47", "b22", "b2", "b4", "b20", "b50", "b51", "b24" ], "table_ref": [], "text": "As for Italian, the Frame Labeling over Italian Texts Task (FLAIT) was organized at EVALITA in 2011 (Basili et al., 2012). Moreover, Event Extraction in Italian was the object of the EVENTI evaluation campaign at EVALITA 2014 (Caselli et al., 2014), which focused on temporal processing and was based on the Ita-TimeBank schema (Caselli et al., 2011). Later on, Caselli (2018) experimented with the same dataset using a neural architecture and evaluated the impact of different word embeddings for event extraction. While Italian Event Extraction approaches have traditionally been based on the TimeML (Saurí et al., 2006) classification scheme, which provides 7 broad, temporal-oriented classes, more recently the necessity has emerged of a more fine-grained annotation schema for event classification, as discussed by Sprugnoli and Tonelli (2017). Supported by a survey involving historians, the authors investigated the application of event extraction on historical texts. Sprugnoli and Tonelli (2019) describe a specific schema, adapting semantic categories provided by the Historical Thesaurus of the Oxford English Dictionary (HTOED) (Kay et al., 2009), resulting in 22 topicdriven event classes, thereby moving towards developing a richer and at the same time finer-grained inventory of classes for representing events in text.\nAs for frame semantics, on the other hand, Basili et al. (2017) and Brambilla et al. (2020) described a work in progress for the creation of IFrameNet, a large scale Italian version of FN, by using semiautomatic methods and manual validation for frame induction, with 5,208 sentences annotated with at least one lexical unit. However, the dataset has not been released so far. In fact, despite the considerable amount of work in lexical (Lenci et al., 2012a;Jezek et al., 2014) and frame semantics (Tonelli and Pianta, 2008;Tonelli et al., 2009;Lenci et al., 2010Lenci et al., , 2012b)), Italian still lacks an extensive, publicly available linguistic resource for Frame Parsing.\n3 Dataset" }, { "figure_ref": [], "heading": "EventNet-ITA", "publication_ref": [ "b1", "b41", "b37" ], "table_ref": [], "text": "In order to ensure multilingual compatibility, we employ a selection of event frames from FN (Baker et al., 1998;Ruppenhofer et al., 2006) The corpus -as well as the annotation schema -has been created with the purpose of covering historical narratives in a broad sense, but without committing to a specific textual genre. For this reason, as well as for creating a releasable corpus, sentences for the annotation set of EvN-ITA have been sampled from a subset of the Italian Wikipedia edition. In order to filter out irrelevant documents, i.e. documents not likely to contain events, we collected Wikipedia pages falling under the categories Events by country 1 and History by country. 2 This choice ensures a wide variety of featured events, both temporally (from ancient history to the present days) and geographically. Through standard pre-processing (tokenization, lemmatization and dependency parsing have been performed using TINT 3 (Palmero Aprosio and Moretti, 2018)), a pool of sentences, arranged by lemma, was generated, from which to pick for the annotation set. Annotated sentences are drawn from 16,309 different Wikipedia articles." }, { "figure_ref": [], "heading": "Domain coverage", "publication_ref": [ "b15", "b33", "b22", "b1" ], "table_ref": [], "text": "In the design phase of the resource, a manual analysis was made of existing corpora in multiple languages, in order to circumscribe the domains and classes to be modelled. Resources as Automatic Content Extraction (Doddington et al., 2004;Consortium et al., 2005), Event Nugget (Mitamura et al., 2015), the Historical Thesaurus of the Oxford English Dictionary (HTOED) (Kay et al., 2009) and FN (Baker et al., 1998) were reviewed and compared. FN is currently the most complete, rich and established existing resource and has been taken as reference for the development of EvN-ITA. This choice is motivated by the opportunities it offers in terms of reuse, coverage and possible multilingual extensions. EvN-ITA's annotation schema covers 205 different event frames, each provided with a set of specific frame elements (unique modeled frame elements amount to 3,571), and has been extensively documented by providing, for each frame, its definition, the corresponding set of lexical units and frame elements associated to it. The distribution of classes, arranged by topic, is depicted in Figure 1. Beside conflict-related events, that hold a prominent place in historical accounts and journalistic narratives, we have taken care to extend the collection of event types to other aspects of the life of societies and individuals, such as legislative and legal processes, work, establishment of and membership in social organizations, life events, as well as events related to the arts, economic processes and cognitive processes such as decisions, skills, judgements, amongst others. In the design of the resource, attention has been paid also to maintain the balance between internal coherence and usabil- " }, { "figure_ref": [ "fig_0" ], "heading": "Annotation", "publication_ref": [], "table_ref": [], "text": "The textual corpus, generated as discussed in Section 3.1, has been manually annotated by labeling event triggers with their frame class and predicate arguments with the corresponding frame element.\nAnnotation was performed at the sentence level and was conducted frame-driven, by first selecting significant event frames for the domain and subsequently identifying the most relevant lexical units for each frame. Given a sentence, any lexical units in our schema and all related frame elements are annotated, producing as many layers as there are event mentions (full-text annotation). Figure 2 shows an example of full-text annotation from EvN-ITA." }, { "figure_ref": [ "fig_0" ], "heading": "Format", "publication_ref": [], "table_ref": [], "text": "The IOB2 annotation format is being used, in which the B-tag identifies the first token of a span, the I-tag identifies all tokens inside the span and the Otag all out-of-mention tokens. Discontinuous mentions are allowed, both for frames and for frame elements. The only constraint for event mentions is that they cannot overlap: each token in a sentence can denote at most one event type. This does not hold for frame elements: in fact, given a sentence with multiple frame occurrences, frame elements from different annotation sequences (i.e. belonging to different frames) can always overlap, hence a token may be labeled with more than one frame element tag4 (See Figure 2)." }, { "figure_ref": [], "heading": "Guidelines", "publication_ref": [], "table_ref": [], "text": "EvN-ITA is thoroughly documented, both in the form of general annotation guidelines (what to annotate) and at the annotation schema level (frame description, lexical units, frame elements).\nAs for lexical units, we exclusively focus on nouns, verbs and multi-word expressions. Although also other parts-of-speech (adverbs, for example) can be loosely event-evoking, this focus is motivated by practical reasons: nouns and verbs, along with multi-word expressions, are the most frequent triggers of event mentions in text and are characterized by a richer syntactic structure, which in turn is crucial for harvesting information related to frame elements. Nouns are annotated as event triggers only if they reference directly the occurrence of an event, but not when the reference is indirect, for example: As for verbs, we annotate the main verb but not the auxiliary.\n[...]\nAppena Frame mentions are annotated regardless of their factuality value, which means that also negated or hypotetical frame mentions must be annotated, as well as those introduced by modals. Conversely, in EvN-ITA we do not annotate as frame mention lexical units that are used with metaphorical meaning or in the form of rhetoric expression.5 EvN-ITA's annotation schema consists of 205 frames and 837 lexical units, out of which 358 have at least 100 annotations each, 191 have a number of annotations comprised between 50 and 99, and 288 have a number of annotations comprised between 20 an 40. The annotation process has been oriented to keep the balance between frame completion and polysemy preservation. For this reason, we also annotated less frequent lexical units encountered in the corpus, resulting in a long queue of lexical units with less than 20 occurrences each. This strategy was adopted in order both to increase the flexibility of the resource and to set the stage for its future extension. Also, with the aim of improving robustness, for each lexical unit we annotated a number of negative examples, i.e. sentences in which the given lexical unit occurs without triggering any of the corresponding frames. 6Within the scope of this work, we consider as events any accomplishment, achievement or process, without distinction. The schema additionally models a number of states (e.g. BEING IN PLACE, CAPTIVITY) and relations (LEADERSHIP, DURA-TION RELATION, POSSESSION). As for semantic roles, we referred to FN's frame elements, with minor adaptations or additions, which in most cases tend towards increased specificity." }, { "figure_ref": [], "heading": "Inter-Annotator Agreement", "publication_ref": [ "b9" ], "table_ref": [], "text": "EvN-ITA was annotated by one single native speaker annotator with a solid background in Frame Semantics. For this reason, particular attention has been devoted to assessing the robustness, consistency and intelligibility of the resource by means of inter-annotator agreement analysis. We therefore validated our schema and guidelines by re-annotating 2,251 sentences, spanning over 61 classes, with a second (native speaker) annotator.\nWhen selecting classes to be included in this validation set, we paid attention to include pairs or triplets of frames with high semantic similarity,7 in order to stress the test. We used two different metrics for assessing agreement: Jaccard Index (computed as the ratio between the number of items annotated with the same label and the sum of all annotated items) and Cohen's Kappa (Cohen, 1960) different frames and frame elements in EvN-ITA, we observe that agreement values are high and indicate that the guidelines are sufficiently detailed in their description of the linguistic phenomena to be annotated. As expected, the annotation of frame elements has proven more challenging. A further manual analysis conducted on a sample of disagreement had three main sources: Ontological (67,5%) a textual span is recognized as frame or frame element by one annotator but not by the other;\nSpan Length (20,4%) annotators agree on the label but not on the exact span to annotate;\nClassification (12%) annotators agree on the span to annotate but assign two different labels." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [ "b12", "b0" ], "table_ref": [], "text": "A traditional approach for Frame Parsing, as well as for Event Extraction, is to break down the problem into sub-tasks (Das et al., 2014;Ahn, 2006), usually separating the steps of trigger identification, frame classification and argument extraction. However, a major downside of this approach, besides being more complex, is that it implies error propagation from higher-level sub-tasks downwards. Instead, we propose to learn all the tasks in one single step, allowing the model to simultaneously exploit tag relations on the time (sequence) axis and on the token axis. Thus, in this work Frame Parsing is approached end-to-end and is treated as a multilabel sequence tagging problem. The strength of this design option lies in its simplicity as it requires minimal pre-processing and does not imply the use of additional knowledge, as well as in its efficiency, as it minimizes computational requirements (see Section 5.2)." }, { "figure_ref": [], "heading": "Preprocessing", "publication_ref": [], "table_ref": [], "text": "The adoption of full-text annotation implies, at preprocessing time, the definition of each frame element as frame-specific, in order to avoid overlaps between frame elements with the same name but referring to different frames. In fact, many frames belonging to the same semantic area share a set of frame elements with the same name. For example, both motion frames FLEEING and MO-TION_DOWNWARDS have a frame element called MOVER. In EvN-ITA, frame elements referring to the same semantic role (thus carrying the same name) but belonging to different frames are assigned different, frame-aware labels. Therefore, frame elements MOVER-FLEEING and MOVER-MOTION_DOWNWARDS will be assigned two different labels. This data encoding strategy, in return, allows us to minimize the need for postprocessing (as each predicted frame element is implicitly linked to its frame) and enables the model to learn relationships between multiple frame elements occurring on the same token/span." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b14" ], "table_ref": [], "text": "Our frame parser aims at jointly extracting all frame mentions and all related frame elements in a target sentence. In other words, given an input sentence, each token must be labeled according to the event frame and/or frame element(s) it denotes. The underlying idea is to leverage mutual co-occurrence between frame (and frame element) classes, as certain frames typically tend to appear more often with, or have a semantic preference for, other frames. 8 This way, the model is led to not only learn correspondences between a word and a given frame or frame element, but also local patterns of co-occurrence between different frame elements.\nIn order to provide a reliable performance assessment, we opted for an 80/10/10 stratified train/dev/test split, thus ensuring the same proportion of (frame) labels in each split. Moreover, we generate 4 folds from the dataset, the first used for hyperparmeter search and the remaining three for evaluation. To this purpose we fine-tune a BERT model9 (Devlin et al., 2019) for Italian and show that the approach allows us to scale with thousands of (unique) labels without a remarkable computational and memory overhead. In this experimental setup we use MaChAmp, v 0.4 beta 2 (van der Goot et al., 2021), a toolkit supporting a variety of NLP tasks, including multi-label sequence labeling. We performed hyperparameter search by exploring the space with batch sizes between 8 and 256 and learning rates between 7.5e-4 and 7.5e-3. All other hyperparameters are left unchanged with respect to MaChAmp's default configuration for the multisequential task. 10 Overall, 64 configurations have been explored. The best hyperparameter values we found, according to the performance on the development set, are batch size of 64 and learning rate of 1.5e-3 and the resulting model has been used for the evaluation (Section 6). The training requires approximately 3.5 hours on an NVIDIA RTX A5000 GPU with 24 GB memory and 8192 CUDA cores. Figures in bold represent the reference performance values for EventNet-ITA.\nIn terms of memory, the maximum requirement is 5 GB RAM." }, { "figure_ref": [], "heading": "Evaluation", "publication_ref": [], "table_ref": [], "text": "In this section, we discuss the quantitative (Section 6.1) and qualitative (Section 6.2) performance of the multi-label sequence labeling approach on the EvN-ITA dataset." }, { "figure_ref": [], "heading": "Quantitative results", "publication_ref": [], "table_ref": [], "text": "Evaluation results are reported in Table 3 and Table 4 in an aggregated fashion in order to provide the reader with different views on performance.\nReported values have been obtained by separately computing the metrics class-wise on each fold, and then averaging the obtained scores. For each of the two groups of labels (frames and frame elements), beside the overall average performance, we provide the average of the n-best and n-worst performing classes and the average of the n most and least frequent classes in the dataset, on the three test sets, with n = 40 for frames and n = 200 for frame ele-ments. 11 We also compute the macro average and the weighted macro average of all classes, the latter providing a more realistic view in a context of highly unbalanced label distribution. With a strict F1-score of 0.9 for frames and 0.724 for frame elements, our system shows very promising results for the task. Overall, the results show that, despite being fundamentally token-based, our multi-label sequence tagging approach proves effective also in the identification of (multiple) textual spans in a sentence, scaling well on a dataset involving a very high number of classes. This is further confirmed by the small delta between relaxed and strict performance values." }, { "figure_ref": [], "heading": "Error Analysis", "publication_ref": [], "table_ref": [ "tab_4", "tab_5" ], "text": "To assess the potential of the proposed approach and the possible inconsistencies, we perform an error analysis on the test sets of the three folds, at token level. Since in a multi-label setting it is not always possible to establish a univocal correspondence between labels in the gold and predicted sets (given the possibility of multiple assignments on both sides), we proceed as follows: for each token, we filter out from both sets of labels (gold and predicted) the correctly matched labels. Based on this output, we focus on a subset of tokens, those labeled, in both sets, with exactly one label and we use it as an approximation for identifying most common errors. This allows us to focus on specific one-to-one label mismatchings, both for event frames (Table 5) and for frame elements (Table 6). Considering only event frames, analysis reveals that only 4.8% of the identified mismatches involves two event labels, while 95.2% involves a mismatch between an event label and the O-tag (out-of-mention). This ratio becomes more balanced with regard to frame elements (39% and 61%, respectively). Also, the impact of errors referred to the IOB schema remains very low, amounting to 1.16% for event frames and 4.8% for frame elements." }, { "figure_ref": [], "heading": "Gold", "publication_ref": [], "table_ref": [], "text": "Qualitatively, the analysis shows a clear pattern, namely that errors occur in most cases between frames with a high semantic similarity, like BLAM-ING/ACCUSE or HOSTILE_ENCOUNTER/WAR, which in some cases may be difficult to classify even for the human annotator. As for frame elements, errors occur mostly a) between the same frame element of two different event frames tically close frame elements within the same frame (EXPLANATION-DEATH vs. CAUSE-DEATH). These quite subtle error types further reveal how the multi-label sequence labeling approach is capable of learning cross-frame correspondences of frame elements, an aspect that we plan to further investigate in future work." }, { "figure_ref": [], "heading": "Dataset and Model Release", "publication_ref": [], "table_ref": [], "text": "The EvN-ITA annotated dataset, along with its documentation, is being released upon request12 , under CC-BY-SA 4.0 license13 . The model of the frame parser, described in Section 5.2, is available on Huggingface14 ." }, { "figure_ref": [], "heading": "Conclusion and Future Works", "publication_ref": [], "table_ref": [], "text": "In this paper we presented EvN-ITA, a large corpus annotated with event frames in Italian, accompained by an efficient multi-label sequential model for Frame Parsing, trained and evaluated on the corpus. Future work includes extrinsic tests of the resource on new data from different textual genres and the reinforcement of the schema, in view of providing a wider domain coverage and increased adaptability of the model. Moreover, is our plan to employ EvN-ITA as a benchmark to investigate the performance of different methodologies and learning models for Frame Parsing, as well as to explore strategies for multilingual applications." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "The first limitation of this work lies in the unique source of the data, Wikipedia, that, if on the one end guarantees an ample variety of topics and types of events, on the other hand, from the linguistic point of view it sets a constraint on a homogeneous linguistic style. As mentioned above, this will be the focus of our future effort. Secondly, in case of multiple mentions of the same event frame in a given sentence (this case concerns 6% of the sentences in EvN-ITA), the currently adopted methodology does not support automatic linking of frame elements to the exact frame mention they refer to in the sentence. Future approaches will be geared to take this issue into account." }, { "figure_ref": [], "heading": "A Examples of annotation", "publication_ref": [], "table_ref": [], "text": "While the full documentation of EvN-ITA, including annotation guidelines, frame-based descriptions and examples is being released along with the resource, in this section we provide more details about the annotation process.\nAs mentioned in Section 4, in EvN-ITA target partsof-speech are nouns, verbs and multiword expressions. In real-world data, however, beside events expressed in positive, factual form, there are often cases that raise exceptions. In EvN-ITA, event mentions are annotated regardless of their factuality value, which means that also negated, abstract, hypotetical event mentions must be annotated, as well as those introduced by modals verbs. Conversely, we do not annotate as event mention those lexical units that are used with metaphorical meaning or in the form of rethoric expression. In the following, we provide some examples15 :\n1. negated events Il visitatore / studioso poteva [intraprendere Ø] così [un viaggio Ø] dal microcosmo (la chimica), attraverso gli elementi primi della natura, al macrocosmo (l'astronomia) nel torrino che concludeva il percorso. Sostiene inoltre che, nonostante i problemi della filosofia della scienza e della ragione in generale, le \"questioni morali\" avranno [risposte Ø] oggettivamente giuste e sbagliate suffragate da fatti empirici su ciò che induce la gente a star bene e prosperare.\nFinally, as mentioned in Section 4, in order to increase robustness, EvN-ITA contains many negative examples. Given a lexical unit, a negative example is an occurrence of the lexical unit which denotes a meaning not covered by the current schema " }, { "figure_ref": [], "heading": "B Association between event frames", "publication_ref": [], "table_ref": [], "text": "In this section we present numerical evidence of association between events, mentioned in Section 5.2. As stated above, patterns of association between frames can be identified by computing their co-occurrence. We choose 5 event frames and list the first 5 most related event frames and the 5 most unrelated frames, using Pointwise Mutual Information (PMI).\nTarget " }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "I owe my gratitude to Serena Cristoforetti, for her dedicated contribution to the inter-annotator assessment process, carried out as part of her curricular internship at the Digital Humanities (DH) group. I am equally thankful to Alan Ramponi and Sara Tonelli (DH) and to Enrica Troiano (Vrije Universiteit Amsterdam) for proofreading the manuscript and for their precious and constructive feedback. This work and the effort it represents are dedicated to the memory of Anna Goy." } ]
This paper introduces EventNet-ITA, a large, multi-domain corpus annotated full-text with event frames for Italian. Moreover, we present and thoroughly evaluate an efficient multi-label sequence labeling approach for Frame Parsing. Covering a wide range of individual, social and historical phenomena, with more than 53,000 annotated sentences and over 200 modeled frames, EventNet-ITA constitutes the first systematic attempt to provide the Italian language with a publicly available resource for Frame Parsing of events, useful for a broad spectrum of research and application tasks. Our approach achieves a promising 0.9 strict F1-score for frame classification and 0.72 for frame element classification, on top of minimizing computational requirements.
EventNet-ITA: Italian Frame Parsing for Events
[ { "figure_caption": "Figure 2 :2Figure 2: An example of full-text annotation in EvN-ITA (English translation: The construction of the Alvitian fortification dates back to the time of the Norman invasion.).", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "ricostruita dalla devastazione del sisma la città fu [distrutta DESTROY-ING] nuovamente [...] (As soon as it was rebuilt from the devastation of the earthquake, the city was destroyed again [...]) Multi-word expressions are annotated when they break compositionality, for example in radere al suolo (raze to the ground) or aprire il fuoco (opening fire), or in verbal periphrastic use essere al corrente (being aware) or fare visita (paying a visit).", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "(for example MESSAGE-REQUEST vs. MESSAGE-QUESTIONING) or b) between frame elements that have a latent semantic correspondence in different frames (INTERLOCUTOR2-CONVERSATION vs. PARTY2-NEGOTIATION or REASON-BLAMING vs. OFFENSE-ACCUSE) or, still, c) between seman-", "figure_data": "", "figure_id": "fig_2", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "culminarono a Blois nel 1171 con la [morte DEATH] sul rogo di 31 ebrei.(culminated in Blois in 1711 with the death at the stake of 31 Jews.)", "figure_data": "Nell'aprile del 1700, Giovanni si am-malò terribilmente e si trovò quasi sulletto di [morte Ø]. (In April 1700, Johnfell terribly ill and was nearly on hisdeathbed.)", "figure_id": "tab_2", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Inter-annotator agreement scores.", "figure_data": ".", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Top 10 prediction errors between event frames.", "figure_data": "PredictedBLAMINGACCUSEHOSTILE_ENCOUNTERWARCONQUERINGOCCUPANCYACCUSEBLAMINGREPLACINGTAKE_PLACE_OFOCCUPANCYCONQUERINGBUILDINGMANUFACTURINGCREATE_ARTWORKTEXT_CREATIONKILLINGDEATHREQUESTQUESTIONING", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Top 10 prediction errors between frame elements (G = Gold, P = Predicted).", "figure_data": "", "figure_id": "tab_5", "figure_label": "6", "figure_type": "table" }, { "figure_caption": ". Negative examples are meant to improve the classifier's ability to work in an open-world setting and to generalize to extrinsic/unseen data.", "figure_data": "Lexical unit: istituirePositive example:La richiesta venne accolta e il papadiede l'autorizzazione a [istituire CRE-ATE_SOCIAL_ENTITY] [in InghilterraPLACE] [un tribunale ecclesiastico CRE-", "figure_id": "tab_7", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Correlation with the INVEST frame.", "figure_data": "Target: ARRIVINGRelated framesPMIDEPARTING1.94ENCOUNTER1.86MOVE AWAY1.82REMAIN IN PLACE1.78MOTION DOWNWARDS1.75...BEING MARRIED-1.55DECREASE ON A SCALE -1.58ACQUITTAL-1.59EARTHQUAKE-1.62TAKING SIDES-1.79", "figure_id": "tab_9", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Correlation with the ARRIVING frame.", "figure_data": "Target: CREATE ARTWORKRelated framesPMIPERFORMING ARTS 2.18ASSIGN TASK1.89TEMPORAL ORIGIN 1.73PUBLISHING1.72BEING LOCATED1.67...PURPOSE-1.49ROBBERY-1.50APPOINTING-1.51PROCESS END-1.59WAR-1.70", "figure_id": "tab_10", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Correlation with the CREATE ARTWORK frame.", "figure_data": "Target: TRIALRelated framesPMIACQUITTAL3.81SENTENCING3.59VERDICT3.12ACCUSE2.99EXECUTION2.80...FLEEING-1.29BEING LOCATED-1.36CREATE ARTWORK -1.37BEAT OPPONENT-1.39AGREEMENT-1.41", "figure_id": "tab_11", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Correlation with the TRIAL frame.", "figure_data": "Target: COMMUNICATIONRelated framesPMICONTACTING2.64QUESTIONING2.09ENCOUNTER2.04AWARENESS2.02GIVING1.88...EVENT ORDERING-1.22COUNTERATTACK-1.24APPOINTING ELECTION -1.25SUPPRESSING-1.25TAKE PLACE OF-1.53", "figure_id": "tab_12", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Correlation with the COMMUNICATION frame.", "figure_data": "", "figure_id": "tab_13", "figure_label": "11", "figure_type": "table" } ]
Marco Rovera
[ { "authors": "David Ahn", "journal": "", "ref_id": "b0", "title": "The stages of event extraction", "year": "2006" }, { "authors": "Collin F Baker; Charles J Fillmore; John B Lowe", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "The berkeley framenet project", "year": "1998" }, { "authors": "Roberto Basili; Silvia Brambilla; Danilo Croce; Fabio Tamburini", "journal": "", "ref_id": "b2", "title": "Developing a large scale framenet for italian: the iframenet experience", "year": "2017-12" }, { "authors": "Roberto Basili; Diego De Cao; Alessandro Lenci; Alessandro Moschitti; Giulia Venturi", "journal": "Springer", "ref_id": "b3", "title": "Evalita 2011: The frame labelingover italian texts task", "year": "2012" }, { "authors": "Silvia Brambilla; Danilo Croce; Fabio Tamburini; Roberto Basili", "journal": "CEUR WORK-SHOP PROCEEDINGS", "ref_id": "b4", "title": "Automatic induction of framenet lexical units in italian", "year": "2020" }, { "authors": "Tommaso Caselli", "journal": "", "ref_id": "b5", "title": "Italian event detection goes deep learning", "year": "2018" }, { "authors": "Tommaso Caselli; Valentina Bartalesi Lenzi; Rachele Sprugnoli; Emanuele Pianta; Irina Prodanof", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Annotating events, temporal expressions and relations in italian: the it-timeml experience for the ita-timebank", "year": "2011" }, { "authors": "Tommaso Caselli; Rachele Sprugnoli; Manuela Speranza; Monica Monachini", "journal": "", "ref_id": "b7", "title": "Eventi: Evaluation of events and temporal information at evalita 2014", "year": "2014" }, { "authors": "Yubo Chen; Liheng Xu; Kang Liu; Daojian Zeng; Jun Zhao", "journal": "", "ref_id": "b8", "title": "Event extraction via dynamic multipooling convolutional neural networks", "year": "2015" }, { "authors": "Jacob Cohen", "journal": "Educational and psychological measurement", "ref_id": "b9", "title": "A coefficient of agreement for nominal scales", "year": "1960" }, { "authors": " ", "journal": "ACE", "ref_id": "b10", "title": "Ace (automatic content extraction) english annotation guidelines for events", "year": "2005" }, { "authors": "Agata Cybulska; Piek Vossen", "journal": "", "ref_id": "b11", "title": "Historical event extraction from text", "year": "2011" }, { "authors": "Dipanjan Das; Desai Chen; F T André; Nathan Martins; Noah A Schneider; Smith", "journal": "Computational linguistics", "ref_id": "b12", "title": "Frame-semantic parsing", "year": "2014" }, { "authors": "Jens A De Bruijn; Hans De Moel; Brenden Jongman; Marleen C De Ruiter; Jurjen Wagemaker; Jeroen Cjh Aerts", "journal": "Scientific data", "ref_id": "b13", "title": "A global database of historic and real-time flood events based on social media", "year": "2019" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Alexis George R Doddington; Mark A Mitchell; Lance A Przybocki; Stephanie M Ramshaw; Ralph M Strassel; Weischedel", "journal": "", "ref_id": "b15", "title": "The automatic content extraction (ace) program-tasks, data, and evaluation", "year": "2004" }, { "authors": "Joe Ellis; Jeremy Getman; Dana Fore; Neil Kuster; Zhiyi Song; Ann Bies; Stephanie M Strassel", "journal": "Tac", "ref_id": "b16", "title": "Overview of linguistic resources for the tac kbp 2015 evaluations: Methodologies and results", "year": "2015" }, { "authors": "J Charles; Collin F Fillmore; Baker", "journal": "", "ref_id": "b17", "title": "Frame semantics for text understanding", "year": "2001" }, { "authors": "J Charles; Fillmore", "journal": "", "ref_id": "b18", "title": "Frame semantics and the nature of language", "year": "1976" }, { "authors": "Kung-Hsiang Huang; Mu Yang; Nanyun Peng", "journal": "", "ref_id": "b19", "title": "Biomedical event extraction with hierarchical knowledge graphs", "year": "2020" }, { "authors": "Elisabetta Jezek; Bernardo Magnini; Anna Feltracco; Alessia Bianchini; Octavian Popescu", "journal": "", "ref_id": "b20", "title": "Tpas: A resource of corpus-derived types predicateargument structures for linguistic analysis and semantic processing", "year": "2014" }, { "authors": "Heng Ji; Joel Nothman; Trang Dang", "journal": "", "ref_id": "b21", "title": "Overview of tac-kbp2016 trilingual edl and its impact on end-to-end cold-start kbp", "year": "2016" }, { "authors": "Christian Kay; Jane Roberts; Michael Samuels; Irené Wotherspoon", "journal": "Oxford University Press", "ref_id": "b22", "title": "Historical thesaurus of the Oxford English dictionary", "year": "2009" }, { "authors": "Dac Viet; Minh Lai; Heidi Van Nguyen; Thien Huu Kaufman; Nguyen", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Event extraction from historical texts: A new dataset for black rebellions", "year": "2021" }, { "authors": "Alessandro Lenci; Martina Johnson; Gabriella Lapesa", "journal": "", "ref_id": "b24", "title": "Building an italian framenet through semi-automatic corpus analysis", "year": "2010" }, { "authors": "Alessandro Lenci; Gabriella Lapesa; Giulia Bonansinga", "journal": "", "ref_id": "b25", "title": "Lexit: A computational resource on italian argument structure", "year": "2012" }, { "authors": "Alessandro Lenci; Simonetta Montemagni; Giulia Venturi; Maria Grazia Cutrulla", "journal": "", "ref_id": "b26", "title": "Enriching the isst-tanl corpus with semantic frames", "year": "2012" }, { "authors": "Diya Li; Lifu Huang; Ji Heng; Jiawei Han", "journal": "", "ref_id": "b27", "title": "Biomedical event extraction based on knowledgedriven tree-lstm", "year": "2019" }, { "authors": "Shasha Liao; Ralph Grishman", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Using document level cross-event inference to improve event extraction", "year": "2010" }, { "authors": "Yaojie Lu; Hongyu Lin; Jin Xu; Xianpei Han; Jialong Tang; Annan Li; Le Sun; Meng Liao; Shaoyi Chen", "journal": "", "ref_id": "b29", "title": "Text2event: Controllable sequence-tostructure generation for end-to-end event extraction", "year": "2021" }, { "authors": "Stefano Menini; Teresa Paccosi; Serra Sinem Tekiroglu; Sara Tonelli", "journal": "", "ref_id": "b30", "title": "Scent mining: Extracting olfactory events, smell sources and qualities", "year": "2023" }, { "authors": "Gosse Minnema; Sara Gemelli; Chiara Zanchi; Tommaso Caselli; Malvina Nissim", "journal": "Association for Computational Linguistics", "ref_id": "b31", "title": "Sociofillmore: A tool for discovering perspectives", "year": "2022" }, { "authors": "Gosse Minnema; Sara Gemelli; Chiara Zanchi; Viviana Patti; Tommaso Caselli; Malvina Nissim", "journal": "", "ref_id": "b32", "title": "Frame semantics for social nlp in italian: Analyzing responsibility framing in femicide news reports", "year": "2021" }, { "authors": "Teruko Mitamura; Yukari Yamakawa; Susan Holm; Zhiyi Song; Ann Bies; Seth Kulick; Stephanie Strassel", "journal": "", "ref_id": "b33", "title": "Event nugget annotation: Processes and issues", "year": "2015" }, { "authors": "Thien Huu Nguyen; Kyunghyun Cho; Ralph Grishman", "journal": "", "ref_id": "b34", "title": "Joint event extraction via recurrent neural networks", "year": "2016" }, { "authors": "Minh Trung; Thien Nguyen; Huu Nguyen", "journal": "", "ref_id": "b35", "title": "One for all: Neural joint modeling of entities and events", "year": "2019" }, { "authors": "Walker Orr; Prasad Tadepalli; Xiaoli Fern", "journal": "Association for Computational Linguistics", "ref_id": "b36", "title": "Event detection with neural networks: A rigorous empirical evaluation", "year": "2018" }, { "authors": "Alessio Palmero; Aprosio ; Giovanni Moretti", "journal": "", "ref_id": "b37", "title": "Tint 2.0: an all-inclusive suite for nlp in italian", "year": "2018" }, { "authors": "Giovanni Paolini; Ben Athiwaratkun; Jason Krone; Jie Ma; Alessandro Achille; Rishita Anubhai; Cicero Nogueira Dos Santos; Bing Xiang; Stefano Soatto", "journal": "", "ref_id": "b38", "title": "Structured prediction as translation between augmented natural languages", "year": "2021" }, { "authors": "Alan Ramponi; Rob Van Der Goot; Rosario Lombardo; Barbara Plank", "journal": "", "ref_id": "b39", "title": "Biomedical event extraction as sequence labeling", "year": "2020" }, { "authors": "Marco Rovera; Federico Nanni; Simone Paolo; Ponzetto ", "journal": "Journal on Computing and Cultural Heritage (JOCCH)", "ref_id": "b40", "title": "Event-based access to historical italian war memoirs", "year": "2021" }, { "authors": "Josef Ruppenhofer; Michael Ellsworth; Myriam Schwarzer-Petruck; Jan Christopher R Johnson; Scheffczyk", "journal": "", "ref_id": "b41", "title": "Framenet ii: Extended theory and practice", "year": "2006" }, { "authors": "Josef Ruppenhofer; Michael Ellsworth; Myriam Schwarzer-Petruck; Jan Christopher R Johnson; Scheffczyk", "journal": "", "ref_id": "b42", "title": "Framenet ii: Extended theory and practice", "year": "2016" }, { "authors": "Roser Saurí; Jessica Littman; Bob Knippen; Robert Gaizauskas; Andrea Setzer; James Pustejovsky", "journal": "Version", "ref_id": "b43", "title": "Timeml annotation guidelines", "year": "2006" }, { "authors": "Roxane Segers; Marieke Van Erp; Lourens Van Der; Lora Meij; Jacco Aroyo; Guus Van Ossenbruggen; Bob Schreiber; Johan Wielinga; Geertje Oomen; Jacobs", "journal": "", "ref_id": "b44", "title": "Hacking history via event extraction", "year": "2011" }, { "authors": "Matthew Sims; Jong Ho Park; David Bamman", "journal": "Association for Computational Linguistics", "ref_id": "b45", "title": "Literary event detection", "year": "2019" }, { "authors": "Rachele Sprugnoli; Sara Tonelli", "journal": "Natural Language Engineering", "ref_id": "b46", "title": "One, no one and one hundred thousand events: Defining and processing events in an inter-disciplinary perspective", "year": "2017" }, { "authors": "Rachele Sprugnoli; Sara Tonelli", "journal": "Computational Linguistics", "ref_id": "b47", "title": "Novel event detection and classification for historical texts", "year": "2019" }, { "authors": "Swabha Swayamdipta; Sam Thomson; Chris Dyer; Noah A Smith", "journal": "", "ref_id": "b48", "title": "Frame-semantic parsing with softmax-margin segmental rnns and a syntactic scaffold", "year": "2017" }, { "authors": "Swabha Swayamdipta; Sam Thomson; Kenton Lee; Luke Zettlemoyer; Chris Dyer; Noah A Smith", "journal": "", "ref_id": "b49", "title": "Syntactic scaffolds for semantic structures", "year": "2018" }, { "authors": "Sara Tonelli; Emanuele Pianta", "journal": "", "ref_id": "b50", "title": "Frame information transfer from english to italian", "year": "2008" }, { "authors": "Sara Tonelli; Daniele Pighin; Claudio Giuliano; Emanuele Pianta", "journal": "", "ref_id": "b51", "title": "Semi-automatic development of framenet for italian", "year": "2009" }, { "authors": "Rob Van Der Goot; Ahmet Üstün; Alan Ramponi; Ibrahim Sharaf; Barbara Plank", "journal": "Association for Computational Linguistics", "ref_id": "b52", "title": "Massive choice, ample tasks (MaChAmp): A toolkit for multitask learning in NLP", "year": "2021" }, { "authors": "Christopher Walker; Stephanie Strassel; Julie Medero; Kazuaki Maeda", "journal": "Linguistic Data Consortium", "ref_id": "b53", "title": "Ace 2005 multilingual training corpus", "year": "2006" } ]
[ { "formula_coordinates": [ 4, 327.96, 592.01, 15.76, 9.46 ], "formula_id": "formula_0", "formula_text": "[...]" } ]
2023-05-18
[ { "figure_ref": [ "fig_0", "fig_0" ], "heading": "Introduction", "publication_ref": [ "b11", "b15", "b16", "b27", "b36", "b10", "b32", "b24", "b4", "b26", "b53", "b2", "b14", "b22", "b14", "b12", "b39", "b5", "b0", "b29", "b52", "b17", "b45", "b14", "b38", "b23" ], "table_ref": [], "text": "In recent years, deep neural networks have achieved great success in many computer vision tasks, such as image classification [12,16,17,28,37], object recognition [11,33,25], and semantic segmentation [5,27,54]. As the performance of neural network models improves, their computational and storage costs also increase, making model compression an important research problem [3]. Knowledge distillation is an important model compression method [15].\nKnowledge distillation enables a smaller model with fewer parameters (the student model) to learn from a larger model (the teacher model) to achieve better performance. The vanilla knowledge distillation (KD) method uses the Kullback-Leibler (KL) divergence [23] to mimic the teacher model's logits using the student model [15], as shown in Figure 1(a). The student model learns from the logits of the teacher model to improve its performance. As researchers increasingly studied knowledge distillation, they enabled the student model to learn from the features of intermediate layers of the teacher model [13,40,6,1,30]. However, as the outputs of intermediate layers differ across deep learning models, the design complexity and computational costs of feature-based methods increase. Recently, some researchers have begun investigating distillation methods based on the logits of the teacher model [53,18,46]. The distillation loss is modified to enable the student model to The gradient obtained through backpropagation optimizes the student model and simultaneously optimizes the learning simplifier. Our SKD achieves better results than KD using the same loss function.\neffectively utilize knowledge of the teacher model, achieving distillation results comparable to or even superior to those of feature-based methods.\nIn logit-based methods, the logits of both the teacher and student models are softened using temperature, which leads to a softer label distribution, reduces the gap between the target class and other classes, and allows the distillation loss to focus more on other classes, thereby improving the training effect of the student model [15]. However, even with temperature, student models still cannot closely imitate a teacher model's logits due to insufficient capacity and limited data [39].\nIn real life, good teachers often simplify new knowledge according to students' abilities to help them better understand it. Based on the educational experience of human teachers, we propose a new method called student-friendly knowledge distillation (SKD), outlined in Figure 1(b), to optimize the output knowledge of the teacher model, making it easier for students to learn.\nSKD utilizes the learning simplifier to transform the output distribution of the teacher model into a new distribution that serves as the learning target for the student network. During the training process, the learning simplifier and the student model are jointly optimized using the distillation loss for gradient backpropagation. This allows the new logit distribution to better fit the characteristics of the student model, making it easier for the student model to imitate the teacher model. We design the learning simplifier using self-attention to better construct a simplified logit distribution for the student. The self-attention mechanism enables SKD to adjust the logit distribution of the teacher model along with the output of the student model by using the similarity relationships among the data in the output of the teacher model. This makes it easier for the student network to imitate the simplified logit distribution and learn the knowledge of the teacher network. To improve the learning effect of the learning simplifier on the relationships between data, we incorporate softening processing at the beginning of SKD. We use the temperature-scaled LogSoftmax function to soften the output of the teacher model, similar to temperature softening in the distillation loss. Larger models tend to produce sharper output distributions than smaller models [24], which means that the softened label distribution is more suitable for smaller student models to learn. We conducted extensive experiments, and the results showed that our SKD achieved the best performance in many combinations of knowledge distillation models.\nFurthermore, most existing knowledge distillation methods aim to improve either the distillation of the intermediate features or the distillation loss function of the logits. However, our SKD changes the logits of the teacher model without changing the distillation loss function. Therefore, we can use SKD in conjunction with existing knowledge distillation methods. Experimental results show that the combined method significantly improves upon the original methods, resulting in better-performing student models. 2\nTo summarize, the main contributions of our paper are as follows: " }, { "figure_ref": [], "heading": "Preliminaries", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Vanilla knowledge distillation", "publication_ref": [ "b14" ], "table_ref": [], "text": "The process of vanilla knowledge distillation (KD) [15] is shown in Figure 1(a). For the training data x with the label y in a dataset with K classes, the outputs of the teacher model and the student model are g t ∈ R K and g s ∈ R K , respectively. Using the softmax function yields the student's prediction p s = softmax (g s ) ∈ R K , and we can then compute the cross-entropy loss between the student's prediction and the ground-truth label:\nL CE = K i=1 y i log(p s i ).(1)\nUsing the softmax function with temperature, we obtain the softened teacher prediction p t = softmax (g t /T ) ∈ R K and the softened student prediction p s = softmax (g s /T ) ∈ R K . Through the temperature, the predictions become smoother over each class, so the distillation loss is better able to reflect the differences between the other classes in addition to the correct one. Then, we can compute the distillation loss between the softened predictions with the KL divergence:\nL KL = KL( p t || p s ) = K i=1 p t i log( p t i p s i ).(2)\nThe total loss of KD is:\nL total = αL CE + βL KL ,(3)\nwhere α and β are coefficients used to balance the two parts. Knowledge distillation optimizes the student model by optimizing this loss function." }, { "figure_ref": [], "heading": "Attention", "publication_ref": [ "b43" ], "table_ref": [], "text": "The attention used in our SKD is the standard self-attention in the Transformer [44]. First, the input data are encoded through a linear projection Linear 1 to obtain the corresponding query Q, key K, and value V of dimension D. Then, based on the query and key, the attention matrix A relating them is calculated:\nA = softmax(QK / √ D).(4\n) Then, the weighted sum of values is calculated based on the attention matrix A, and encoded through another linear projection Linear 2 to obtain the output of the self-attention: Output = Linear 2 (AV ).\n(5) Through self-attention, new representations of the data based on the relationships among these data are obtained." }, { "figure_ref": [], "heading": "Methods", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [ "fig_0" ], "heading": "Motivation", "publication_ref": [ "b23" ], "table_ref": [], "text": "In knowledge distillation, the capacity of the teacher model is greater than that of the student model, making it difficult for the student model to accurately simulate the output distribution of the teacher model. In real life, good teachers simplify complex knowledge before teaching it to their students. Inspired by this fact, we propose the student-friendly knowledge distillation (SKD) method, as shown in Figure 1(b). As it is difficult for the student model to generate complex and sharp outputs such as those of the teacher model [24], our SKD first softens the output of the teacher model via softening processing, making it easier for the student model to learn and more advantageous for the learning simplifier to handle.\nThe learning simplifier in SKD is used to modify the teacher output to reduce the difficulty of the student model to mimic the teacher model's output. The learning simplifier and the student model jointly optimize the distillation loss. The learning simplifier uses the real-time logits of the student model as its optimization objective. Therefore, the learning simplifier can transform the output of the teacher model, which is difficult for the student model to mimic, into a distribution that is more similar to the student model's output, thus reducing the difficulty of the student model to mimic the teacher model's output. By using SKD to make minor changes to the output of the teacher model, the student model can better mimic the teacher model, thereby improving the effectiveness of knowledge distillation." }, { "figure_ref": [ "fig_0" ], "heading": "Overall", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Our SKD performs softening processing and the learning simplifier on the output of the teacher model to obtain a new teacher distribution g SKD . The calculation process is shown in Figure 1(b).\nThe output of the teacher model g t ∈ R K is first softened to obtain the softened logit distribution:\ng t sof t = Softening(g t ).(6)\nThen, through the learning simplifier, the change in logits is obtained:\n∆ Simplif ier = Simplifier(g t sof t ).(7)\n∆ Simplif ier is added to the softened teacher logits g t sof t , and the output distribution of SKD is obtained:\ng SKD = ∆ Simplif ier + g t sof t .(8)\nUsing the softmax function with temperature, we obtain the softened prediction of SKD p SKD = softmax g SKD /T ∈ R K . Finally, similar to the original knowledge distillation method, the distillation loss L SKD can be calculated using the softmax function with temperature and the KL divergence, as in Eq. 2:\nL SKD = KL( p SKD || p s ).(9)\nTherefore, the total loss of SKD is:\nL total = L CE + αL SKD ,(10)\nwhere L CE is the cross-entropy loss between the student model's predictions and the true labels, as defined in Eq. (1). To facilitate parameter tuning, the coefficient of the cross-entropy loss is fixed at 1.0. α is the weight coefficient of the SKD loss, which adjusts the relative contributions of the distillation loss and the cross-entropy loss. The effect of the distillation loss is to make the student model mimic the output of the teacher model. The larger the value of α, the more the student model needs to pursue the output of the teacher model during training. However, if the gap between the teacher model and the student model is large, the student model will find it difficult to mimic the output of the teacher model. For models of the same type, the smaller the difference in performance between the teacher and student models, the larger the optimal value of α, which enables the student model to mimic the output of the teacher model better. However, the optimal value of α still needs to be determined through experiments. 1. Based on the results, we choose self-attention to implement our learning simplifier." }, { "figure_ref": [], "heading": "Softening processing", "publication_ref": [ "b14", "b23" ], "table_ref": [ "tab_2" ], "text": "The self-attention used in our learning simplifier focuses on the relationships among input data.\nTeacher models often have high confidence, resulting in the output distribution being dominated by the target class, with little difference in output distribution between similar classes. This makes it difficult for self-attention to learn the relationships among different data of the same class. In knowledge distillation, the logits are softened by temperature, which smooths the label distribution so that the logits can reflect the relationships among classes other than the target class [15]. Therefore, inputting the distribution softened by temperature into self-attention can improve the learning simplifier's ability to learn the relationships among different data, thereby enhancing the effectiveness of SKD.\nOn the other hand, using temperature to soften the logits of the teacher model is equivalent to using a higher temperature for the teacher model than for the student model, rather than using the same distillation temperature for both as in vanilla knowledge distillation; consequently, the output distribution of the teacher model is softer. Because models with more parameters tend to have sharper output distributions after training than models with fewer parameters [24], smaller student models more easily imitate softened distributions.\nWe conducted experiments to verify the effectiveness of using a temperature-scaled LogSoftmax function on the CIFAR-100 dataset, where the teacher model is ResNet32×4 and the student model is ResNet8×4. Table 2 shows the results. Softening the output of the teacher model with a LogSoftmax function with a temperature set to 4.0 leads to better results with our SKD." }, { "figure_ref": [], "heading": "Combination with other methods", "publication_ref": [], "table_ref": [], "text": "Notably, our SKD improves knowledge distillation by modifying the logits of the teacher model. As shown in 9, SKD uses the same distillation loss as vanilla KD. Therefore, SKD can be easily combined with other logit-based methods that modify the distillation loss. The modified logits in SKD can be used as the logits of the teacher model in other methods. Fusing SKD with other models can improve their effectiveness.\nOn the other hand, SKD only modifies the logits of the teacher model and does not change the structure of the intermediate layers in the model. Therefore, it does not conflict with feature-based methods. By adding the distillation losses, SKD is easily combined with feature-based methods to improve the performance of the original methods.\nAccording to our experiments in section 4.2, integrating SKD with the currently best-performing distillation methods from two categories results in state-of-the-art performance. " }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [ "b21", "b33" ], "table_ref": [], "text": "We conducted comprehensive experiments on image classification tasks on CIFAR-100 [22] and ImageNet [34]. The detailed experimental settings are given in Appendix A.1." }, { "figure_ref": [], "heading": "Comparison with state-of-the-art methods", "publication_ref": [ "b12", "b5", "b52" ], "table_ref": [ "tab_3", "tab_4", "tab_3", "tab_4", "tab_5" ], "text": "The results on the CIFAR-100 dataset are shown in Table 3 and Table 4, where Table 3 shows the results where the teacher and student models had the same architectures, and Table 4 shows the results where the teacher and student models had different architectures.\nComparing the experimental results of KD with those of SKD, SKD performs significantly better when using the same distillation loss function as KD. The largest improvement occurred when the teacher model was ResNet32×4 and the student model was ResNet8×4. The improvement reached 3.28%. This indicates that SKD can improve the learning performance of the student model simply by modifying the logits of the teacher model.\nOf the six experiments where the teacher and student models had the same architectures, SKD achieved the best performance compared to all feature-based and logit-based methods in five experiments. It ranked third in only one experiment where the feature-based method OFD [13] and Furthermore, SKD outperformed other state-of-the-art knowledge distillation methods, such as ReviewKD [6] and DKD [53], in 8 and 9 out of 11 teacher-student model combination experiments, respectively, achieving the best performance. Only in one teacher-student model combination did SKD not achieve a top-two result. This suggests that SKD can achieve the best distillation effect while keeping the design and training simple.\nThe results on the ImageNet dataset are shown in Table 5. Our SKD continued to perform significantly better than the classical KD. Moreover, compared with other distillation methods, SKD achieved the first and second-best results in two experiments based on top-1 accuracy and the second and third-best results in two experiments based on top-5 accuracy. This suggests that the performance of SKD is superior to that of most of the current best methods and that it achieved the best performance among logit-based methods." }, { "figure_ref": [], "heading": "Combination with other methods", "publication_ref": [ "b52", "b5" ], "table_ref": [ "tab_6" ], "text": "We combined SKD with the current best-performing logit-based method DKD [53] and the featurebased method ReviewKD [6] and performed experiments on the CIFAR-100 dataset. The experimental results are shown in Table 6. Using SKD in combination with the two other methods significantly improves the student model's accuracy, and the combined model achieved better performance than the two standalone methods. This strongly verifies the effectiveness of SKD and its compatibility with other knowledge distillation methods." }, { "figure_ref": [ "fig_1", "fig_1" ], "heading": "Analysis", "publication_ref": [ "b38" ], "table_ref": [ "tab_7", "tab_8" ], "text": "To elucidate the principles of SKD, we conducted analyses of five aspects of SKD: (1) the changes in the logits, (2) the distillation fidelity of the student model, (3) the visualization of the student model's output features, (4) the attention matrix in the learning simplifier (see Appendix A.3), and (5) the training efficiency of SKD (see Appendix A.3). The dataset used in the experiment of this section was CIFAR-100, the teacher model was ResNet32×4, and the student model was ResNet8×4.\nChanges in the logits By observing the changes in the logits before and after the application of SKD, we can see the changes in the learning objectives of the student model. Based on the attention matrix, we obtained the final output of the learning simplifier, which is the change in the logits. We calculated the average change in the target class and other classes of the data distribution on the training set, and the results are shown in Table 7. Compared with the value for other classes, the target class value is significantly reduced by the learning simplifier. This allows the student model to learn the relationships between other classes more effectively when using distillation loss for training, thereby improving the learning effectiveness of the student model. We also conducted experiments on the accuracy of the output of the teacher model before and after SKD was applied on the validation set, as shown in Table 8. We found that SKD did not change the accuracy of the teacher model, indicating that SKD did not improve the accuracy of the teacher's knowledge. Instead, SKD reduced the learning difficulty for the student model based on the relationships between classes in the output of the teacher model. This led to an improvement in the performance of knowledge distillation.\nTo study the changes in the output of the teacher model caused by SKD, we visualized the original teacher logit distribution and the SKD logit distribution. To make the visualization results clearer, we visualized the logits processed by the temperature-scaled LogSoftmax function. The visualization results are shown in Figure 2. From Figure 2, the distribution after being processed by SKD becomes smoother compared to the output of the teacher model. The value of the target class in the distribution is significantly lower than that of other classes. SKD makes the model output smoother and simpler for the student model, which makes it easier for the student model to learn. In addition, unlike the distillation temperature, SKD uses the learning simplifier to individually process each data point based on its similarity to other data in a batch. This allows SKD to obtain more finely tuned changes to the teacher logits compared to the distillation temperature, resulting in high distillation fidelity.\nComparison of distillation fidelity We use the average agreement between the predictions of the student model and the teacher model to measure the distillation fidelity [39]. A higher average agreement reflects a more faithful imitation of the student model. The calculation of the average agreement for n data points is as follows:\nAverage Agreement := 1 n n i=1 I{arg max j p s i,j = arg max j p t i,j }.(11)\nBy comparing the distillation fidelity using SKD and KD, we can determine whether SKD makes it easier for student models to mimic the knowledge of teacher models. The comparison results are shown in Table 9. The experimental results are consistent with our design idea of SKD; that is, our SKD helps student models to mimic teacher models more easily, significantly improving the distillation fidelity and consequently enhancing the effect of knowledge distillation." }, { "figure_ref": [ "fig_2" ], "heading": "Features visualization", "publication_ref": [ "b42", "b0", "b12", "b20", "b19", "b13", "b39", "b29", "b41", "b47", "b5", "b7", "b38", "b35", "b48", "b3", "b7", "b38", "b1", "b25", "b6", "b8", "b43", "b40", "b54", "b18", "b56", "b30", "b56", "b18" ], "table_ref": [], "text": "We also visualized the student model's features using t-SNE [43] as shown in Figure 3. The features of the student model trained with SKD are more compact within the same category, and the differences between different categories are more pronounced. This proves that SKD enables the student model to learn clearer relationships between categories and perform more accurate classification. Limitations SKD, as a logit-based knowledge distillation method, could not outperform state-ofthe-art feature-based methods on object detection tasks due to the lack of location knowledge in the logits. Besides, the relationships between different combinations of teacher-student models and the best value of the parameter α cannot be determined in SKD currently. We plan to find a method to determine the optimal α in future work.\nOn the other hand, feature-based methods enable the student to learn the features of the intermediate layers of the teacher model, thereby further boosting distillation performance. FitNet [1], OFD [13], and other methods [21,20,14] align the features of the student and teacher intermediate layers through the design of different conversion modules to achieve better knowledge transfer. CRD [40], RKD [30], and other methods [42,48] allow the student to learn the correlations among the teacher model's intermediate layer features to learn different aspects of the knowledge. In addition, Chen et al. [6] used multilevel distillation to learn multilevel knowledge and improve the effectiveness of distillation. While feature-based methods generally achieve better results, they require a more complex design and greater computational and storage costs than logit-based methods.\nIn addition, some research focuses on the principles of knowledge distillation [8,39,36,49,4]. Cho and Hariharan [8] believe that due to the large discrepancy in capacity between the student and teacher models, student models cannot imitate teacher models well. Stanton et al. [39] conducted in-depth research on the fidelity of student models imitating teacher models and summarized the reasons for low fidelity.\nThis paper focuses on logit-based methods. Previous methods focused on improving the distillation loss function or optimizing the distillation process, while our method focuses on the logits of the teacher model. By modifying the logits based on the correlations between the data and the student output, we aim to reduce the difficulty of learning of the student model to improve the distillation performance.\nAttention The attention mechanism was proposed in sequence models to capture the correlations of sequences without considering distance [2]. Self-attention computes internal attention within a sequence to obtain its representation [26,7]. Many attention-based models have since been applied in various fields, achieving excellent results [9,44,41,55].\nSome feature-based distillation methods incorporate attention into their approach [19,57,31]. Among them, CD [57] uses the attention mechanism to weight each channel of the intermediate-layer features of the teacher, thereby highlighting the information in the more important channels for the student. AFD [19] uses attention to select which intermediate layer features of the teacher model need to be distilled to the student model. However, attention-based methods have yet to be used in logit-based distillation methods." }, { "figure_ref": [], "heading": "A.3 More Analysis", "publication_ref": [], "table_ref": [], "text": "Attention matrix analysis The attention matrix is crucial to the attention mechanism. By visualizing the attention matrix in the learning simplifier, we can see how attention weights are distributed in a batch of data. In the CIFAR-100 dataset, there are a total of 20 superclasses, each of which contains five classes. Therefore, for a given class, there are four classes that are similar to it (i.e., members of the same superclass) and 95 classes that are different. We sorted the attention matrix by superclass, randomly selected three superclasses from a batch of data, and created an intuitive visualization of the attention matrix, as shown in Figure 4.\nIn the heatmap, the attention weights for data of similar classes are larger than those of other classes. This indicates that for similar data, the original output distribution of the teacher model is similar. When using SKD to modify the logits, more attention is given to data of similar classes to obtain the change in the logits of the teacher model, which allows SKD to process each data distribution separately.\nEfficiency analysis We analyzed the training efficiency of representative knowledge distillation methods to demonstrate the high efficiency of SKD. Since SKD targets the logits of the teacher model, it only requires simple processing after the teacher model generates the logit distribution, without the need for a complex knowledge transformation in the intermediate layer between the teacher and student models that feature-based methods employ. Therefore, the training efficiency of SKD is comparable to that of logit-based methods. As shown in Figure 5, our SKD achieves the best training effect, while the training time is close to that of the fastest method, KD." }, { "figure_ref": [], "heading": "A Appendix", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Experimental settings", "publication_ref": [ "b27", "b50", "b34", "b36", "b11", "b49", "b0", "b20", "b29", "b39", "b12", "b5", "b14", "b52", "b5", "b52", "b21", "b33", "b19", "b21", "b27", "b50", "b34", "b36", "b11", "b49", "b33" ], "table_ref": [], "text": "Our experiments included various commonly used models, such as ShuffleNet [28,51], MobileNet-V2 [35], VGG [37], ResNet [12], and WRN [50], as teacher and student models. Our experiments also compared feature-based methods, such as FitNet [1], AT [21], RKD [30], CRD [40], OFD [13], and ReviewKD [6], as well as logit-based methods, such as KD [15] and DKD [53]. These methods are representative knowledge distillation methods and include the state-of-the-art feature-based and logit-based methods, namely, ReviewKD [6] and DKD [53], respectively.\nWe compared our method with other state-of-the-art models on standard datasets, including the following: CIFAR-100 [22] is a commonly used image classification dataset consisting of 60,000 32x32 images across 100 categories, with 50,000 images in the training set and 10,000 in the test set.\nImageNet [34] is a large-scale image classification dataset with 1,000 categories. The dataset includes over 1.2 million training images and 50,000 test images.\nThe details of the experiments were kept the same as in DKD [20]. The specific training details are as follows:\nFor CIFAR-100 [22], we trained the models using the SGD optimizer for 240 epochs. The batch size used was 64. For ShuffleNet [28,51] and MobileNet-V2 [35], we used an initial learning rate of 0.01, while for VGG [37], ResNet [12], and WRN [50] models, we used an initial learning rate of 0.05. Then, during the training process, the learning rate decayed by a factor of 10 at the 150th, 180th, and 210th epochs. The weight decay and momentum of the optimizer were set to 5e-4 and 0.9, respectively. The weight of the cross-entropy loss in the distillation loss was set to 1.0, and the temperature was set to 4. The linear warm-up in the training process was set to 20 epochs. For our learning simplifier, we set the dimensions of q,k and v in the self-attention to 512. The learning rate of the parameters in SKD was set to 3e-5, and the weight decay of the optimizer (SGD) was set to 5e-4. The dropout ratio in the output layer was set to 0.5. The value of the alpha coefficient in the distillation loss was adjusted for different teacher-student combinations.\nFor ImageNet [34], we trained the models using the SGD optimizer for a total of 100 epochs. The batch size was 512. The initial learning rate was set to 0.2 and divided by 10 for every 30 epochs. The weight decay was set to 1e-4. The weight of the cross-entropy loss in the distillation loss was set to 1.0, and the temperature was set to 1. For the ImageNet dataset, the dimensions of q,k and v in the self-attention of SKD were set to 2048, while the other settings were the same as those for the CIFAR-100 dataset.\nOur experiments on CIFAR-100 are trained with 1 NVIDIA V100, and the experiments on ImageNet are trained with 8 NVIDIA V100." }, { "figure_ref": [], "heading": "A.2 Related work", "publication_ref": [ "b14", "b45", "b52", "b17", "b51", "b55", "b9", "b28", "b44", "b7", "b37", "b12", "b39", "b5", "b0", "b29", "b20", "b13", "b19", "b41", "b46", "b47", "b31", "b55", "b52", "b45", "b17", "b28", "b37" ], "table_ref": [], "text": "Knowledge distillation The concept of knowledge distillation for deep learning models was proposed by Hinton et al. [15]. It allows a small student model to be trained simultaneously with the ground-truth labels and soft labels generated by the teacher model, which is softened using the distillation temperature. Most knowledge distillation methods can be classified as logit-based methods [46,53,18,52,56,10,29,45,8,38] or feature-based methods [13,40,6,1,30,21,14,20,42,47,48,32].\nLogit-based methods optimize the distillation loss function of the logits to enable the student model to learn from the teacher model more effectively. Among them, WSL [56] weights the original distillation loss based on the relationships between the teacher and student logits to balance the bias-variance tradeoff during training. DKD [53], NKD [46], and DIST [18] analyze and optimize the original loss function, proposing new loss functions to flexibly control the teacher knowledge the student model needs to learn. TAKD [29] and DGKD [38] use an intermediate-sized \"teacher assistant\" model to help transfer the knowledge of the teacher model to the student model, thereby avoiding poor learning performance of the student model caused by large differences in the capacity between the teacher and student models. " }, { "figure_ref": [], "heading": "A.4 Broader Impacts", "publication_ref": [], "table_ref": [], "text": "Knowledge distillation is a basic deep neural network training method, which prevents us from giving specific application impacts. However, since the student model needs to learn from the teacher model in knowledge distillation, we need to additionally consider the model security of the teacher model when we consider the model security of the student model, in addition to common factors, such as data security issues. Because a teacher model with security flaws is likely to train a student model with similar security issues. Therefore, we need to check the security of the teacher model before performing knowledge distillation to avoid being attacked because of the teacher model." } ]
In knowledge distillation, the knowledge from the teacher model is often too complex for the student model to thoroughly process. However, good teachers in real life always simplify complex material before teaching it to students. Inspired by this fact, we propose student-friendly knowledge distillation (SKD) to simplify teacher output into new knowledge representations, which makes the learning of the student model easier and more effective. SKD contains a softening processing and a learning simplifier. First, the softening processing uses the temperature hyperparameter to soften the output logits of the teacher model, which simplifies the output to some extent and makes it easier for the learning simplifier to process. The learning simplifier utilizes the attention mechanism to further simplify the knowledge of the teacher model and is jointly trained with the student model using the distillation loss, which means that the process of simplification is correlated with the training objective of the student model and ensures that the simplified new teacher knowledge representation is more suitable for the specific student model. Furthermore, since SKD does not change the form of the distillation loss, it can be easily combined with other distillation methods that are based on the logits or features of intermediate layers to enhance its effectiveness. Therefore, SKD has wide applicability. The experimental results on the CIFAR-100 and ImageNet datasets show that our method achieves state-of-the-art performance while maintaining high training efficiency.
Student-friendly Knowledge Distillation
[ { "figure_caption": "Figure 1 :1Figure 1: Illustration of vanilla knowledge distillation (KD) and our student-friendly knowledge distillation (SKD). (a) KD uses the outputs of the teacher and student models to calculate the distillation loss. (b) Our SKD transforms the outputs of the teacher model to obtain the new SKD logits, which are then compared with the logits of the student model to calculate the distillation loss.The gradient obtained through backpropagation optimizes the student model and simultaneously optimizes the learning simplifier. Our SKD achieves better results than KD using the same loss function.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Visualization of the teacher logits and the SKD logits for 100 classes on a random image.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: t-SNE visualization of the student logits.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Comparison of different implementations of the learning simplifier.", "figure_data": "Learning Simplifier Top-1 (%)1-layer FC75.182-layer FC75.28Self-attention75.83", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Softening Processing Effectiveness.For the design of the learning simplifier, we considered using either fully connected (FC) layers or self-attention, both of which can construct output distributions that are closer to the student model based on the output of the teacher model. The student model mainly learns the relationships between categories in logit-based knowledge distillation methods. Unlike FC layers, which only consider the input of a single data point, self-attention can learn the relationships between each batch of data and weight their values according to the relationships to obtain the final output. We conducted experiments comparing the different implementation methods on the implementation methods on the CIFAR-100 dataset, where the teacher model is ResNet32×4 and the student model is ResNet8×4, and the results are shown in Table", "figure_data": "Softening Temperature Top-1 (%)-75.833.076.424.076.845.076.51", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on the CIFAR-100 dataset. Teacher and student models have the same architectures. All reported accuracy results are averaged over five trials. ∆ denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.", "figure_data": "TeacherResNet32×4 ResNet110 ResNet56 WRN-40-2 WRN-40-2 VGG13 79.55 74.31 72.34 75.61 75.61 74.64StudentResNet8×4 72.50ResNet32 ResNet20 WRN-16-2 WRN-40-1 VGG8 71.14 69.06 73.26 71.98 70.36Feature-based methodsFitNet [1]73.5270.9869.0273.5972.0871.37RKD [30]72.5071.9069.8172.9171.8070.43CRD [40]75.7373.7171.3175.6674.3673.90OFD [13]74.8872.7869.9675.5075.2373.30ReviewKD [6]75.6873.7371.2376.2875.1173.80Logits-based methodsKD [15]73.5673.4271.0875.0173.7573.43DKD [53]76.1373.7171.6475.5274.4374.57SKD76.8474.0671.7376.2974.5274.94∆+3.28+0.64+0.65+1.28+0.77+1.51", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Results on the CIFAR-100 dataset. Teacher and student models have different architectures. All reported accuracy results are averaged over five trials. ∆ denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.", "figure_data": "TeacherResNet32×4 79.55WRN-40-2 75.61ResNet50 79.34VGG13 74.64ResNet32×4 79.55StudentShuffleNet-V2 ShuffleNet-V1 MobileNet-V2 MobileNet-V2 71.82 70.50 64.60 64.60VGG8 70.36Feature-based methodsFitNet [1]74.2973.5463.1163.6671.72RKD [30]74.0873.2765.0564.9070.90CRD [40]76.0475.9469.5569.3673.65OFD [13]77.0976.6365.8165.2373.52ReviewKD [6]77.1977.4067.0769.0074.19Logits-based methodsKD [15]75.3775.5268.7368.0272.70DKD [53]76.9076.6570.4669.4174.32SKD76.9676.9669.9969.2574.59∆+1.59+1.44+1.26+1.23+1.89", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Results on the ImageNet dataset. The SKD results are averaged over three trials. The results of other methods are cited in[53]. ∆ denotes the improvement of SKD over the KD method. The results marked in red and blue are the best and second-best results, respectively.", "figure_data": "Feature-based methodsLogits-based methodsTeacher(Student)AT [21] OFD [13] CRD [40] ReviewKD [6] KD [15] DKD [53] SKD∆ResNet34Top-170.9670.8171.1771.6170.6671.7071.86 +1.20(ResNet18)Top-590.0189.9890.1390.5189.8890.4190.44 +0.56ResNet50Top-169.5671.2571.3772.5668.5872.0572.24 +3.66(MobileNet-V2) Top-589.3390.3490.4191.0088.9891.0590.56 +1.58", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Accuracy (%) of SKD combined with other methods. All reported accuracy results are averaged over five trials on the CIFAR-100 dataset. ∆ represents the difference in the accuracy before and after fusion with SKD.", "figure_data": "TeacherResNet32×4 VGG13 79.55 74.64ResNet32×4 79.55VGG13 74.64StudentResNet8×4 72.50VGG8 ShuffleNet-V2 MobileNet-V2 70.36 71.82 64.60ReviewKD [6]75.6873.8077.1969.00SKD+ReviewKD77.2075.0777.3569.82∆+1.52+1.27+0.16+2.75DKD [53]76.1374.5776.9069.41SKD+DKD76.6875.1577.5269.44∆+0.55+0.58+0.62+0.03", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "The outputs of the learning simplifier for different classes.", "figure_data": "Table 8: Accuracy before and after the applica-tion of SKD.Class∆SKD Top-1 (%) Top-5 (%)Target -0.38 Others +0.0279.55 79.5594.62 94.62", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Comparison results of the average agreement.", "figure_data": "Method Training Set Validation SetKD0.860.75SKD0.920.79", "figure_id": "tab_8", "figure_label": "9", "figure_type": "table" } ]
Mengyang Yuan; Bo Lang Fengnan Quan
[ { "authors": "Ballas Romero Adriana; Samira Nicolas; Chassang Ebrahimi; Gatta Antoine; Carlo; Yoshua", "journal": "Proc. ICLR", "ref_id": "b0", "title": "Fitnets: Hints for thin deep nets", "year": "2015" }, { "authors": "Dzmitry Bahdanau; Kyunghyun Cho; Yoshua Bengio", "journal": "", "ref_id": "b1", "title": "Neural machine translation by jointly learning to align and translate", "year": "2014" }, { "authors": "Cristian Buciluǎ; Rich Caruana; Alexandru Niculescu-Mizil", "journal": "", "ref_id": "b2", "title": "Model compression", "year": "" }, { "authors": "Keshigeyan Chandrasegaran; Ngoc-Trung Tran; Yunqing Zhao; Ngai-Man Cheung", "journal": "PMLR", "ref_id": "b3", "title": "Revisiting label smoothing and knowledge distillation compatibility: What was missing?", "year": "" }, { "authors": "Liang-Chieh Chen; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b4", "title": "Rethinking atrous convolution for semantic image segmentation", "year": "2017" }, { "authors": "Pengguang Chen; Shu Liu; Hengshuang Zhao; Jiaya Jia", "journal": "", "ref_id": "b5", "title": "Distilling knowledge via knowledge review", "year": "" }, { "authors": "Jianpeng Cheng; Li Dong; Mirella Lapata", "journal": "", "ref_id": "b6", "title": "Long short-term memory-networks for machine reading", "year": "2016" }, { "authors": "Hyun Jang; Bharath Cho; Hariharan", "journal": "", "ref_id": "b7", "title": "On the efficacy of knowledge distillation", "year": "" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly", "journal": "", "ref_id": "b8", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "Tommaso Furlanello; Zachary Lipton; Michael Tschannen; Laurent Itti; Anima Anandkumar", "journal": "PMLR", "ref_id": "b9", "title": "Born again neural networks", "year": "" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b10", "title": "Mask r-cnn", "year": "" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b11", "title": "Deep residual learning for image recognition", "year": "" }, { "authors": "Byeongho Heo; Jeesoo Kim; Sangdoo Yun; Hyojin Park; Nojun Kwak; Jin Young Choi", "journal": "", "ref_id": "b12", "title": "A comprehensive overhaul of feature distillation", "year": "" }, { "authors": "Byeongho Heo; Minsik Lee; Sangdoo Yun; Jin Young Choi", "journal": "", "ref_id": "b13", "title": "Knowledge transfer via distillation of activation boundaries formed by hidden neurons", "year": "" }, { "authors": "Geoffrey Hinton; Oriol Vinyals; Jeff Dean", "journal": "", "ref_id": "b14", "title": "Distilling the knowledge in a neural network", "year": "2015" }, { "authors": "Menglong Andrew G Howard; Bo Zhu; Dmitry Chen; Weijun Kalenichenko; Tobias Wang; Marco Weyand; Hartwig Andreetto; Adam", "journal": "", "ref_id": "b15", "title": "Mobilenets: Efficient convolutional neural networks for mobile vision applications", "year": "2017" }, { "authors": "Jie Hu; Li Shen; Gang Sun", "journal": "", "ref_id": "b16", "title": "Squeeze-and-excitation networks", "year": "" }, { "authors": "Tao Huang; Shan You; Fei Wang; Chen Qian; Chang Xu", "journal": "", "ref_id": "b17", "title": "Knowledge distillation from a stronger teacher", "year": "2022" }, { "authors": "Mingi Ji; Byeongho Heo; Sungrae Park", "journal": "", "ref_id": "b18", "title": "Show, attend and distill: Knowledge distillation via attention-based feature matching", "year": "" }, { "authors": "Jangho Kim; Seonguk Park; Nojun Kwak", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Paraphrasing complex network: Network compression via factor transfer", "year": "2018" }, { "authors": "Nikos Komodakis; Sergey Zagoruyko", "journal": "", "ref_id": "b20", "title": "Paying more attention to attention: improving the performance of convolutional neural networks via attention transfer", "year": "" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b21", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Solomon Kullback; Richard A Leibler", "journal": "The annals of mathematical statistics", "ref_id": "b22", "title": "On information and sufficiency", "year": "1951" }, { "authors": "Kimin Lee; Honglak Lee; Kibok Lee; Jinwoo Shin", "journal": "", "ref_id": "b23", "title": "Training confidence-calibrated classifiers for detecting out-of-distribution samples", "year": "2017" }, { "authors": "Tsung-Yi Lin; Piotr Dollár; Ross Girshick; Kaiming He; Bharath Hariharan; Serge Belongie", "journal": "", "ref_id": "b24", "title": "Feature pyramid networks for object detection", "year": "" }, { "authors": "Zhouhan Lin; Minwei Feng; Cicero Nogueira Dos Santos; Mo Yu; Bing Xiang; Bowen Zhou; Yoshua Bengio", "journal": "", "ref_id": "b25", "title": "A structured self-attentive sentence embedding", "year": "2017" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b26", "title": "Fully convolutional networks for semantic segmentation", "year": "" }, { "authors": "Ningning Ma; Xiangyu Zhang; Hai-Tao Zheng; Jian Sun", "journal": "", "ref_id": "b27", "title": "Shufflenet v2: Practical guidelines for efficient cnn architecture design", "year": "" }, { "authors": "Mehrdad Seyed Iman Mirzadeh; Ang Farajtabar; Nir Li; Akihiro Levine; Hassan Matsukawa; Ghasemzadeh", "journal": "", "ref_id": "b28", "title": "Improved knowledge distillation via teacher assistant", "year": "" }, { "authors": "Wonpyo Park; Dongju Kim; Yan Lu; Minsu Cho", "journal": "", "ref_id": "b29", "title": "Relational knowledge distillation", "year": "" }, { "authors": "Peyman Passban; Yimeng Wu; Mehdi Rezagholizadeh; Qun Liu", "journal": "", "ref_id": "b30", "title": "Alp-kd: Attention-based layer projection for knowledge distillation", "year": "" }, { "authors": "Baoyun Peng; Xiao Jin; Jiaheng Liu; Dongsheng Li; Yichao Wu; Yu Liu; Shunfeng Zhou; Zhaoning Zhang", "journal": "", "ref_id": "b31", "title": "Correlation congruence for knowledge distillation", "year": "" }, { "authors": "Kaiming Shaoqing Ren; Ross He; Jian Girshick; Sun", "journal": "Advances in neural information processing systems", "ref_id": "b32", "title": "Faster r-cnn: Towards real-time object detection with region proposal networks", "year": "2015" }, { "authors": "Olga Russakovsky; Jia Deng; Hao Su; Jonathan Krause; Sanjeev Satheesh; Sean Ma; Zhiheng Huang; Andrej Karpathy; Aditya Khosla; Michael Bernstein", "journal": "International journal of computer vision", "ref_id": "b33", "title": "Imagenet large scale visual recognition challenge", "year": "2015" }, { "authors": "Mark Sandler; Andrew Howard; Menglong Zhu; Andrey Zhmoginov; Liang-Chieh Chen", "journal": "", "ref_id": "b34", "title": "Mobilenetv2: Inverted residuals and linear bottlenecks", "year": "" }, { "authors": "Zhiqiang Shen; Zechun Liu; Dejia Xu; Zitian Chen; Kwang-Ting Cheng; Marios Savvides", "journal": "", "ref_id": "b35", "title": "Is label smoothing truly incompatible with knowledge distillation: An empirical study", "year": "2021" }, { "authors": "Karen Simonyan; Andrew Zisserman", "journal": "", "ref_id": "b36", "title": "Very deep convolutional networks for large-scale image recognition", "year": "2014" }, { "authors": "Wonchul Son; Jaemin Na; Junyong Choi; Wonjun Hwang", "journal": "", "ref_id": "b37", "title": "Densely guided knowledge distillation using multiple teacher assistants", "year": "" }, { "authors": "Samuel Stanton; Pavel Izmailov; Polina Kirichenko; Alexander A Alemi; Andrew G Wilson", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b38", "title": "Does knowledge distillation really work?", "year": "2021" }, { "authors": "Yonglong Tian; Dilip Krishnan; Phillip Isola", "journal": "", "ref_id": "b39", "title": "Contrastive representation distillation", "year": "2019" }, { "authors": "Hugo Touvron; Matthieu Cord; Matthijs Douze; Francisco Massa; Alexandre Sablayrolles; Hervé Jégou", "journal": "PMLR", "ref_id": "b40", "title": "Training data-efficient image transformers & distillation through attention", "year": "" }, { "authors": "Frederick Tung; Greg Mori", "journal": "", "ref_id": "b41", "title": "Similarity-preserving knowledge distillation", "year": "" }, { "authors": "Laurens Van Der Maaten; Geoffrey Hinton", "journal": "Journal of machine learning research", "ref_id": "b42", "title": "Visualizing data using t-sne", "year": "2008" }, { "authors": "Ashish Vaswani; Noam Shazeer; Niki Parmar; Jakob Uszkoreit; Llion Jones; Aidan N Gomez; Łukasz Kaiser; Illia Polosukhin", "journal": "Advances in neural information processing systems", "ref_id": "b43", "title": "Attention is all you need", "year": "2017" }, { "authors": "Chenglin Yang; Lingxi Xie; Chi Su; Alan L Yuille", "journal": "", "ref_id": "b44", "title": "Snapshot distillation: Teacher-student optimization in one generation", "year": "" }, { "authors": "Zhendong Yang; Zhe Li; Yuan Gong; Tianke Zhang; Shanshan Lao; Chun Yuan; Yu Li", "journal": "", "ref_id": "b45", "title": "Rethinking knowledge distillation via cross-entropy", "year": "2022" }, { "authors": "Zhendong Yang; Zhe Li; Mingqi Shao; Dachuan Shi; Zehuan Yuan; Chun Yuan", "journal": "Springer", "ref_id": "b46", "title": "Masked generative distillation", "year": "2022" }, { "authors": "Junho Yim; Donggyu Joo; Jihoon Bae; Junmo Kim", "journal": "", "ref_id": "b47", "title": "A gift from knowledge distillation: Fast optimization, network minimization and transfer learning", "year": "" }, { "authors": "Li Yuan; Francis Eh Tay; Guilin Li; Tao Wang; Jiashi Feng", "journal": "", "ref_id": "b48", "title": "Revisiting knowledge distillation via label smoothing regularization", "year": "" }, { "authors": "Sergey Zagoruyko; Nikos Komodakis", "journal": "", "ref_id": "b49", "title": "Wide residual networks", "year": "2016" }, { "authors": "Xiangyu Zhang; Xinyu Zhou; Mengxiao Lin; Jian Sun", "journal": "", "ref_id": "b50", "title": "Shufflenet: An extremely efficient convolutional neural network for mobile devices", "year": "" }, { "authors": "Ying Zhang; Tao Xiang; Timothy M Hospedales; Huchuan Lu", "journal": "", "ref_id": "b51", "title": "Deep mutual learning", "year": "" }, { "authors": "Borui Zhao; Quan Cui; Renjie Song; Yiyu Qiu; Jiajun Liang", "journal": "", "ref_id": "b52", "title": "Decoupled knowledge distillation", "year": "" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b53", "title": "Pyramid scene parsing network", "year": "" }, { "authors": "Daquan Zhou; Bingyi Kang; Xiaojie Jin; Linjie Yang; Xiaochen Lian; Zihang Jiang; Qibin Hou; Jiashi Feng", "journal": "", "ref_id": "b54", "title": "Deepvit: Towards deeper vision transformer", "year": "2021" }, { "authors": "Helong Zhou; Liangchen Song", "journal": "", "ref_id": "b55", "title": "Rethinking soft labels for knowledge distillation: A bias-variance tradeoff perspective", "year": "" }, { "authors": "Zaida Zhou; Chaoran Zhuge; Xinwei Guan; Wen Liu", "journal": "", "ref_id": "b56", "title": "Channel distillation: Channel-wise attention for knowledge distillation", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 260.87, 402.58, 243.13, 30.32 ], "formula_id": "formula_0", "formula_text": "L CE = K i=1 y i log(p s i ).(1)" }, { "formula_coordinates": [ 3, 230.36, 495.31, 273.64, 30.32 ], "formula_id": "formula_1", "formula_text": "L KL = KL( p t || p s ) = K i=1 p t i log( p t i p s i ).(2)" }, { "formula_coordinates": [ 3, 254.27, 540.16, 249.73, 9.65 ], "formula_id": "formula_2", "formula_text": "L total = αL CE + βL KL ,(3)" }, { "formula_coordinates": [ 3, 250.39, 642.04, 249.74, 17.88 ], "formula_id": "formula_3", "formula_text": "A = softmax(QK / √ D).(4" }, { "formula_coordinates": [ 4, 259.07, 392.19, 244.93, 12.69 ], "formula_id": "formula_4", "formula_text": "g t sof t = Softening(g t ).(6)" }, { "formula_coordinates": [ 4, 238.71, 431.4, 265.29, 12.69 ], "formula_id": "formula_5", "formula_text": "∆ Simplif ier = Simplifier(g t sof t ).(7)" }, { "formula_coordinates": [ 4, 246.21, 478.23, 257.79, 12.69 ], "formula_id": "formula_6", "formula_text": "g SKD = ∆ Simplif ier + g t sof t .(8)" }, { "formula_coordinates": [ 4, 256.78, 545.34, 247.22, 11.72 ], "formula_id": "formula_7", "formula_text": "L SKD = KL( p SKD || p s ).(9)" }, { "formula_coordinates": [ 4, 254.61, 584.5, 249.39, 9.65 ], "formula_id": "formula_8", "formula_text": "L total = L CE + αL SKD ,(10)" }, { "formula_coordinates": [ 8, 177.36, 657.71, 326.64, 30.32 ], "formula_id": "formula_9", "formula_text": "Average Agreement := 1 n n i=1 I{arg max j p s i,j = arg max j p t i,j }.(11)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b3", "b7", "b2", "b8", "b9", "b0", "b1", "b2", "b3", "b4", "b5", "b6", "b7", "b9", "b10", "b11" ], "table_ref": [], "text": "Benefited from the advancement of photography and sensor technologies, the accessibility and analysis of ultrahigh resolution (UHR) images has opened new horizons for the computer vision community, playing an increasingly important role in a wide range of applications, including but\nThe most commonly-used datasets in existing UHR segmentation methods include DeepGlobe [4], Inria Aerial [8] and Citysacpes [3]. According the definition of UHR medias [9,10], an image with at least 2048×1080 (2.2M) pixels are regarded as 2K high resolution media. An image with at least 3,840×1,080 (4.1M) pixels reaches the bare minimum bar of 4K resolution, and 4K ultra-high definition media usually refers to a minimum resolution of 3,840×2,160 (8.3M). However, except for Inria Aeral which reaches to 5,000×5,000 pixels, the average resolution of all other two datasets are below 2,500×2,500 (6.2M), thus actually they are not strictly UHR medias. Besides, DeepGlobe also adopts coarse annotations that result in numbers of noises. Although the utra-high resolution, Inria Aerial contains only 180 images in limited scenes, and only annotates one category of building, which is not sufficient to fully verify the performance of UHR segmentation methods and limits the development of the community. Therefore, a novel large-scale benchmark dataset covering a wide range of scenes with full fine-grained dense annotations is urgently needed to facilitate the field. To this end, the URUR dataset is proposed in the paper, in this meaning of Ultra-High Resolution dataset with Ultra-Rich Context. Firstly for the resolution, URUR contains 3,008 UHR images of size 5,120×5,120 (up to 26M), coming from a wide range of complex scenes in 63 cities. For annotations, there are 80 billion manually annotated pixels, including 2 million finegrained instances with 8 categories, which is of ultra-high context and far superior to all the existing UHR datasets. Visualization samples and detailed statistics are revealed in Figure 1 and Section 3.\nIn order to balance the memory occupation and accuracy when the image resolution grows to ultra-high, earlier Figure 1. The comparison between natural datasets (Pascal VOC [1], COCO [2], Cityscapes [3]), and representative UHR datasets (Deep-Globe [4], ISIC [5], UDD6 [6], UAVid [7], Inria Aerial [8] and URUR). As shown that UHR images (from b to g) cover a larger filed of view and contain more regions with very large contrast in both scale and shape, than natural images (a). Existing UHR datasets either adopt coarse annotations (b, d, e) or only annotate one category (c, f). The proposed URUR dataset (h) utilizes fine-grained dense annotations for whole 8 categories.\nworks for UHR segmentation utilize a two-branch globallocal collaborative network to preserve both global and local information, taking the globally down-sampled image and locally cropped patches as inputs respectively. The representative works include GLNet [10] and FCtL [11]. However, this type of framework requires multiple predictions on the patches thus the overall inference speed is very slow. To further achieve a better balance among accuracy, memory and inference speed, ISDNet [12] is proposed to integrate shallow and deep networks for efficient segmentation. The shallow branch has fewer layers and faster inference speed, its input does not need any downsampling or cropping. For the deep branch, the input image is directly down-sampled to ensure high inference speed. Then a heavy relation-aware feature (RAF) module is utilized to exploit the relationship between the shallow and deep feature. In this paper, we propose WSDNet, the evolution of ISDNet, to formulate a more efficient and effective framework for UHR segmentation. Specifically, multi-level Discrete Wavelet Transform (DWT) and Inverse Discrete Wavelet Transform (IWT) are naturally integrated to release computation burden while preserve more spatial details in the deep branch, thus RAF can be removed for higher inference speed. The Wavelet Smooth Loss (WSL) is also designed to reconstruct original structured context and texture distribution with the smooth constrain in frequency domain.\nOverall, the contributions of this paper are summarized as follows:\n• We introduce the URUR dataset, a novel large-scale dataset covering a wide range of scenes with full finegrained dense annotations, which is superior to all the exiting UHR datastes to our knowledge.\n• WSDNet is proposed to preserve more spatial details with multi-level DWT-IWT, and a Wavelet Smooth Loss is presented to reconstruct original structured context and texture distribution with the smooth constrain in frequency domain.\n• Statistics and experiments demonstrate the superiority of URUR and WSDNet. WSDNet achieves stateof-the-art balance among accuracy, memory and inference speed on several UHR datasets." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Generic Semantic Segmentation", "publication_ref": [ "b12", "b13", "b14", "b18", "b19", "b20", "b21", "b22", "b23", "b30", "b31", "b30", "b32", "b33" ], "table_ref": [], "text": "With the rapid development of deep learning [13][14][15][16][17], semantic segmentation have achieved remarkable progress. Most of generic semantic segmentation models are based on and aim to improve fully convolutional networks (FCN) [18]. They rely on large receptive field and finegrained deep features [19][20][21][22][23][24] or graph modules [25-29], which are not appropriate to directly apply to UHR images. Real-time segmentors are proposed to balance the computation cost and performance [30][31][32]. BiseNetV2 [31] achieved considerable performance benefited from its specially designed architectures(bilateral aggregation) and training strategies(booster training). However these methods usually rely on small receptive fields and feature chan-nel cutting techniques, which sacrifices the feature view. In addition, knowledge distillation frameworks are also utilized to produce efficient yet high-performance segmentation models [33,34]." }, { "figure_ref": [], "heading": "UHR Semantic Segmentation", "publication_ref": [ "b9", "b10", "b11" ], "table_ref": [], "text": "Many methods have been especially presented for UHR semantic segmentation [10-12, 35, 36]. CascadePSP [35] proposed to improve the coarse segmentation results with a pre-trained model to generate high-quality results. GLNet [10] firstly incorporated both global and local information deeply in a two-stream branch manner. Based on GLNet, FCtL [11] further exploited a squeeze-and-split structure to fuse multi-scale features information. For the sake of higher inference speed, ISDNet [12] directly processed the full-scale and down-sampled inputs by integrating shallow and deep networks, significantly accelerating the inference speed." }, { "figure_ref": [], "heading": "URUR Dataset", "publication_ref": [], "table_ref": [], "text": "The proposed URUR dataset is far superior to all the existing UHR datasets including DeepGlobe, Inria Aerial, UDD, etc., in terms of both quantity, context richness and annotation quality. In this section, we illustrate the processes of dataset construction and analyze them through a variety of informative statistics, as well as give detailed measures to protect privacy." }, { "figure_ref": [], "heading": "Dataset Summary", "publication_ref": [], "table_ref": [], "text": "The proposed URUR dataset contains 3,008 UHR images with size of 5,012×5,012, captured from 63 cities. The training, validation and testing set include 2,157, 280 and 571 UHR images respectively, with the approximate ratio of 7:1:2. All the images are exhaustively manually annotated with fine-grained pixel-level categories, including 8 classes of \"building\", \"farmland\", \"greenhouse\", \"woodland\", \"bareland\", \"water\", \"road\" and \"others\". Sample images are shown in Figure 1 (h). The number of images and annotations in the dataset is still growing." }, { "figure_ref": [], "heading": "Data Collection and Pre-processing", "publication_ref": [], "table_ref": [], "text": "The dataset is collected by several high-quality satellite image data sources for public use. This results in data from 63 cities which we then select about 20 scenes manually in each city, based on following standards:\n• Low Ambiguity: The objects in the selected scenes should not have much obvious semantic ambiguity in appearance.\n• High Diversity: Scenes with diverse types of categories, instances, times and weather should be more appropriate and meaningful in our task.\n• Privacy Protection: No information in the scenario should reveal anything about privacy, such as person, store name, etc.\nTherefore, the dataset has a high variation in camera viewpoint, illumination and scenario type. In addition, in order to enhance the diversity and richness of the dataset, multiple granular perspectives are set and collected for each scenario. As a result, we totally collect 752 images with size 10,240×10,240, which are then divided to 3,008 images with size 5,120×5,120." }, { "figure_ref": [], "heading": "Efficient Annotation", "publication_ref": [ "b3", "b7", "b4", "b36", "b5", "b6" ], "table_ref": [], "text": "Compared to natural images, annotating the UHR images is always a more tough job, since the objects to be labeled grow quadratically as the image resolution increases. This is why existing UHR datasets usually exploit coarse-grained annotations or annotate only one major category. In contrast, we are intended to adopt more fine-grained annotations for the whole categories in the proposed URUR dataset. Figure 1 shows an intuitive comparison and more details about dataset statistics will be presented on Section 3.4. As seen that the UHR datasets including DeepGlobe, Inria Aerial and URUR obviously contain more objects and instances than natural ones, such as Pascal VOC and COCO, while the objects are also smaller in scale. Moreover, one or more class pairs are often spatially mixed together, bringing great troubles to carefully distinguish them during annotation process. By contrast, URUR also contains more objects and richer context than other UHR datasets. In conclusion, the main challenge and time-consuming part of annotating fine-grained UHR images are not only reflected in the amounts of objects to be annotated caused by the excessively ultra-high image resolution, but also in the many chain problems caused by the ultra-rich image context among objects with drastically changing scales.\nFor both efficient and accurate annotation, each original UHR image with size of 5,120×5,120 is firstly cropped evenly into multiple patches with size of 1,000×1,000. We let the annotators annotate these image patches separately, after that their results are correspondingly merged to get the final annotations relative to the original UHR images. In this way, we ensure that each annotator only focus on a smaller image patch, which facilitates the annotation process and improves the accuracy of the annotation results. During cropping, neighboring patches have 120×1,000 pixels overlap region to guarantee the consistency of annotation results and avoid boundary vanishing. In order to further save manpower and speed up the whole process, a ISD-Net model is trained with the early manually annotated images and used to generate segmentation masks on the rest images. As a reference, annotators adjust the masks with the help of annotation tools developed by us. [4], Inria Aerial [8], ISIC [5], ERM-PAIW [37], UDD6 [6] and UAVid [7]. As shown that URUR is far superior to all of them in terms of both quantity, annotation quality, context richness and scene complexity. \"Img.\", \"Cls.\", \"Inst.\", \"Ave.\" denote \"Image\", \"Class/Category\", \"Instance\", \"Average\" respectively. For UDD6 and UAVid, the testing sets are not included since the annotations have not been open sourced. The resolution of images in ISIC is diverse and the largest is up to 6682×4401. Instances that are too small are not considered." }, { "figure_ref": [], "heading": "Dataset Statistics", "publication_ref": [ "b3", "b4", "b4", "b36", "b5", "b6" ], "table_ref": [ "tab_0", "tab_0" ], "text": "Table 1 shows the detailed statistics comparison between the proposed URUR dataset and exiting several main UHR datasets, including DeepGlobe [4], Inria Aerial [5], ISIC [5], ERM-PAIM [37], UDD [6] and UAVid [7]. First of all, for the most fundamental image statistics, URUR consists of 3,008 images with size of 5,120×5,120 and outperforms all other datasets on both image number and resolution. In concrete, except for ISIC and DeepGlobe, the image number of all other datasets are below 200. DeepGlobe contains 803 images but the resolution is only 2,448×2,448 (5.9M), which does not even reach the minimum threshold (8.3M) of UHR medias (illustrated in Section 1). For overall annotation, limited by manpower, the annotation paradigms of existing datasets are divided into two types: (1) using coarse annotation, or (2) only annotating one category. The first type includes DeepGlobe, UDD6 and UAVid. As samples shown in Figure 1 (b,d,e), a large area of land containing many farmlands and buildings has been directly annotated as bareland for simplify in DeepGlobe. The cars, persons and trees are directly roughly painted in UDD6 and UAVid. The second type includes Inria Aerial and ERM-PAIM, they adopt a fine-grained annotations but only annotate one category of buildings and roads respectively. ISIC is a medicine dataset about lesion segmentation. Although it has up to 2,596 images, there is only one category of lesion area being roughly annotated. By contrast, URUR totally annotates 78,852 million pixels, with 100% annotation density on 8 categories, and the total number of annotated instances are up to 2,058 thousand, which is far superior to all other datasets. More details about per image annotation statistics are also revealed. We count the average number of categories and instances per image, which can reflect the context richness and scene complexity in some degree. For a closer observation, we also randomly sample some regions and count the average categories and instances in them. As found in Table 1, although both DeepGlobe, UDD6 and UAVid contain multiple categories, their average category per image/region is very low because of the coarse annotations and relative-simple scenes. By contrast, URUR consists of high-density categories and instances in each image with complex scenes. The other meta information is also provided, such as number of cities for data collection.\nFinally, we design a quantitative measure metric, namely Scene Context Richness, to compare the overall scene complexity in datasets. Formally, it is defined as follows,\nR = - C c (O c ) 1 q • p c • log(p c ) (1\n)\nwhere R is the context richness, C is the number of categories, O c is the average number of object instances per region for category c, p c is the average probability of category c per region. Thus we can see that when the dataset contains more object instances and more diverse categories in each region, its overall context is richer thus scene complexity is higher. q is the temperature parameter to adjust the weight of instance number and set to 2 in our experiments. We randomly select some regions for all the datasets and calculate R, results show that URUR has the highest scene complexity (R = 0.883) and ultra-rich context." }, { "figure_ref": [], "heading": "Privacy Protection Statement", "publication_ref": [], "table_ref": [], "text": "For the most important thing, our dataset is only used for academic purposes to drive the development of UHR image analysis techniques. We have fully considered all the possibilities to avoid anything about privacy issues in the dataset collection stage. The source of dataset comes from satellites for public use and is not related to any sensitive information. Annotators are also asked to filter and discard the potentially sensitive information. Specifically, we ask an-Figure 2. The overview of the proposed WSDNet for UHR segmentation, which consists of a deep branch D (the lower branch) and a shallow branch S (the upper branch). In S, the input images is decomposed into two subbands with Laplacian pyramid, which are then concatenated and fed into a shallow network to extract full-scale spatial details. In D, the input image is down-sampled with two-level Discrete Wavelet Transform (DWT) and then fed into the deep network to harvest high-level category-wise context. Next the output with scale 1 32 of the original input is upsampled to 1 8 with two-level Invert Discrete Wavelet Transform (IWT). Finally the two branches are fused with multi-scale features and optimized with the base cross-entropy losses Lseg, auxiliary loss Laux, as well as a Wavelet Smooth Loss (WSL) to reconstruct the original input with the help of a super-resolution head. The modules within dot lines are removed during inference.\nnotators to cover up or discard any sensitive information in a scene, including time and address watermarks in videos, phone numbers, and addresses on the shops or walls. The primary purpose of this paper is to facilitate the development of this field to the community better. We try to provide a more large-scale, fine-grained and challenging dataset for future researches. All researchers who ask for the dataset should follow the Data Usage Protocol under the legal protection provided by us [38]." }, { "figure_ref": [], "heading": "WSDNet", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Network Architecture", "publication_ref": [ "b11", "b38", "b38", "b0", "b3" ], "table_ref": [], "text": "As shown in Figure 2, WSDNet consists of a deep branch D and a shallow branch S. S contains fewer layers without any down-sampling or cropping operations on input UHR image to harvest all spatial details while preserving high inference speed. Following ISDNet [12], the original input RGB image I is replaced with high-frequency residuals {H} n i=0 :\nH i = g i (I) -U (g i+1 (I))(2)\nwhere g(•) denotes Gaussian blur, U (•) denotes the upsampling operation. The original outputs are two feature maps with 1 8 and 1 16 of the original image, and the 1 16 feature map is then up-sampled to add to 1 8 feature map for final output. D is a deep network responsible for learning categorywise context, and input the 1 4 down-sampled UHR image for faster inference speed and lower memory occupation.\nInstead of naive down-sampling in ISDNet, we are intended to exploit an invertible downsampling operation to preserve the original image details with less information loss, and wavelet transform (DWT) is considered. Wavelet transform is a fundamental time-frequency analysis method that decomposes input signals step by step into different frequency subbands to address the aliasing problem. In particular, Discrete Wavelet Transform (DWT) [39] enables invertible down-sampling by transforming input image I into four discrete wavelet subbands I 1 , I 2 , I 3 , I 4 with four filters (f LL , f LH , f HL , f HH )\nI 1 = (f LL ⊗ I) ↓ 2, I 2 = (f LH ⊗ I) ↓ 2 I 3 = (f HL ⊗ I) ↓ 2, I 4 = (f HH ⊗ I) ↓ 2. (3\n)\nwhere ⊗ is the convolution operation. I 1 represents all lowfrequency information describing the basic object structure at coarse-grained level. I 2 , I 3 , I 4 include high-frequency information retaining the object texture details at fine-grained level [40]. In this way, various levels of image details are preserved in different subbands of lower resolution without information dropping. Although down-sampling operation is used, due to the good biorthogonal property of DWT, the original image I can be reconstructed by the Inverse Discrete Wavelet Transform (IWT) [39], i.e., I = IW T (I 1 , I 2 , I 3 , I 4 ). When integrated to CNN, the DWT-IWT paradigm is able to preserve more spatial and frequency information than ordinary downsampling methods. The subband images I 1 , I 2 , I 3 , I 4 can be further pro-cessed with DWT to produce the decomposition results. For two-level DWT, each subband image I b (b ∈ [1,4]) is decomposed into four subband images I b,1 , I b,2 , I b,3 , I b,4 . Recursively, the results of higher levels DWT can be attained.\nIn D, we integrate two-level DWT with CNN block to obtain the 1 4 down-sampled input image, followed by the deep network. The output is 1 32 feature map rich in high-level category-wise context, and then up-sampled to 1 8 feature map with two-level IWT. In this way, the output of D has the same size with the output of S, so they can be naturally fused and no extra special fusion module is required, such as the heavy RAF module in ISDNet. This further accelerates the inference and decreases the memory cost." }, { "figure_ref": [], "heading": "Wavelet Smooth Loss", "publication_ref": [], "table_ref": [], "text": "In order to further weaken the affect of down-sampled low-resolution input in D, a super-resolution head is added after the 1 8 output of D to reconstruct the original input. Instead of an ordinary super-resolution loss that formulates a hard reconstruction constrain, we propose the Wavelet Smooth Loss (WSL) to optimize the reconstruction process with a soft and smooth constrain, by reconstructing the super-resolution output I rec in frequency domain. More comprehensively, we apply L-level DWT to I and I rec respectively, and obtain their low-and high-frequency subbands. The L1 regularization, not the L2 regularization, is used to constrain the high-frequency subbands. Because we prefer to align the texture distribution between I and I rec , rather than specific frequency values, but gradient of L2 regularization is closely related to the values, while the gradient of L1 regularization is independent. This type of smooth constraint can make the texture distribution of the output from D consistent with the input, and avoid the over-fitting caused by the exact numerical alignment in L2 regularization.\nOn the contrary, since low-frequency subbands represent the basic objects structure details, we exploit L2 regularization to make the spatial structured details of the output fit the ones of input as closely as possible, driving D to preserve more spatial information. Overall, the WSL consists of the above two parts and is formulated as,\nL wsl = L l=1 4 l b=1 (λ 1 ||I l,b;1 -I rec l,b;1 || 2 + λ 2 4 i=2 ||I l,b;i -I rec l,b;i || 1 ).(4)\nwhere I l,b;1 denotes the low-frequency subband after l-th DWT, I l,b;i denotes the i-th high-frequency subband after lth DWT. I rec l,b;1 , I rec l,b;i and so on. λ 1 , λ 2 are the weights of the low-frequency and high-frequency constrains respectively." }, { "figure_ref": [], "heading": "Optimization", "publication_ref": [], "table_ref": [], "text": "In addition, the standard cross-entropy loss is also used for both the final segmentation results (L seg ) and the auxiliary segmentation head after D (L aux ). So the overall loss L is:\nL = L seg + λ 3 L aux + L wsl .(5)\nwhere λ 3 is the weight of L aux . Noted that both the reconstruction head and segmentation head in D are only used during training, and will be removed in inference stage, which are indicated with dot lines in Figure 2." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets and Evaluation Metrics", "publication_ref": [ "b3", "b9", "b10", "b7", "b9", "b10" ], "table_ref": [], "text": "We perform extensive experiments on DeepGlobe, Inria Aerial and URUR dataset to validate WSDNet. In addition to URUR, we describe the former two datasets as follows. DeepGlobe. The DeepGlobe dataset [4] has 803 UHR images (455, 207 and 142 for training, validation and testing respectively). Each image contains 2448 × 2448 pixels and seven classes of landscape regions, where one class called \"unknown\" is not considered in the evaluation. Following [10,11], we split images into training, validation and testing sets with 455, 207, and 142 images respectively. Inria Aerial. The Inria Aerial [8] has 180 UHR images (126, 27 and 27 for training, validation and testing respectively). Each image contains 5000 × 5000 pixels and is annotated with a binary mask for building/non-building areas. This datasets covers diverse urban landscapes, ranging from dense metropolitan districts to alpine resorts. Unlike Deep-Globe, it splits the training/test sets by city. We follow the protocol as [10,11] by splitting images into training, validation and testing sets with 126, 27, and 27 images, respectively. Evaluation Metrics. Intersection-over-Union (mIoU), F1 score, Accuracy and Frames-per-second (FPS) are used to study the effectiveness and inference speed." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b40", "b19", "b31", "b11", "b11" ], "table_ref": [], "text": "We adopt the mmsegmentation [41] toolbox as codebase and follow the default augmentations without bells and whistles. D and S can be any usual segmentation networks, here we exploit DeepLabV3+ [20] with ResNet18 and STDC [32] respectively. For fairness comparison, we set the same training settings as [12]: SGD with momentum 0.9 for all parameters are used, the initial learning rate is configured as 10 -3 with polynomial decay parameter of 0.9, batch size is 8 and the maximum iteration number are set to 40K, 80K and 160K on DeepGlobe, Inria Aerial and URUR respectively. In Equation 5, λ 1 = 1, λ 2 = 0.8, λ 3 = 0.1, L = 3. We use the command line tool \"gpustat\" to measure the GPU memory. Memory and Frames-per-second (FPS) are measured on a RTX 2080Ti GPU with a batch size of 1, which is also same as [12]." }, { "figure_ref": [], "heading": "Comparison with State-of-the-arts", "publication_ref": [ "b9", "b10" ], "table_ref": [ "tab_1", "tab_2", "tab_4" ], "text": "We compare WSDNet with representative generic and UHR segmentation methods. Since most of generic methods have not specially designed for UHR images, there are two inference paradigms: (1) Global Inference: inference model with the down-sampled global images. (2) Local Inference: inference model with the cropped patches by multiple times then merge their results. DeepGlobe. Although DeepGlobe is not strictly an UHR dataset, we still follow previous works and use it as a reference to validate the effectiveness of WSDNet. Compared with both generic and UHR models in Table 2, WSDNet achieves excellent balance between mIoU, F1, accuracy, memory and FPS. In concrete, due to multiple patch inferences, the overall inference speed of GLNet [10] and FCtL [11] is very low. Compared with ISDNet, WSDNet removes the heavy RAF module thus the inference speed is further increased from 27.7 to 30.3. Moreover, benefited from the DWT-IWT paradigm and WSL, the performance is also further improved. Inria Aerial. Inria Aerial is an actual UHR dataset with size 5,000×5,000 thus more convincing to prove the superiority. It is only annotated one category of building and Table 3 shows the comparisons. WSDNet also achieves the better balance among all metrics. URUR. Due to ultra-high resolution, ultra-rich fine-grained annotations and ultra-diversity of land cover types, URUR is the most challenging UHR datasets so far, compared to all other datasets. As shown in Table 4, WSDNet also outperforms existing methods by a very large margin on mIoU, while preserving higher inference speed." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [], "text": "In all ablation studies, we perform experiments on URUR test set to validate the effectiveness of each component." }, { "figure_ref": [], "heading": "Comparison of downsampling methods", "publication_ref": [ "b47" ], "table_ref": [ "tab_5" ], "text": "We compare the different types of downsampling methods in Table 5. The baseline type uses an ordinary uniform downsampling in the form of bilinear interpolation. We also attempt to realize the downsampling process by a multi-level CNN module with a combination of several convolution and pooling layers. Then an adaptive downsampling method based on deformable convolution [48] is also tried. Experimental results show DWT-IWT paradigm achieves the best performance on mIoU and considerable inference speed, proving that DWT-IWT paradigm can preserve higher performance for the input in deep branch than the ordinary down-sampling." }, { "figure_ref": [], "heading": "Effectiveness of WSL", "publication_ref": [ "b11" ], "table_ref": [ "tab_6" ], "text": "Table 6 shows the effectiveness of proposed WSL. The baseline is the cross-entropy loss L seg and auxiliary loss L aux . Then we add the ordinary super-resolution loss in [12] and the proposed WSL respectively. Experimental re- sults show WSL achieves highest performance, proving the effectiveness of the smooth constrain in frequency domain." }, { "figure_ref": [ "fig_0" ], "heading": "Quantitative Results", "publication_ref": [], "table_ref": [], "text": "To show the effectiveness of WSDNet intuitively, we visualize and compare the results of several methods in Figure 3." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The paper firstly proposes URUR, a large-scale dataset covering a wide range of scenes with full fine-grained dense annotations. It contains amounts of images with high enough resolution, a wide range of complex scenes, ultrarich context and fine-grained annotations, which is far superior to all the existing UHR datasets. Furthermore, WS-DNet is proposed to formulate a more efficient framework for UHR segmentation, where DWT-IWT paradigm is integrated to preserve more spatial details. Wavelet Smooth Loss (WSL) is designed to reconstruct original structured context and texture distribution. It is more concise, effective and stable than ordinary super-resolution loss. Exten- " }, { "figure_ref": [], "heading": "Broader Impact", "publication_ref": [], "table_ref": [], "text": "Ultra-high image analysis has broadened the field of AI and Computer Vision researches, as well as poses extreme demands and challenges for models about both accuracy, inference speed and memory cost. Our work pushes the boundaries of Ultra-high image analysis. The URUR dataset build a new standard UHR benchmark for the community, which will benefit a wide range of natural disaster prevention, land resources utilization and urban construction planning applications. The design of WSDNet can be generalized to the UHR \"Complicated Wild Scene Understanding\". Even with these achievements, we realize that our work is not meant to be perfect, and there are still unpredictable challenges in the real world, depending on the specific application forms. In addition, our method can still help the research of natural scenes, from a more holistic and fine-grained perspective." }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "This work was supported by the National Key R&D Program of China under Grant 2020AAA0103902, the Anhui Provincial Natural Science Foundation under Grant 2108085UD12, the JKW Research Funds under Grant 20-163-14-LZ-001-004-01, NSFC (No. 62176155), Shanghai Municipal Science and Technology Major Project, China (2021SHZDZX0102). We acknowledge the support of GPU cluster built by MCC Lab of Information Science and Technology Institution, USTC." } ]
With the increasing interest and rapid development of methods for Ultra-High Resolution (UHR) segmentation, a large-scale benchmark covering a wide range of scenes with full fine-grained dense annotations is urgently needed to facilitate the field. To this end, the URUR dataset is introduced, in the meaning of Ultra-High Resolution dataset with Ultra-Rich Context. As the name suggests, URUR contains amounts of images with high enough resolution (3,008 images of size 5,120×5,120), a wide range of complex scenes (from 63 cities), rich-enough context (1 million instances with 8 categories) and fine-grained annotations (about 80 billion manually annotated pixels), which is far superior to all the existing UHR datasets including DeepGlobe, Inria Aerial, UDD, etc.. Moreover, we also propose WSDNet, a more efficient and effective framework for UHR segmentation especially with ultra-rich context. Specifically, multi-level Discrete Wavelet Transform (DWT) is naturally integrated to release computation burden while preserve more spatial details, along with a Wavelet Smooth Loss (WSL) to reconstruct original structured context and texture with a smooth constrain. Experiments on several UHR datasets demonstrate its state-of-the-art performance. The dataset is available at https://github.com/jankyee/URUR.
Ultra-High Resolution Segmentation with Ultra-Rich Context: A Novel Benchmark
[ { "figure_caption": "Figure 3 .3Figure 3. Visual improvements on URUR dataset: (a) part of original UHR images, (b) ISDNet, (c) WSDNet, (d) Groundtruth. Our method produces more accurate and detailed results, which are indicated by dotted boxes", "figure_data": "", "figure_id": "fig_0", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "The detailed statistics comparison between URUR and existing UHR datasets, including DeepGlobe", "figure_data": "Image StatisticsOverall Annotated StatisticsPer Annotated StatisticsScene ComplexityUHR DatasetImg. ResolutionTypePixelsDensity Cls.Inst.Ave. Cls. per Img./RegionAve. Inst. per Img./RegionCitiesContextDeepGlobe803 2448×2448 coarse 4812M1.0821K3.9/1.817/4.930.398Inria Aerial180 5000×5000fine710M0.162138K2/0.8766/302100.367ISIC *2596 6682×4401 coarse247M0.0122.6K2/0.21/0.2-0.087ERM-PAIW334795×3014fine71M0.1520.3K2/0.61/0.6110.277UDD6141 4096×2160 coarse 1250M1.0621K5.8/3.499/4240.471UAVid140 3840×2160 coarse 1001M1.0822K6.6/4.193/5410.459URUR3008 5120×5120fine78852M1.081140K7.2/5.6379/201630.883", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-arts on DeepGlobe test set.", "figure_data": "Generic ModelsmIoU (%)↑F1 (%)↑Acc (%)↑Mem (M)↓FPS ↑Local InferenceU-Net [42]37.3--9491.26DeepLabv3+ [20]63.1--1279 1.60FCN-8s [18]71.882.6 87.6 1963 4.55Global InferenceU-Net [42]38.4--5507 3.54ICNet [30]40.2--25575.3PSPNet [19]56.6--62891.0DeepLabv3+ [20]63.5--3199 4.44FCN-8s [18]68.879.8 86.2 5227 7.91BiseNetV1 [43]53.0--1801 14.2DANet [44]53.8--68122.3STDC [32]70.3--2580 14.0UHR ModelsCascadePSP [35]68.579.7 85.6 3236 0.11PPN [45]71.9--1193 12.9PointRend [46]71.8--1593 6.25MagNet [47]72.9--1559 0.80MagNet-Fast [47] 71.8--1559 3.40GLNet [10]71.683.2 88.0 1865 0.17FCtL [11]72.883.8 88.3 3167 0.13ISDNet [12]73.384.0 88.7 1948 27.7WSDNet74.185.2 89.1 1876 30.3\"Acc\", \"Mem\" indicates \"Accuracy\", \"Memory\" respectively, thesame belowGeneric ModelsmIoU (%)↑F1 (%)↑Acc (%)↑Mem (M)↓FPS ↑DeepLabv3+ [20] 55.9--5122 1.67FCN-8s [18]69.181.7 93.6 2447 1.90STDC [32]72.4--7410 4.97UHR ModelsCascadePSP [35]69.481.8 93.2 3236 0.03GLNet [10]71.2--2663 0.05FCtL [11]73.784.1 94.6 4332 0.04ISDNet [12]74.284.9 95.6 4680 6.90WSDNet75.286.0 96.0 4379 7.80", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-arts on Inria Aerial test set", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Comparison with state-of-the-arts on URUR test set", "figure_data": "DownsamplingmIoU(%) FPSuniform downsampling45.17.65multi-level CNN45.85.62adaptive downsampling46.04.96multi-level DWT46.97.13", "figure_id": "tab_4", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Comparison with different down-sampling methods.", "figure_data": "Loss FunctionmIoU(%)baseline(L seg & L aux )45.2baseline + L sr & L sd45.9baseline + L wsl46.9", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Effectiveness of loss functions.", "figure_data": "", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" } ]
Deyi Ji; Feng Zhao; Hongtao Lu; Mingyuan Tao; Jieping Ye
[ { "authors": "Mark Everingham; Luc Van Gool; K I Christopher; John M Williams; Andrew Winn; Zisserman", "journal": "", "ref_id": "b0", "title": "The pascal visual object classes (voc) challenge", "year": "2010" }, { "authors": "Tsung-Yi Lin; Michael Maire; Serge Belongie; Lubomir Bourdev; Ross Girshick; James Hays; Pietro Perona; Deva Ramanan; C Lawrence Zitnick; Piotr Dollár", "journal": "", "ref_id": "b1", "title": "Microsoft coco: Common objects in context", "year": "2014" }, { "authors": "Marius Cordts; Mohamed Omran; Sebastian Ramos; Timo Rehfeld; Markus Enzweiler; Rodrigo Benenson; Uwe Franke; Stefan Roth; Bernt Schiele", "journal": "", "ref_id": "b2", "title": "The cityscapes dataset for semantic urban scene understanding", "year": "2016" }, { "authors": "K Demir; D Koperski; G Lindenbaum; J Pang; S Huang; F Basu; D Hughes; R Deepglobe Tuia; Raskar", "journal": "CVPRW", "ref_id": "b3", "title": "Deepglobe 2018: A challenge to parse the earth through satellite images", "year": "2018" }, { "authors": "Philipp Tschandl; Cliff Rosendahl; Harald Kittler", "journal": "Scientific data", "ref_id": "b4", "title": "The ham10000 dataset, a large collection of multi-source dermatoscopic images of common pigmented skin lesions", "year": "2018" }, { "authors": "Yu Chen; Yao Wang; Peng Lu; Yisong Chen; Guoping Wang", "journal": "Springer", "ref_id": "b5", "title": "Large-scale structure from motion with semantic constraints of aerial images", "year": "2018" }, { "authors": "Ye Lyu; George Vosselman; Gui-Song Xia; Alper Yilmaz; Michael Ying; Yang ", "journal": "ISPRS Journal of Photogrammetry and Remote Sensing", "ref_id": "b6", "title": "Uavid: A semantic segmentation dataset for uav imagery", "year": "2020" }, { "authors": "Emmanuel Maggiori; Yuliya Tarabalka; Guillaume Charpiat; Pierre Alliez", "journal": "IEEE", "ref_id": "b7", "title": "Can semantic labeling methods generalize to any city? the inria aerial image labeling benchmark", "year": "2017" }, { "authors": "Steven Ascher; Edward Pincus", "journal": "Penguin", "ref_id": "b8", "title": "The filmmaker's handbook: A comprehensive guide for the digital age", "year": "2007" }, { "authors": "Wuyang Chen; Ziyu Jiang; Zhangyang Wang; Kexin Cui; Xiaoning Qian", "journal": "", "ref_id": "b9", "title": "Collaborative global-local networks for memory-efficient segmentation of ultra-high resolution images", "year": "2019" }, { "authors": "Qi Li; Weixiang Yang; Wenxi Liu; Yuanlong Yu; Shengfeng He", "journal": "", "ref_id": "b10", "title": "From contexts to locality: Ultra-high resolution image segmentation via locality-aware contextual correlation", "year": "2021" }, { "authors": "Shaohua Guo; Liang Liu; Zhenye Gan; Yabiao Wang; Wuhao Zhang; Chengjie Wang; Guannan Jiang; Wei Zhang; Ran Yi; Lizhuang Ma; Ke Xu", "journal": "", "ref_id": "b11", "title": "Isdnet: Integrating shallow and deep networks for efficient ultra-high resolution segmentation", "year": "2022-06" }, { "authors": "Ian Goodfellow; Yoshua Bengio; Aaron Courville", "journal": "MIT press", "ref_id": "b12", "title": "Deep learning", "year": "2016" }, { "authors": "Cong Zhang; Hongsheng Li; Xiaogang Wang; Xiaokang Yang", "journal": "", "ref_id": "b13", "title": "Cross-scene crowd counting via deep convolutional neural networks", "year": "2015" }, { "authors": "Deyi Ji; Hongtao Lu; Tongzhen Zhang", "journal": "SPIE", "ref_id": "b14", "title": "End to end multiscale convolutional neural network for crowd counting", "year": "2019" }, { "authors": "Hang Zhang; Kristin Dana; Jianping Shi; Zhongyue Zhang; Xiaogang Wang; Ambrish Tyagi; Amit Agrawal", "journal": "", "ref_id": "b15", "title": "Context encoding for semantic segmentation", "year": "2018" }, { "authors": "Weitao Feng; Deyi Ji; Yiru Wang; Shuorong Chang; Hansheng Ren; Weihao Gan", "journal": "", "ref_id": "b16", "title": "Challenges on large scale surveillance video analysis", "year": "2018" }, { "authors": "Jonathan Long; Evan Shelhamer; Trevor Darrell", "journal": "", "ref_id": "b17", "title": "Fully convolutional networks for semantic segmentation", "year": "2015" }, { "authors": "Hengshuang Zhao; Jianping Shi; Xiaojuan Qi; Xiaogang Wang; Jiaya Jia", "journal": "", "ref_id": "b18", "title": "Pyramid scene parsing network", "year": "2017" }, { "authors": "Liang-Chieh Chen; Yukun Zhu; George Papandreou; Florian Schroff; Hartwig Adam", "journal": "", "ref_id": "b19", "title": "Encoder-decoder with atrous separable convolution for semantic image segmentation", "year": "2018" }, { "authors": "Yuhui Yuan; Xilin Chen; Jingdong Wang", "journal": "Springer", "ref_id": "b20", "title": "Objectcontextual representations for semantic segmentation", "year": "2020" }, { "authors": "Enze Xie; Wenhai Wang; Zhiding Yu; Anima Anandkumar; Jose M Alvarez; Ping Luo", "journal": "NeurIPS", "ref_id": "b21", "title": "Segformer: Simple and efficient design for semantic segmentation with transformers", "year": "2021" }, { "authors": "Sixiao Zheng; Jiachen Lu; Hengshuang Zhao; Xiatian Zhu; Zekun Luo; Yabiao Wang; Yanwei Fu; Jianfeng Feng; Tao Xiang; H S Philip; Li Torr; Zhang", "journal": "", "ref_id": "b22", "title": "Rethinking semantic segmentation from a sequence-to-sequence perspective with transformers", "year": "2021" }, { "authors": "Lanyun Zhu; Deyi Ji; Shiping Zhu; Weihao Gan; Wei Wu; Junjie Yan", "journal": "", "ref_id": "b23", "title": "Learning statistical texture for semantic segmentation", "year": "2021-06" }, { "authors": "Hanzhe Hu; Deyi Ji; Weihao Gan; Shuai Bai; Wei Wu; Junjie Yan", "journal": "Springer", "ref_id": "b24", "title": "Class-wise dynamic graph convolution for semantic segmentation", "year": "2020" }, { "authors": "Jie Zhou; Ganqu Cui; Shengding Hu; Zhengyan Zhang; Cheng Yang; Zhiyuan Liu; Lifeng Wang; Changcheng Li; Maosong Sun", "journal": "AI open", "ref_id": "b25", "title": "Graph neural networks: A review of methods and applications", "year": "2020" }, { "authors": "Deyi Ji; Haoran Wang; Hanzhe Hu; Weihao Gan; Wei Wu; Junjie Yan", "journal": "", "ref_id": "b26", "title": "Context-aware graph convolution network for target re-identification", "year": "2020" }, { "authors": "Felix Wu; Amauri Souza; Tianyi Zhang; Christopher Fifty; Tao Yu; Kilian Weinberger", "journal": "PMLR", "ref_id": "b27", "title": "Simplifying graph convolutional networks", "year": "2019" }, { "authors": "Haoran Wang; Licheng Jiao; Fang Liu; Lingling Li; Xu Liu; Deyi Ji; Weihao Gan", "journal": "IEEE Transactions on Image Processing", "ref_id": "b28", "title": "Ipgn: Interactiveness proposal graph network for human-object interaction detection", "year": "2021" }, { "authors": "Hengshuang Zhao; Xiaojuan Qi; Xiaoyong Shen; Jianping Shi; Jiaya Jia", "journal": "", "ref_id": "b29", "title": "Icnet for real-time semantic segmentation on high-resolution images", "year": "2018" }, { "authors": "Changqian Yu; Changxin Gao; Jingbo Wang; Gang Yu; Chunhua Shen; Nong Sang", "journal": "International Journal of Computer Vision", "ref_id": "b30", "title": "Bisenet v2: Bilateral network with guided aggregation for real-time semantic segmentation", "year": "2021" }, { "authors": "Mingyuan Fan; Shenqi Lai; Junshi Huang; Xiaoming Wei; Zhenhua Chai; Junfeng Luo; Xiaolin Wei", "journal": "", "ref_id": "b31", "title": "Rethinking bisenet for real-time semantic segmentation", "year": "2021" }, { "authors": "Yifan Liu; Ke Chen; Chris Liu; Zengchang Qin; Zhenbo Luo; Jingdong Wang", "journal": "", "ref_id": "b32", "title": "Structured knowledge distillation for semantic segmentation", "year": "2019" }, { "authors": "Deyi Ji; Haoran Wang; Mingyuan Tao; Jianqiang Huang; Xian-Sheng Hua; Hongtao Lu", "journal": "", "ref_id": "b33", "title": "Structural and statistical texture knowledge distillation for semantic segmentation", "year": "2022" }, { "authors": "Jihoon Ho Kei Cheng; Yu-Wing Chung; Chi-Keung Tai; Tang", "journal": "", "ref_id": "b34", "title": "Cascadepsp: Toward class-agnostic and very highresolution segmentation via global and local refinement", "year": "2020" }, { "authors": "Deyi Ji; Feng Zhao; Hongtao Lu", "journal": "", "ref_id": "b35", "title": "Guided patch-grouping wavelet transformer with spatial congruence for ultra-high resolution segmentation", "year": "2023" }, { "authors": "Gellért Máttyus; Shenlong Wang; Sanja Fidler; Raquel Urtasun", "journal": "", "ref_id": "b36", "title": "Enhancing road maps by parsing aerial images around the world", "year": "2015" }, { "authors": "Haoran Wang; Licheng Jiao; Fang Liu; Lingling Li; Xu Liu; Deyi Ji; Weihao Gan", "journal": "IEEE Transactions on Neural Networks and Learning Systems", "ref_id": "b37", "title": "Learning social spatio-temporal relation graph in the wild and a video benchmark", "year": "2021" }, { "authors": "Pengju Liu; Hongzhi Zhang; Kai Zhang; Liang Lin; Wangmeng Zuo", "journal": "", "ref_id": "b38", "title": "Multi-level wavelet-cnn for image restoration", "year": "2018" }, { "authors": "Ting Yao; Yingwei Pan; Yehao Li; Chong-Wah Ngo; Tao Mei", "journal": "", "ref_id": "b39", "title": "Wave-vit: Unifying wavelet and transformers for visual representation learning", "year": "2022" }, { "authors": "", "journal": "MMSegmentation Contributors", "ref_id": "b40", "title": "MMSegmentation: Openmmlab semantic segmentation toolbox and benchmark", "year": "2020" }, { "authors": "Olaf Ronneberger; Philipp Fischer; Thomas Brox", "journal": "Springer", "ref_id": "b41", "title": "Unet: Convolutional networks for biomedical image segmentation", "year": "2015" }, { "authors": "Changqian Yu; Jingbo Wang; Chao Peng; Changxin Gao; Gang Yu; Nong Sang", "journal": "", "ref_id": "b42", "title": "Bisenet: Bilateral segmentation network for real-time semantic segmentation", "year": "2018" }, { "authors": "Jun Fu; Jing Liu; Haijie Tian; Yong Li; Yongjun Bao; Zhiwei Fang; Hanqing Lu", "journal": "", "ref_id": "b43", "title": "Dual attention network for scene segmentation", "year": "2019" }, { "authors": "Tong Wu; Zhenzhen Lei; Bingqian Lin; Cuihua Li; Yanyun Qu; Yuan Xie", "journal": "", "ref_id": "b44", "title": "Patch proposal network for fast semantic segmentation of high-resolution images", "year": "2020" }, { "authors": "Alexander Kirillov; Yuxin Wu; Kaiming He; Ross Girshick", "journal": "", "ref_id": "b45", "title": "Pointrend: Image segmentation as rendering", "year": "2020" }, { "authors": "Chuong Huynh; Anh Tuan Tran; Khoa Luu; Minh Hoai", "journal": "", "ref_id": "b46", "title": "Progressive semantic segmentation", "year": "2021" }, { "authors": "Jifeng Dai; Haozhi Qi; Yuwen Xiong; Yi Li; Guodong Zhang; Han Hu; Yichen Wei", "journal": "", "ref_id": "b47", "title": "Deformable convolutional networks", "year": "2017" } ]
[ { "formula_coordinates": [ 4, 365.05, 422.49, 176.19, 30.03 ], "formula_id": "formula_0", "formula_text": "R = - C c (O c ) 1 q • p c • log(p c ) (1" }, { "formula_coordinates": [ 4, 541.24, 433.22, 3.87, 8.64 ], "formula_id": "formula_1", "formula_text": ")" }, { "formula_coordinates": [ 5, 115.87, 616.01, 170.5, 9.65 ], "formula_id": "formula_2", "formula_text": "H i = g i (I) -U (g i+1 (I))(2)" }, { "formula_coordinates": [ 5, 341.56, 517.77, 199.68, 24.6 ], "formula_id": "formula_3", "formula_text": "I 1 = (f LL ⊗ I) ↓ 2, I 2 = (f LH ⊗ I) ↓ 2 I 3 = (f HL ⊗ I) ↓ 2, I 4 = (f HH ⊗ I) ↓ 2. (3" }, { "formula_coordinates": [ 5, 541.24, 525.66, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 6, 85.06, 589.01, 201.3, 66.88 ], "formula_id": "formula_5", "formula_text": "L wsl = L l=1 4 l b=1 (λ 1 ||I l,b;1 -I rec l,b;1 || 2 + λ 2 4 i=2 ||I l,b;i -I rec l,b;i || 1 ).(4)" }, { "formula_coordinates": [ 6, 368.41, 141.83, 176.71, 9.65 ], "formula_id": "formula_6", "formula_text": "L = L seg + λ 3 L aux + L wsl .(5)" } ]
10.18653/v1/D17-1168
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b7", "b2", "b22", "b17", "b1", "b29", "b0", "b12" ], "table_ref": [], "text": "Scripts are forms of knowledge representations for ordered events directed by particular goals (Herman, 1997). As shown in Figure 1, to obtain a Ph.D degree (i.e., goal), one shall follow specific events step-by-step, including do the research, write the paper, etc. Such procedure knowledge not only provides a process of problem solving, but also benefits many real-world applications, such as narrative understanding (Chaturvedi et al., 2017), task bots (Peng et al., 2021), and diagnostic prediction (Zhang et al., 2020c). Therefore, the task of script generation is proposed to automatically generate events given a goal (Lyu et al., 2021).\nExisting works typically assume that events are sequentially arranged in a script, while we argue that this assumption leads to linear generation that is far from enough for comprehensively acquiring the representation about how events are organized towards a task goal. When humans compose a † Co-corresponding Author. Steps:\nFigure 1: An illustration of the hierarchical decomposition of the goal \"How to obtain a Ph.D. degree\". We use blue for the steps, and yellow for the subgoals. Conventional task focuses on steps only, while we highlight the breaking in the middle (red arrows), which refers to the switch between higher-level subgoals.\nscript, the underlying procedure of a task is often not a simple, flat sequence. As suggested by cognitive studies (Botvinick, 2008;Zhang and Norman, 1994), human problem solving often involves hierarchical decomposition of the task. That is to say, a more complicated task is often decomposed into subgoals, and each subgoal can be further decomposed into more fine-grained steps. For instance, the process of obtaining a Ph.D. degree can divide into subgoals like publishing research papers, passing the qualification exam, and defending the thesis.\n(Figure . 1). The subgoal publishing research papers thereof further consists of more fine-grained steps like doing the research, writing the paper, and passing peer review. Accordingly, a proper way of acquiring script knowledge should also hierarchically capture different levels of task subgoals.\nIn this paper, we propose to investigate subgoals as an intermediate between goals and steps. Given arXiv:2305.10907v1 [cs.CL] 18 May 2023 a goal, instead of generating steps in a linear manner, our task seeks to generate scripts at two levels. The first level consists of subgoals, and the second level is detailed steps, where each subgoal contains several steps. Such a new setting not only accords to people's cognitive process (Antonietti et al., 2000) but also investigates the model's abilities to understand knowledge from two aspects: problem decomposition and summarization. We propose three research questions to assist the investigation: 1) How to identify the subgoals? 2) How to effectively introduce subgoals to models to improve script generation? 3) Does the generated hierarchy align with human intuition?\nTo answer the first question, we construct a new dataset, namely \"Instructables\"1 , about D.I.Y projects involving goals, subgoals, and steps. Besides, we extend the existing wikiHow dataset (Zhang et al., 2020b) with the subgoals. To automatically obtain subgoals, we deploy a segmentation method that separates steps in training data. For each segment, we further leverage a promptbased fine-tuning (Lester et al., 2021) method to learn subgoal generation. We have designed quantitative metrics for evaluation, which verifies the reasonability of the two-level hierarchy of script.\nFor the second question, we build the benchmark with multiple baselines. The basic idea is to incorporate the goal, subgoals, and steps into one prompt template, and use special tokens to preserve structure information. Then, we finetune Pre-trained Language Models (PLMs) to generate in a top-down or interleaving manner. This allows the model to take a break to conclude on each subgoal before generating the succeeding steps. We have conducted extensive experiments. The results show that by including subgoals, the model generates scripts with better soundness, diversity, and making better sense to achieve the goal. In fact, given gold standard segmentation and subgoals, the improvement is more substantial, indicating space for improvement in our predicted subgoals.\nTo address the third question, we conduct human evaluation to assess the quality of both steps and subgoals, as well as the above two types of model abilities. We observe that the language model shows a weaker ability to decompose the goals than to summarize the steps. We have also analyzed the errors in detail and found that the models some-times generate repetitive subgoals and low-quality steps. The models still have much room for improvement in generating high-quality hierarchical scripts if the goal is too complicated.\nTo summarize, our work has three contributions: 1) We construct the dataset \"Instructables\" and extend the wikiHow dataset to investigate the subgoals as intermediate between goals and steps. 2) We build a benchmark with several baselines towards hierarchical script generation. 3) We conduct extensive evaluation to investigate the effects of subgoals qualitatively and quantitatively." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b18", "b23", "b3", "b17", "b21", "b11", "b33", "b8", "b16", "b15", "b26", "b5", "b6" ], "table_ref": [], "text": "Procedural Knowledge Acquisition Early research on the script and procedural knowledge is usually formulated as a classification or ranking problem. For example, Modi and Titov (2014) and Pichotta and Mooney (2016) predict a score for each given event to determine their relative order based on event representation learning. P2GT (Chen et al., 2020) and APSI (Zhang et al., 2020a) further analyze the intention of events and conduct a joint learning-to-rank for better ordering. Thanks to the success of the Pre-trained Language Model (PLM), recent work GOSC (Lyu et al., 2021) proposes to generate steps in a simple, flat manner for any goal. Another line of works (Pareti et al., 2014;Lagos et al., 2017) have attempted to establish a hierarchical structure among scripts by linking their steps. Given any event of goal A, Zhou et al. (2022) compute a similarity score to find the most semantically close goal B, so that all events of B can be regarded as detailed subevents of the given event at the lower level. This approach, although effective, has an exceptionally high demand on the dataset to cover a wide range of goals. In many cases, there is no reasonable goal for alignment, which results in a deviation in the meanings between the linked sentences. Therefore, we neither regard steps in a flat manner, nor link steps and goals with the retrieve-then-rerank approach. Instead, we propose a task and model targeting the inner hierarchy of a script during generation and are complementary to the above works.\nControlled NLG Script generation is a form of controlled text generation task (Hu et al., 2017) since the generated scripts are attributed to the given goal. To increase the controllability of text generation, research efforts investigate the ways of constrained decoding. NeuroLogic Decod-ing (Lu et al., 2021) improves controlled generation upon semantic constraints by enforcing predicate logic formula at the decoding stage. NeuroLogic A*esque Decoding (Lu et al., 2022) further incorporates a lookahead heuristic to estimate future constraint satisfaction. Controlled text generation tasks can take other forms like generating descriptions conditioned on subparts of a table (Wang et al., 2022). Another classic application of controlled text generation is storytelling, whereby stories are generated based on a prompt or a storyline. (Fan et al., 2018) generate hierarchical stories conditioned on a prompt that was generated first. (Fan et al., 2019) enhance the coherence among different levels of a hierarchical story with a verbattention mechanism. Unlike tasks like storytelling, script generation is not open-ended since it is goaloriented." }, { "figure_ref": [], "heading": "Task and Dataset", "publication_ref": [], "table_ref": [], "text": "In this section, we first formulate the new task setting and then introduce the new dataset that we constructed, namely Instructables." }, { "figure_ref": [], "heading": "Task Definition", "publication_ref": [ "b17" ], "table_ref": [], "text": "The original goal-oriented script generation (Lyu et al., 2021) (GOSC) focuses on generating a sequence of steps (or events) that accomplishes the given goal. In contrast, the proposed hierarchical script generation conducts hierarchical generation and models the script as multiple levels of events. Formally, given a goal g as input, it is to generate L levels of events as output, where the events at the 1-st to the (L -1)-th levels are called subgoals (s) and the events at the L-th (most fine-grained) level are called steps (t) . Within each level, the list of children events should fulfill the objective of their parent event. Note that the number of events at each level is not fixed, and the model is required to decide the number by itself. Based on our observation, two levels of events are sufficient for most scripts (i.e., L = 2) in reality. For example, both websites, wikiHow and Instructables, define several sections for each goal, and each section contains multiple steps. These task instruction resources are all organized in two levels. Thus, in the rest of the paper, we define L = 2.\nThis task inherently include 2 settings. The input for both settings are the goal g. Setting 1 takes the sequence of events from the lowest level of the hierarchy as output, which is the same as the GOSC task. Through this setting, we investigate whether including subgoals improve the traditional script generation task. Setting 2 takes the entire hierarchy of events as output. Through this new setting, we investigate the language model's ability to generate high-quality subgoals and steps." }, { "figure_ref": [], "heading": "Dataset", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "We use two datasets, wikiHow and Instructables, for the studied task. The wikiHow dataset (Zhang et al., 2020b) is a collection of how-to articles crawled from the wikiHow website. Each article describes methods to achieve a given goal in the title. The articles are written in sections, each of which comes with a few steps. In this work, we consider one section of steps as one segment and a section name as a subgoal. Due to the lack of resources, many research works on script use wiki-How as the only dataset.\nTo verify the model's generalizability over more than a single dataset, we construct another dataset based on Instructables2 -a website specializing in user-created do-it-yourself (DIY) projects. Instructables Construction The data construction process consists of two stages. The first is raw data preparation. We collect the content of each project according to its category (Circuits, Workshop, Craft, Cooking, Living, and Outside) using Scrapy. 3 Each project consists of a title showing the item that the author sought to make (i.e., a toy rocket) and the instructions to make this item. In most cases, authors write the instruction in a few sections, each with a step-by-step description. During crawling, We take each section name as a subgoal and every sentence as a step.\nThe second stage is filtering. The raw data is inconsistent in text style, section format, article length, etc. We hereby carry out seven steps to filter out noisy data. We remove: 1) Non-English projects using Python library langdetect. 4 2) The section on Supplies or Materials, which describes composed materials instead of events/actions. 3) The section numbers (e.g., \"section 3: draw a line\" -> \"draw a line\"). 4) Unnecessary spaces and characters like Line Feeder to maintain the human-readable text. 5) Projects with empty text content in any section since these sections are usually presented as figures or videos. 6) Projects with any overly lengthy section. stories or anecdotes to convey the rationale they came up with the project, and we find 128 words a good threshold to filter them out. 7) Projects that build the same item as others. We remove repeated articles about seen items to eliminate possible redundancy or anomalies in data distribution. Finally, we unify the format of project titles to make them consistent. By performing Part-of-Speech (POS) Tagging on titles, we prefix \"How to make\" to noun phrases (e.g., How to make Kung Pao Tofu), we recover the verb from stemming and prefix \"How to\" to verb phrases (e.g., How to Build a Toy Rocket), and we retain the How-to questions (e.g., How to Create a Puppet).\nDataset Statistics Table 1 shows the statistics of Instructables by category. In total, we obtain 107,408 scripts, 560,079 subgoals, 1,478,669 steps, and 26,813,397 words. We analyze the difference between Instructables and wikiHow in appendix A" }, { "figure_ref": [], "heading": "Method", "publication_ref": [], "table_ref": [], "text": "In this section, we present our series of methods to build the benchmark for the hierarchical script generation task. Note that we do not target a bestperforming model but aim at a reasonable baseline to shed light on future research. We first introduce a segmentation method that automatically segment the steps and generate their subgoals, in case there is no ground truth hierarchy available for training. Then, given the goal, subgoals, and steps, we introduce the proposed framework for training. Finally, given a goal, we detail the inference process." }, { "figure_ref": [], "heading": "Segmentation", "publication_ref": [ "b4", "b25" ], "table_ref": [], "text": "The dataset wikiHow and Instructables naturally manifest an easy-to-consume hierarchical layout, as we explained before. However, for better generalization, we do not assume that all sources of script data possess the privilege of a hierarchical layout with sections and subgoals. Formally, given an ordered list of steps, we seek to find segmentation points between steps to separate them into relatively concrete segments. Each segment should inherently represent a subgoal. To find these segmentation points in an unsupervised manner, we propose four methods. The first method finds low probability scores between consecutive steps via BERT next sentence prediction (Devlin et al., 2019). The second method measures the plausibility of a list of steps with perplexity and locates the abnormal ones. The third method applies clustering algorithm to group steps based on their sentence embeddings. The last method locates segmentation points upon multiple topics detected via fastclustering (Reimers and Gurevych, 2019).\nWe explain these methods in detail in appendix B." }, { "figure_ref": [], "heading": "Subgoal Labeling", "publication_ref": [ "b24" ], "table_ref": [], "text": "Given steps and their segmentation, we are to generate an event for each segment as their parent subgoal, where the subgoal is a high-level summarization of its children steps. Due to the lack of annotations, we perform the labeling in a selfsupervised manner. That is to say, we regard it as a dual problem of script generation. Given a goal and all the steps in a flat format, instead of training a model to generate steps, we fine-tune a T5-base model (Raffel et al., 2020) to generate the goal using the list of steps as inputs. Specifically, We convert the question-format goal into a verb phrase by removing the \"How to\" prefix, which is more suitable as a subgoal. Note that we did not include any additional training data but reused the training dataset for the script generation task. This practice ensures that the system is not leaked with any sentences in the development or testing data to gain unfair information at training time." }, { "figure_ref": [], "heading": "Hierarchical Generation", "publication_ref": [ "b12", "b27", "b27" ], "table_ref": [], "text": "Training Given a goal, we train a model to generate a varying number of subgoals, and each subgoal has its own steps. Thanks to the recent progress of prompt-based fine-tuning (Lester et al., 2021), we use the special token <section> to preserve the structure information (Yao et al., 2019) and take advantage of PLMs to generate such a two-level structure by re-formatting a prompt: Intuitively, there are two typical generation sequences for a multi-granular generation task, interleaving and top-down. The above template adopts an interleaving generation sequence. In terms of the auto-regressive decoding sequence, a subgoal is generated, followed by its affiliated steps, the next subgoal, and so on. We also propose a template incorporating a top-down generation sequence where all subgoals are generated first followed by all the steps. The prompt is as follows:\n[Goal], the subgoals are:\n[Subgoal], [Subgoal]. <section>, [steps]. <section>, [steps]\nFor both generation sequences, we add a special token <section> as a delimiter between two segments to denote the hierarchy information. Of course, for more levels, one can add more special tokens as delimiters between different levels. Inspired by Yao et al. (2019), we use a special token to delimit different parts of the structure instead of conducting a complex hierarchy decoding process. This technique leads to two benefits. First, it allows a sequential generation process that aligns well with the pre-training objective of PLM, therefore facilitating knowledge prompting from the PLM. Second, the hierarchy information improves longtext generation (e.g., sometimes there are many steps to decode) because the subgoal shortens the dependency between steps by providing a highlevel summarization. We leave the exploration for other long-text generation tasks in the future.\nInference At inference time, we feed the how-to question, which contains the goal, into the tuned PLM as input, with the prefix \"Ask question:\" as common practice for Question Answering tasks using the PLM. We fix the hyper-parameters the same as across training settings. The decoder generates subgoals and steps in an interleaving/top-down approach, and the output is the same as the prompt format we design for training sets. We leverage the special tokens in output as the beacon for extracting subgoals and subsequently expand the linear output into the hierarchical format." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "To evaluate the proposed framework for hierarchical script generation, we conduct extensive experiments on the presented wikiHow and Instructables datasets. We compare our proposed framework with prior strong baseline method and discuss the results ( § 5.2- § 5.3). We have also investigated the best segmentation method as a secondary experiment. ( § 5.4).We conduct ablation studies ( § 5.5)." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b17", "b17", "b32", "b20", "b14", "b13", "b17", "b17", "b24" ], "table_ref": [], "text": "Dataset Following the setup from Lyu et al. (2021), we randomly separate datasets into training and test sets using a 90/10% split, and hold out 5% of the training set as the development set. We perform both automatic and human evaluations to assess the quality of the generated script.\nMetrics For automatic evaluation metrics, we first follow prior works (Lyu et al., 2021) without considering the hierarchy information as our task setting 1 § 3.1 and report perplexity and BERTScore (Zhang et al., 2019) for steps. Perplexity is the exponential of log-likelihood of the sequence of steps assigned by the model. In addition, we compute three widely used generation metrics, BLEU-1 (Papineni et al., 2002), ROUGE-L (Lin, 2004) and Distinct-n metric (Li et al., 2016) by taking average over all testing data. For our experiment that investigates the best segmentation strategy, due to the lack of measurements, we define a metric \"segment distance\" and the details can be found in § 5.4. When evaluating the hierarchical scripts, we remove the subgoals and prompt template to ensure fair comparison.\nBaseline Given the different nature of the hierarchical script generation in contrast to the previous task, there is not a directly applicable baseline. Here, we choose a state-of-the-art model for conventional script generation, namely GOSC (Lyu et al., 2021), as a strong baseline. Note that we carefully re-implement the model according to the settings and parameters from (Lyu et al., 2021) and report a variance where the mT5-base model of GOSC is replaced as T5-base model (Raffel et al., 2020) to learn better on the English corpus, since GOSC was originally designed for a multilingual setting. Another baseline is a two-stage generation process. First generating the steps in GOSC man- Table 2: Performance of our method (marked as HSG) in top-down and interleaving approach on test set of wiki-How and Instructables datasets. We also report the cases where gold segments and subgoals used in the generation process as a performance upper bound. We abbreviate BERTScore, Perplexity, and Distinct-3 to Bert., Perp., and Dist.-3 respectively. We bold the best performance.\nner, then using these steps as subgoals to generate the actual steps." }, { "figure_ref": [], "heading": "Automatic Evaluation", "publication_ref": [], "table_ref": [], "text": "We report the average results of 3 runs of hierarchical script generation on both datasets in Table 2. We compare the quality of the generated text according to the four metrics ( § 5.1). For each dataset, we also report the case where ground truth segmentation and subgoals (section name) are provided as a performance upper bound (of the proposed prompt-based fine-tuning method, as better performance may be achieved by more advanced model).\nWe can observe that the results from both datasets are generally consistent. (1) The two-stage baseline is weaker than the basic baseline as indicated by most metrics. This baseline has an error aggregation problem whereby the steps generated could be irrelevant after two stages of generation.\n(2) Our method has outperformed the baseline in three metrics (Perplexity, BLEU-1, and ROUGE-L scores), indicating the effectiveness of our method in generating scripts with higher quality. (3) The improvement in Distinct-3 metric on both datasets indicates that our method is capable of generating texts of greater variation. With segmentation, we take a break in the middle of the generation process, and provides the model a chance to conclude on steps of the current subgoal, and refer to the information from the upper level of the hierarchy. The model thereafter generates the script with better quality and less repetition. (4) Between the top-down and interleaving approaches, the latter achieves slightly better or tied scores among almost all metrics for both datasets. The subgoals are in proximity to the corresponding steps for the interleaving approach, which better guides the step generation. (5) It is noteworthy that our method with predicted subgoals outperforms the gold segmentation and subgoals on perplexity for Instructables dataset, showing that our generated subgoals might be closer to natural language compared to using gold subgoals from Instructables. (6) Except for this, using gold segmentation and subgoal leads to better results in all other metrics on both datasets, manifesting an enormous potential of our method, which also indicates an area of improvement for the accuracy of our predicted segmentation and subgoals. We acknowledge that existing metrics are unable to directly measure how well a script fulfills a goal. Hence, we conduct human evaluations to complement automatic evaluation." }, { "figure_ref": [ "fig_1", "fig_2" ], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [], "text": "We conduct human evaluations to assess the quality of our hierarchical scripts for task setting 2 § 3.1. We evaluate the scripts on both the steps and the subgoals. The steps are assessed through direct comparison, where we ask the annotators to choose between scripts generated by our method (flattened) and by the baseline. In addition, we also evaluate the generated subgoals based on two criteria. The first criterion concerns whether the annotators consider the generated subgoals as valid components of the task goal, i.e., problem decomposition ability. In the context of the main goal, the second criterion concerns if the generated subgoal properly represents the associated steps, i.e., summarization ability. We provide more details (e.g., guideline and annotators) of human evaluation in appendix D.\nStep Evaluation From Figure 2, the scripts generated by our method were more favored by annotators over the baseline scripts in both wikiHow (59%) and Instructables (70%) test sets than the baseline scripts, excluding 7% and 9% draw cases. In addition, we realize that the proportion of favored scripts is higher on the Instructables dataset than on wikiHow. Such results are due to the highquality wikiHow scripts generated by the baseline method. Our method has a greater improvement on Instructables, similarly, attributes to the low-quality scripts from the baseline, mainly because Instructables' scripts are more difficult and longer, which requires the ability of problem decomposition. In extreme cases, we observed empty or very short outputs for Instructables using the baseline method, which did not appear in the scripts generated by our method. Further, we analyze more typical examples and mistakes in case study (appendix E).\nSubgoal Evaluation From Figure 3, regarding the question of whether subgoals are helpful to achieve the goal, 70% of the subgoals are given credit by the annotators for the wikiHow dataset, while this percentage is 58% for the Instructables dataset. For the other question assessing whether generated subgoal well-represents the associated steps, the percentage of positive responses for the wikiHow dataset (76%) also surpasses that of the Instructables dataset (62%). The results from these two questions accord with each other that the subgoals generated for the Instructables dataset are of worse quality than that of wikiHow. From another perspective, comparing the results between two questions, we find that the generated subgoals have a weaker degree of association with the goals than with the generated steps. The results demonstrate a great challenge on complex task decomposition." }, { "figure_ref": [], "heading": "Segmentation", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To better understand the best segmentation strategy for our task, we assess distinct segmentation techniques and aim to find the one with the closest segment structure to the ground truth. In order to measure the affinity between predicted and gold segmentation points, we propose the metric \"segment distance\" in light of the metric \"edit distance\" in quantifying string similarity. Instead of calculating the number of minimal editing scripts, \"segment distance\" calculates the least number of steps to shift m predicted segmentation points to the actual ones, where m = min(p -1, g -1), p is the number of predicted segments, and g is the number of segments in ground truth. In addition, we impose of penalty score P = k * d for difference of number of segments d = |p -g| as a metric that encourages accurate estimation of number of segments. The k value is set between 3 and 4 as a fair penalization.\nAs a baseline, we take the average number of segments N (closet integer) from the dataset and carry out an N -equal splits on each script. This simple approach is in fact a strong baseline since most scripts have a limited number of steps in dif-Figure 4: ROUGE-L Score on test set categorize by number of segments from \"1\" to \"5 or more\". Green color represents the baseline method. Blue color represents our method. Yellow color represents our method using gold segmentation and subgoals.\nferent segments. This baseline inherently maintains a small penalty score P . We evaluate the segmentation performance on 1,000 random scripts.\nIn Table 3, we report the results of segmentation experiment on wikiHow dataset as a representation, where a smaller average segment distance indicates a structurally similar segmentation to the gold standard. We report two settings on k = 3 and 4 respectively. On the one hand, the baseline method is demonstrated to be strong since they produce comparable results with NSP and Perplexity methods. On the other hand, the Clustering and Topic Detecting methods outperform the baselines. Topic Detecting produces the best score of 3.82 when P = 3, and 4.69 when P = 4. As such, we chose it as our segmentation method for this work." }, { "figure_ref": [ "fig_3" ], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [], "text": "Scripts: Long vs. Short On wikiHow, about 25% of the scripts are written in one whole segment without subgoals. We are interested to see whether our method can improve the generation of these scripts as well. We extend this question by navigating the impact of our solution on scripts of different lengths and with different number of subgoals. The dividing of the segments can vary according to the complexity of the goal, and the author's writing style -some authors prefer to write scripts all in one segment. We categorize the test set according to number of segments in ground truth scripts into \"1\", \"2\", \"3\", \"4\" and \"5 or above\", where the number of steps are averaged as 9.5, 13.3, 16.1, 20.8 and 27.9, respectively. We select ROUGE-L as a representative metric, and Blue color represents our method using segmentation separated with special tokens. Yellow color represents our method using both segmentation and subgoals. plot a graph of performance with respect to the number of segments in Figure 4. From the result, it is evident that the improvement is outstanding with long scripts (more segments) in test set. Overall, the baseline scripts showcase a downward trend as the number of segments increases, since decoding is often more challenging when the texts are longer -this is a common difficulty for long text generation. Our methods tackle this problem by taking a break in the middle and providing room for adjustment at decoding time with segmentation and subgoals. Consequently, hierarchical generation manifests a rising trend, especially with gold segment and subgoals. Another interesting phenomenon is that the performance improves, although not significantly, for the single segment scripts. Although the authors do not divide segments when composing scripts, the tasks may still inherently manifest a hierarchical structure, as solving a task naturally falls into different stages.\nSubgoals: Are these Necessary Since the automatic evaluation showcases an improvement in the quality of generated scripts, we hereby further investigate if such improvement is caused by the segmentation only. We experiment by formatting the training data scripts with special token at position of a new segment, but not adding the subgoals. We explored two settings. The first setting uses gold segmentation, and the second setting uses the predicted segmentation. We evaluated the scripts using the metrics in § 5.1. The result in Figure 5 shows that under both settings, simply with a few special tokens separating the steps in training dataset, the generated text improves in quality. However, the results are still worse than those with subgoals, explaining the significance of including subgoals in training data. In addition, for human, subgoals explainability for why models choose to make specific segments during script generation." }, { "figure_ref": [ "fig_4" ], "heading": "Case Study and Error Analysis", "publication_ref": [], "table_ref": [], "text": "We present two example scripts generated using our method with the wikiHow dataset. Through these two examples, We analyze the common errors and the typical mistakes encountered. We put two more example scripts from Instructables dataset in appendix E since they are rather long. These Instructables scripts manifest not only the common errors in this section but also a third typical mistake.\nRepetitive Subgoals According to our observation, the pervasive problem is that the generated subgoals are repeats of one another or the goal. This error appears in examples in Figure 6. One cause of this error is the inaccurate segmentation in the training dataset, which raises the difficulty of the subgoal prediction. Frequently, the subgoal labeled to a segment is no other but this segment's main goal. The reason is that the segment is a fraction of the script used as input when training the subgoal predictor. Moreover, the frequent occurrence of repetitive subgoals in the training dataset may seem like a pattern for the generation model, generating more scripts with repetitive subgoals. A revised loss function that penalizes repetition among goals and subgoals is a possible solution.\nIrrelevant or Low-Quality Steps Another mistake with the generated scripts resides in the quality of the steps. For instance, in Figure 6, the model possibly mistakes \"black powder\" for \"black pepper\" and generates steps related to cooking. This mistake could originate from the lack of weaponrelated knowledge in the training dataset. Figure 7 shows the script of \"go green,\" with two subgoals. Despite the reasonable subgoals generated, the steps under \"reduce carbon footprint\" are irrelevant. The correct interpretation of \"go green\" is about environmental-friendly measures, while the steps discuss \"green and healthy lifestyle.\" Since both wikiHow and Instructables use pictures to supplement text descriptions, a multi-modal approach may reduce ambiguity in the goal interpretation." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work studies a new task of hierarchical script generation. To facilitate the research on this task, we contribute a new dataset Instructables to supplement the existing wikiHow resource, gathering over 100,000 hierarchical scripts. We further propose a method to build the benchmark, which learns to segment steps before script generation and concludes the subgoal each segment represents. Then, we finetune a T5-base model using prompts that combine subgoals and steps with special tokens. Experiment results from both automatic and human evaluation show that scripts generated by our method are in better quality than the baseline. Meanwhile, the gap towards the performance upper-bound still indicates much room for improvement. In the future, we are interested in exploring the idea of hierarchical generation in dealing with paragraphs in document generation." }, { "figure_ref": [], "heading": "A Comparison between wikiHow and Instructables", "publication_ref": [], "table_ref": [], "text": "In this section we compare the features and statistics between dataset Instructables and wikiHow in detail.\nThe two datasets are different in multiple aspects. In terms of content, Instructables includes innovative thoughts on building concrete objects (e.g., toy rocket), While wikiHow incorporates daily-life experiences for possibly abstract concepts (e.g., live healthily). In terms of language style, Instructables is subjective (e.g., I built it with ...), and wikiHow is relatively objective with the use of imperatives (e.g., Take a good sleep). Regarding domain, Instructables involves six domains like circuits and craft, while wikiHow spans over 19 domains like arts and sports.\nFor Instructables dataset, on average, there are 5.2 subgoals for each script, 2.6 steps per subgoal, and 18.1 words per step. We also collect this statistics for wikiHow dataset, which includes 112,451 scripts, 278,680 subgoals, 2,057,088 steps, and 12,281,074 words. For wikiHow, there are 2.5 subgoals for each script, 7.4 steps per subgoal, and 6.0 words per step on average. The average sentence length of Instructables is much longer due to its narrative-based language style. Compared to wikiHow, which focuses on daily-life experiences, Instructables is more challenging since many items to build are highly professional and complicated (i.e., a Blind Assist Ultrasonic Navigator), which also explains the reason for the large average number of subgoals per script." }, { "figure_ref": [], "heading": "B Segmentation Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we formally explain the algorithms and implementations of each segmentation method in detail." }, { "figure_ref": [], "heading": "B.1 Next Sentence Prediction", "publication_ref": [ "b4" ], "table_ref": [], "text": "We separate two consecutive steps if their continuity is predicted as negative via next sentence prediction -the two steps are talking about different topics. Given a list of ordered steps, we concatenate every two consecutive steps as [CLS]step1[SEP]step2[SEP] and calculate the probability score using BERT-base (Devlin et al., 2019) model. Specifically, we assume that a higher probability score indicates that the latter step is more rational than the previous one. We determine K lowest probability scores corresponding to K Algorithm 1 Finding segmentation points with topic detecting. N is the number of steps in the script. x and y are start and end positions of subset. S is the list of segmentation points we look for." }, { "figure_ref": [], "heading": "Require: N ≥ 3", "publication_ref": [], "table_ref": [], "text": "At least 3 steps in a script x ← 0 y ← 2 S ← list while y < N do if topicN umber(x, y) < 2 then y ← y + 1 else if topicN umber(x, y) ≥ 2 then S ← y -2\nx ← y -1 y ← y + 1 end if end while segmentation points. In experiments, we heuristically find the best K between 2 to 3." }, { "figure_ref": [], "heading": "B.2 Perplexity", "publication_ref": [], "table_ref": [], "text": "Another approach is to measure the plausibility of a list of steps with perplexity. Assume that for a list of steps [S x to S y ], the gold segment position is between S i and S i+1 (x < i < y), separating the list into two segments [S x to S i ] and [S i+1 to S y ]. The perplexity of [S x to S i+1 ] should be greater than that of [S x to S i ] since an additional sentence not belonging to the segment makes it less natural. Similarly, the perplexity of [S i to S y ] should be greater than that of [S i+1 to S y ]. We iterate from i = 0 to i = N -1 (number of steps), and mark the i as a segmentation point if it satisfies both perplexity requirements." }, { "figure_ref": [], "heading": "B.3 Agglomerative Clustering", "publication_ref": [ "b19", "b25" ], "table_ref": [], "text": "Instead of looking for segmentation points, we apply hierarchical agglomerative clustering (HAC) (Müllner, 2011) to group steps based on their sentence embeddings using SentenceBert (Reimers and Gurevych, 2019). Specifically, we merge two steps if their euclidean distance falls below a threshold while maintaining variance within all clusters minimized. Since HAC does not guarantee consecutive steps in the same cluster (e.g., if steps 1,2,4,5 are in cluster A, step 3 could be in step B), we make adjustment by recursively sending each step to the cluster with most of its neighbours, and sort the steps in the end." }, { "figure_ref": [], "heading": "B.4 Topic Detecting", "publication_ref": [], "table_ref": [], "text": "While NSP compares topics locally between 2 steps, we design this method to detect topics glob-ally among multiple steps. As shown in algorithm 1, starting with the first two steps, we add one step each time. A segmentation point is marked before the new step if more than one topic is detected. The topic detecting is implemented using fastclustering5 , which calculates cosine-similarity scores among the steps based on their sentence embeddings. Assuming that steps that share a topic have higher similarity scores, steps are assigned to the same community if their scores are above a threshold. In practice, we find 0.65 a reasonable threshold." }, { "figure_ref": [], "heading": "C Model Configuration", "publication_ref": [], "table_ref": [], "text": "We fine-tune the T5 model from the Hugging-Face service6 . We use Adam (Kingma and Ba, 2014) for optimization with the learning rate of 1e-4. We set the batch size 16 to fit the memory of one NVIDIA Tesla v100 GPU. The number of epochs is limited to 3 for models to converge within a reasonable running time. Training takes around 6 hours to finish on wikiHow and 12 hours on Instructables. We choose the model with the highest development performance to be evaluated on the test set." }, { "figure_ref": [ "fig_5", "fig_6" ], "heading": "D Human Evaluation Details", "publication_ref": [], "table_ref": [], "text": "In this section, we explain in detail our human evaluation settings. For step evaluation, each question provides a goal and two scripts, asking the annotator which script better achieves the goal. Three options are provided, \"A is better\", \"B is better\", and \"Not sure\". Note that we flatten the script generated by our method and randomize the positions (A or B) of scripts in different questions to prevent the annotators from possibly identifying which script is ours from non-content information. For subgoal evaluation, we evaluate the association between 1) goal and subgoal and 2) subgoal and step. For goal-subgoal evaluation, each question provides a goal and a list of subgoals generated, asking the annotators if the subgoals are valid components of the goal and assist in achieving the goal. For subgoal-step evaluation, each question provides a goal, a subgoal, and a list of steps, asking the annotators if the subgoal is representative of the steps considering the goal.Three options are provided, \"Yes\", \"No\", and \"Can't decide\". For each criterion (step, goal-subgoal, and subgoal-step), we randomly generate 100 questions from the test set of each dataset (wikiHow, Instructables), giving a total of 600 questions. We employ four human annotators in total. All annotators are graduate students and native or proficient speakers of English. All annotators possess adequate real-life knowledge and experiences to make reasonable judgments about the provided goals and have no potential conflicts of interest in this work. Each set of questions is answered by two different annotators, and for any disagreement, a third annotator will provide the final answer.\nWe hereby present the screenshots of the human evaluation questions. Figure 8 corresponds to the questions which compare the script generated by our method with the baseline. Figure 9 and 10 correspond to the questions which evaluate the quality of generated subgoals Since each question is answered by two annotators, we report Inter Annotator Agreement (IAA) by providing the number of questions (out of 100) that a third annotator is not required. For the question \"Which script better achieves the goal\", the IAA are 72 for wikiHow and 81 for Instructa- bles. For the question \"Are the subgoals helpful to achieve the goal\", the IAA are 78 for wikiHow and 50 for Instructables. For the question \"Is the subgoal representative of the steps\", the IAA are 69 for wikiHow and 70 for Instructables." }, { "figure_ref": [ "fig_9" ], "heading": "E More Case Studies", "publication_ref": [], "table_ref": [], "text": "In this section we further analyze two hierarchical scripts generated using our method from Instructables dataset.\nThe example in Figure 11 also manifests the two common errors presented in § 6, repetitive subgoals and low-quality steps. To elaborate, the steps ambiguously mention the concepts of processing the woods, painting, and assembling the How to Make Wooden Toy Trucks Input:" }, { "figure_ref": [ "fig_10" ], "heading": "Output:", "publication_ref": [ "b15" ], "table_ref": [], "text": "Subgoal: a Wooden Toy Truck 1. The first step is to cut the wood into pieces.\n2. You can use any size you want but I used 2x4's for this project.\n3. Once you have all of your pieces cut out, it's time to glue them together. 4. Now that everything is dry, we are going to start painting! 5. We will be painting the sides of the trucks with white paint and then adding some more details on each side so they don't look too messy. 6. After applying the paint, let it dry completely before moving onto the next step.\nSubgoal: a Wooden Truck 1. This part is pretty simple: just take one piece of wood and put it in between the two pieces of wood. 2. Make sure not to cover the whole thing or else it won't hold its shape well. wheel and truck body. However, the overall quality of the steps are rather low and does not lead to the fulfillment of the goal. A goal like this with complicated building steps may still be challenging for models at T5-base level. Another problem observed is inaccurate segmentation or subgoal. Each subgoal is supposed to be a good summary of the corresponding steps. The example in Figure 12 shows an incorrect subgoal, \"Yummy Eggs\", while the steps are about preparation. The \"egg\" possibly comes from the keyword \"breakfast\" at decoding time. The second subgoal is not perfect either, whereby the last three steps discuss the idea of \"baking\" instead of \"making pizza crust\"; hence they should be separated into a third segment. This error manifests as an area of improvement in models' ability to understand the big picture of the event. It might be helpful to revise the decoding strategy for beam search in the future. By adding heuristics (Lu et al., 2022) in beam search, it is possible to improve the constrained generation process to look into the goal globally instead of locally." }, { "figure_ref": [], "heading": "F More Qualitative Examples", "publication_ref": [], "table_ref": [], "text": "Aside from the examples shown in section appendix E, We provide 3 examples scripts generated and compare them with the ground truth. The outputs on the left hand side are generated with our method and the scripts on the right hand side are How to Make Breakfast Pizza Input:" }, { "figure_ref": [], "heading": "Output:", "publication_ref": [], "table_ref": [], "text": "Subgoal: Yummy Eggs 1. You will need: 1 cup of flour 2 tablespoons of baking powder 1/2 teaspoon of salt 3/4 cup of water (I used my hands) Salt to taste Pinch of pepper 2. To make the dough you will need a large bowl, a small mixing bowl, an electric mixer, a pizza cutter, some scissors Subgoal: Easy Pizza Crust 1. Add your eggs and mix them together. 2. I like to use an electric mixer because it is easier to work with.\n3. It's also easy to do when you are making this crust. 4. If you don't have one, just put some on top of each other so that they won't get stuck in the pan. 5. This makes for a nice crisper crust. 6. Now let's start by adding our toppings; cheese, tomato sauce, or whatever else you would like. 7. For mine I added about 14 cup of milk at a time. 8. Once everything is mixed up we can begin cutting into slices. 9. When all the pieces are cut out take off the foil from the oven and place onto a plate. 10. After placing the slicer over the pizza then flip it over and repeat steps 2-5 until you reach the desired thickness. 11. Take care not to burn yourself while doing this step. " }, { "figure_ref": [], "heading": "Acknowledgement", "publication_ref": [], "table_ref": [], "text": "We appreciate the reviewers for their insightful comments and suggestions. Xinze, Yixin, and Aixin were supported under the RIE2020 Industry Alignment Fund -Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). Yixin is supported by the Singapore Ministry of Education (MOE) Academic Research Fund (AcRF) Tier 1 grant. Muhao Chen was supported by the National Science Foundation of United States Grant IIS 2105329, an Amazon Research Award and a Cisco Research Award." }, { "figure_ref": [], "heading": "Limitation", "publication_ref": [], "table_ref": [], "text": "The dataset Intructables, same as wikiHow, has a two level hierarchy of goal-subgoals-steps. Such structure is because of the writing habits of human authors. Due to the lack of datasets with deeper nested hierarchy, our work does not investigate the cases when there are multiple levels of subgoals. In addition, this work focuses on the presentation of task and dataset, and does not explore the performance of more advanced language models on the task. Relevant studies can be conducted in future works." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [], "text": "Our method is capable of generating large number of hierarchical scripts. Learning knowledge from different sources of information, the model might be misused into generating unsafe contents upon asking inappropriate questions. For now this seems unlikely since there is no offensive content in out collected dataset." } ]
Goal-oriented Script Generation is a new task of generating a list of steps that can fulfill the given goal. In this paper, we propose to extend the task from the perspective of cognitive theory. Instead of a simple flat structure, the steps are typically organized hierarchically -Human often decompose a complex task into subgoals, where each subgoal can be further decomposed into steps. To establish the benchmark, we contribute a new dataset, propose several baseline methods, and set up evaluation metrics. Both automatic and human evaluation verify the high-quality of dataset, as well as the effectiveness of incorporating subgoals into hierarchical script generation. Furthermore, We also design and evaluate the model to discover subgoal, and find that it is a bit more difficult to decompose the goals than summarizing from segmented steps.
Take a Break in the Middle: Investigating Subgoals towards Hierarchical Script Generation
[ { "figure_caption": "Figure 2 :2Figure 2: Stacked bar chart showing the result of human evaluation on question related to steps. Blue, grey and orange color respectively indicates the percentage that our script is preferred, baseline script is preferred, and cannot decide.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Results on subgoals. Blue and grey color respectively indicates the percentage of \"Yes\" and \"No\"", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: BLEU-1 and ROUGE-L scores for generated scripts. Green color represents the baseline method.Blue color represents our method using segmentation separated with special tokens. Yellow color represents our method using both segmentation and subgoals.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 7 :7Figure 7: The \"How to go green\" script (from wiki-How)", "figure_data": "", "figure_id": "fig_4", "figure_label": "7", "figure_type": "figure" }, { "figure_caption": "Figure 8 :8Figure 8: Screenshot of a question shown to the annotators, asking them to select the script that achieves the goal better from the two.", "figure_data": "", "figure_id": "fig_5", "figure_label": "8", "figure_type": "figure" }, { "figure_caption": "Figure 9 :9Figure 9: Screenshot of a question shown to the annotators, asking them to judge if the subgoals generated are valid components of the given goal, and are helpful in achieving the goal.", "figure_data": "", "figure_id": "fig_6", "figure_label": "9", "figure_type": "figure" }, { "figure_caption": "Figure 10 :10Figure 10: Screenshot of a question shown to the annotators, asking them to judge if the generated subgoals are representative of the steps, considering the provided goal.", "figure_data": "", "figure_id": "fig_7", "figure_label": "10", "figure_type": "figure" }, { "figure_caption": "3. Next, add another piece of wood around the top of the truck where the wheels go. 4. It should now look like the picture above.", "figure_data": "", "figure_id": "fig_8", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 11 :11Figure 11: The \"How to make wooden toy trucks\" script (from Instructables)", "figure_data": "", "figure_id": "fig_9", "figure_label": "11", "figure_type": "figure" }, { "figure_caption": "Figure 12 :12Figure12:The \"How to make breakfast pizza\" script (from Instructables)", "figure_data": "", "figure_id": "fig_10", "figure_label": "12", "figure_type": "figure" }, { "figure_caption": "Many authors post Number of total scripts, subgoals, and steps in dataset Instructables by category.", "figure_data": "CategoryScripts SubgoalsStepsCircuits22,437109,917282,685Workshop 16,99194,554257,248Craft24,874137,471365,244Cooking12,91669,633189,371Living23,204114,113291,682Outside6,98634,39192,439TOTAL107,408 560,079 1,478,669", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "An example prompt for the goal How to learn Web Design is as below:To learn Web Design, <section> with Finding Web Design Resources, check online for web design courses and tutorials. Look into taking a class at a local college or university... <section> With Mastering HTML, familiarize yourself with basic HTML tags. Learn to use tag attributes...", "figure_data": "", "figure_id": "tab_1", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Segment distances of the proposed segmentation methods on wikiHow dataset.", "figure_data": "", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Xinze Li; Yixin Cao; Muhao Chen; Aixin Sun
[ { "authors": "Alessandro Antonietti; Sabrina Ignazi; Patrizia Perego", "journal": "British Journal of Educational Psychology", "ref_id": "b0", "title": "Metacognitive knowledge about problem-solving methods", "year": "2000" }, { "authors": "M Matthew; Botvinick", "journal": "Trends in cognitive sciences", "ref_id": "b1", "title": "Hierarchical models of behavior and prefrontal function", "year": "2008" }, { "authors": "Snigdha Chaturvedi; Haoruo Peng; Dan Roth", "journal": "", "ref_id": "b2", "title": "Story comprehension for predicting what happens next", "year": "2017" }, { "authors": "Muhao Chen; Hongming Zhang; Haoyu Wang; Dan Roth", "journal": "Association for Computational Linguistics", "ref_id": "b3", "title": "What are you trying to do? semantic typing of event processes", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Angela Fan; Mike Lewis; Yann Dauphin", "journal": "Association for Computational Linguistics", "ref_id": "b5", "title": "Hierarchical neural story generation", "year": "2018" }, { "authors": "Angela Fan; Mike Lewis; Yann Dauphin", "journal": "Association for Computational Linguistics", "ref_id": "b6", "title": "Strategies for structuring story generation", "year": "2019" }, { "authors": "David Herman", "journal": "PMLA/Publications of the Modern Language Association of America", "ref_id": "b7", "title": "Scripts, sequences, and stories: Elements of a postclassical narratology", "year": "1997" }, { "authors": "Zhiting Hu; Zichao Yang; Xiaodan Liang; Ruslan Salakhutdinov; Eric P Xing", "journal": "", "ref_id": "b8", "title": "Toward controlled generation of text", "year": "2017" }, { "authors": " Pmlr", "journal": "", "ref_id": "b9", "title": "", "year": "" }, { "authors": "P Diederik; Jimmy Kingma; Ba", "journal": "", "ref_id": "b10", "title": "Adam: A method for stochastic optimization", "year": "2014" }, { "authors": "Nikolaos Lagos; Matthias Gallé; Alexandr Chernov; Ágnes Sándor", "journal": "IOS Press", "ref_id": "b11", "title": "Enriching how-to guides with actionable phrases and linked data", "year": "2017" }, { "authors": "Brian Lester; Rami Al-Rfou; Noah Constant", "journal": "Association for Computational Linguistics", "ref_id": "b12", "title": "The power of scale for parameter-efficient prompt tuning", "year": "2021" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "Association for Computational Linguistics", "ref_id": "b13", "title": "A diversity-promoting objective function for neural conversation models", "year": "2016" }, { "authors": "Chin-Yew Lin", "journal": "Association for Computational Linguistics", "ref_id": "b14", "title": "ROUGE: A package for automatic evaluation of summaries", "year": "2004" }, { "authors": "Ximing Lu; Sean Welleck; Peter West; Liwei Jiang; Jungo Kasai; Daniel Khashabi; Le Ronan; Lianhui Bras; Youngjae Qin; Rowan Yu; Noah A Zellers; Yejin Smith; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b15", "title": "NeuroLogic a*esque decoding: Constrained text generation with lookahead heuristics", "year": "2022" }, { "authors": "Ximing Lu; Peter West; Rowan Zellers; Le Ronan; Chandra Bras; Yejin Bhagavatula; Choi", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "Neuro-Logic decoding: (un)supervised neural text generation with predicate logic constraints", "year": "2021" }, { "authors": "Qing Lyu; Li Zhang; Chris Callison-Burch", "journal": "Association for Computational Linguistics", "ref_id": "b17", "title": "Goal-oriented script construction", "year": "2021" }, { "authors": "Ashutosh Modi; Ivan Titov", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Inducing neural models of script knowledge", "year": "2014" }, { "authors": "Daniel Müllner", "journal": "", "ref_id": "b19", "title": "Modern hierarchical, agglomerative clustering algorithms", "year": "2011" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "Association for Computational Linguistics", "ref_id": "b20", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Paolo Pareti; Benoit Testu; Ryutaro Ichise; Ewan Klein; Adam Barker", "journal": "Springer", "ref_id": "b21", "title": "Integrating know-how into the linked data cloud", "year": "2014" }, { "authors": "Baolin Peng; Chunyuan Li; Jinchao Li; Shahin Shayandeh; Lars Liden; Jianfeng Gao", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b22", "title": "Soloist: Building task bots at scale with transfer learning and machine teaching", "year": "2021" }, { "authors": "Karl Pichotta; Raymond J Mooney", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Using sentence-level LSTM language models for script inference", "year": "2016" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b24", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Nils Reimers; Iryna Gurevych", "journal": "Association for Computational Linguistics", "ref_id": "b25", "title": "Sentencebert: Sentence embeddings using siamese bertnetworks", "year": "2019" }, { "authors": "Fei Wang; Zhewei Xu; Pedro Szekely; Muhao Chen", "journal": "", "ref_id": "b26", "title": "Robust (controlled) table-to-text generation with structure-aware equivariance learning", "year": "2022" }, { "authors": "Liang Yao; Chengsheng Mao; Yuan Luo", "journal": "", "ref_id": "b27", "title": "Kgbert: Bert for knowledge graph completion", "year": "2019" }, { "authors": "Hongming Zhang; Muhao Chen; Haoyu Wang; Yangqiu Song; Dan Roth", "journal": "", "ref_id": "b28", "title": "Analogous process structure induction for sub-event sequence prediction", "year": "2020" }, { "authors": "Jiaje Zhang; Donald A Norman", "journal": "Cognitive science", "ref_id": "b29", "title": "Representations in distributed cognitive tasks", "year": "1994" }, { "authors": "Li Zhang; Qing Lyu; Chris Callison-Burch", "journal": "", "ref_id": "b30", "title": "Reasoning about goals, steps, and temporal ordering with WikiHow", "year": "2020" }, { "authors": "Tianran Zhang; Muhao Chen; Alex At Bui", "journal": "Springer", "ref_id": "b31", "title": "Diagnostic prediction with sequence-of-sets representation learning for clinical events", "year": "2020" }, { "authors": "Tianyi Zhang; Varsha Kishore; Felix Wu; Kilian Q Weinberger; Yoav Artzi", "journal": "", "ref_id": "b32", "title": "Bertscore: Evaluating text generation with bert", "year": "2019" }, { "authors": "Shuyan Zhou; Li Zhang; Yue Yang; Qing Lyu; Pengcheng Yin; Chris Callison-Burch; Graham Neubig", "journal": "Association for Computational Linguistics", "ref_id": "b33", "title": "Show me more details: Discovering hierarchies of procedures from semi-structured web data", "year": "2022" } ]
[ { "formula_coordinates": [ 5, 85.38, 337.25, 189.73, 17.74 ], "formula_id": "formula_0", "formula_text": "[Subgoal], [Subgoal]. <section>, [steps]. <section>, [steps]" } ]
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b21", "b33", "b42", "b18", "b7", "b4", "b44", "b9", "b35", "b24", "b50", "b35", "b24", "b29", "b2", "b2", "b27", "b52", "b28", "b28" ], "table_ref": [], "text": "Self-supervised representation learning for natural images has continued to make vast progress in past years [21,33,42,18,8,5], quickly approaching, and in cases with significant data preprocessing surpassing supervised learning performance [44]. The advantage of self-supervision lies in the ability to leverage the large quantities of data that exist in the world without human annotations to learn high-quality representations. However, most established self-supervised visual learning methods typically project representations on Euclidean or Hyperspherical manifolds, and in some cases disregarding the underlying hyperbolic structure of the data.\nWhen attempting to capture hyperbolic data structures, zero and positive curvature spaces exhibit some inherent implications as opposed to negative curvature hyperbolic space, most notably the inability to embed hierarchical semantic relationships between points in space, a well-established principle of learning good representations [10]. Although there has been debate to what extent natural images exhibit underlying hyperbolic structure of semantics, recent works have demonstrated via empirical metrics the presence of latent hierarchical tree-like structures in standard computer vision datasets [35,24], and subsequently shown the capabilities of hyperbolic representations to excel in these settings [50,35,24]. Many of these advancements in hyperbolic learning have been seen in metric and prototype learning settings, specifically for few-shot learning, where the highly separable semantic hierarchies lead to better-performing few-shot classifiers [29]. Self-supervision via prototype learning has also demonstrated state-of-the-art performance in few-shot and low-shot tasks [3], therefore we leverage the hierarchical learning capabilities of hyperbolic prototype learning in the self-supervised setting for improved few-shot learning.\nIn this work, we propose the use of hyperbolic representation spaces in Self-Supervised Learning (SSL) to more appropriately embed the natural semantic class hierarchies presented in the data. We first demonstrate the capability of hyperbolic learning on a leading low-shot learning method of Masked Siamese Networks (MSNs) [3]. Here we project the output Euclidean representation onto the Poincaré ball and use the Poincaré distance in gyrovector space in place of the cosine similarity in the probability computation of the codes. We empirically show that such a conversion to hyperbolic space can lead to an improvement in representation quality for few-shot downstream tasks. Importantly and unlike previous methods [27,52], we propose the use of fully hyperbolic projection networks, projecting the output of the encoder to hyperbolic space to ensure the hyperbolicity we aim to learn in the representations is utilised in downstream tasks.\nIn addition, we propose a new self-supervised method based on MSNs that leverages the advancements in hyperbolic prototype learning [28] where instead of continually learning prototypes, we place the prototypes on the ideal boundary of the Poincaré ball of hyperbolic space. We train our network to produce good hyperbolic representations through a new loss function based on the Busemann distance metric [28]. We empirically demonstrate improvements over the Euclidean baseline and our hyperbolic conversion on few-shot and extreme low-shot learning tasks. Furthermore, we show that our hyperbolic methods are competitive with other Euclidean methods through standard self-supervised linear evaluation and transfer learning benchmarks.\nTo summarise, the main contributions of the paper are the following:\n• We propose a hyperbolic reformulation of the MSN clustering-based loss function, Hyperbolic Masked Siamese Networks (HMSN). • We utilise ideal prototypes that lie on the ideal boundary of the Poincaré ball to encourage full utilisation of the space. We introduce Hyperbolic Masked Siamese Networks with Ideal Prototypes (HMSN-IP), based on the MSN method, employing the Busemann prototype loss from metric learning as a measure of distance between embeddings and the ideal prototypes. • We propose to project Euclidean representations to the Poincaré ball of Hyperbolic space at the output of the encoder and present the use of hyperbolic projection heads as a solution to preserve hyperbolic structure in the output of the Euclidean encoder for downstream tasks. • We empirically demonstrate that both our propositions outperform the Euclidean counterparts on few-shot and low-shot learning tasks in fewer embedding dimensions, whilst remaining competitive in linear evaluation tasks." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b38", "b47", "b34", "b5", "b33", "b37", "b7", "b4", "b30", "b21", "b7", "b14", "b15", "b56", "b16", "b51", "b1", "b18", "b2", "b16", "b18", "b2", "b55", "b48", "b2", "b32", "b20", "b2", "b26", "b29", "b35", "b41", "b0", "b43", "b54", "b36", "b19", "b6", "b10", "b35", "b24", "b50", "b24", "b18", "b25", "b49", "b28", "b24", "b27", "b52" ], "table_ref": [], "text": "Self-Supervised Representation Learning. In self-supervised learning, we consider a set of unlabelled images D which we aim to learn representations for use in downstream tasks. We pre-train on D and then adapt the representations via a supervised task using a set of images S and their corresponding labels where here S << U . The most successful methods to learn good representations employ view-invariant joint-embedding architectures [38,47,34,6,33,37,8,5] which aim to predict the embedding of a view from another view of the same image. There exist a number of methods to train joint embedding predictive architectures, non-contrastive, which maximise the information content of the embeddings [30,21,8], and distillation, in which the outputs of one branch of the Siamese join embedding architecture act as a target for the other branch [15,16,56,17,51,2]. The latter is the focus of this work, primarily the methodologies DINO [18] and its later derivation MSN [3], which utilise discrete cluster prototypes to quantise the output representations.\nClustering approaches have excelled with the use of vision transformers achieving near-to or stateof-the-art performance in most self-supervised benchmarks [17,18,3]. More recently there has been greater thought placed into masking strategies of these approaches with the aim to learn better representations through prediction or invariance to the missing regions [55,48,3,32,20]. These approaches are particularly of interest in this work as a result of their exceptional performance in lowshot and extreme low-shot training settings [3]. As such, given our aims to improve low-shot learning, we base our loss and hyperbolic representation space reformulations on the leading architectural designs, specifically MSN.\nHyperbolic Learning. The advocation for learning representations or embeddings in non-Euclidean space in deep learning has, in recent years, increased rapidly. Hyperbolic reformulations of deep learning layers across both intermediate [26] and classification layers [29,35], as well as whole architectural propositions [41] have been proposed with improved performance and computational efficiency. Hyperbolic deep learning has seen great success in tasks where the representation of tree-like structures is beneficial like natural language [1,43,54] and graph neural networks [36,19,7]. The application of hyperbolic deep learning in vision is still however foundational, yet vast work has been undertaken in visual metric learning [11,35,24], with [50] performing hierarchical unsupervised similarity based metric learning, [24] extending the DINO [18] architecture with hyperbolic contrastive learning for metric learning. The latter which projects the output embedding space to hyperbolic space further motivates our decision to base our work on MSN given its known capabilities, albeit not in a self-supervised setting.\nMoreover, hyperbolic metric learning and prototype learning approaches have demonstrated their capabilities in few-shot and zero-shot [25,49,28] learning tasks, outperforming Euclidean embedding methods by some margins. Given the connections between metric learning, and prototype learning to self-supervision, there exhibit clear enablers between the domains. The work [24] explores these, initially investigating hyperbolic self-supervised learning before re-evaluating it as a metric learning approach given improved performance in this domain. Contrastive self-supervision has also been addressed in [27,52] which proposes a number of hyperbolic reformulations of prominent SSL and contrastive objectives. Our work aims to further explore the use of hyperbolic embedding space for self-supervised learning, advocating for its use to help provide greater insights and representation quality for all tasks while leveraging its strong performance in few-shot and low-shot learning.\n3 Prerequisites" }, { "figure_ref": [], "heading": "Hyperbolic Learning: The Poincaré Ball Model", "publication_ref": [ "b13", "b26", "b35", "b50", "b40", "b45", "b26", "b39", "b21", "b7" ], "table_ref": [], "text": "Hyperbolic space D d is the unique simply connected d-dimensional Riemannian manifold of constant negative curvature, where curvatures measure the deviation from flat Euclidean geometry. The constant negative curvature of the hyperbolic space, although analogous to the Euclidean sphere, presents some significant differences in geometric properties. As such hyperbolic space cannot be isometrically embedded into Euclidean space, yet there exist a number of conformal models of hyperbolic geometry [14] employing hyperbolic metrics providing a subset of Euclidean space. In this work, we employ the Poincaré ball model for hyperbolic geometry given its wide adoption in computer vision and unique properties ideal for embedding between euclidean and hyperbolic representations.\nThe Poincaré ball model\n(D d c , g Dc ) is defined by the manifold D d c = {x ∈ R d : c x 2 < 1} with the Riemannian metric g Dc = (λ c x ) 2 g E = 2 1 -c x 2 2 I d(1)\nwhere g E = I d is the Euclidean metric tensor and λ c x = 2 1-c x 2 is the conformal factor with c, a hyperparameter, controlling the curvature and radius of the ball. The conformal factor scales the local distances which approach infinity near the boundary of the ball, providing the unique property of space expansion. Such space expansion of hyperbolic spaces makes them continuous analogues of trees, given volumes of an object with diameter r scale exponentially with r. Thus, when referring to a tree with branching factor k, there are O(k l ) nodes at level l, where l serves as a discrete analogue of the radius. This is the fundamental property which the advocating work [26,35,50] and ours takes advantage of, allowing for the efficient embedding of natural hierarchies [40].\nOur approach employs encoders that operate in Euclidean space, and as such, we need to define a bijection from Euclidean embeddings of the encoder to the Poincaré ball of hyperbolic space. To achieve this we apply an exponential map exp c v (x) : R d → D d c on Euclidean vector x with some fixed base point v ∈ D d c which we set v to be the origin, simplifying the exponential map and measures of distance which will be defined later. The exponential map is as follows,\nexp c v (x) = v⊕ c tanh √ c λ c v x 2 x √ c x(2)\nwith its inverse logarithm map given by\nlog c v (x) = 2 √ cλ c v arctanh √ c -v⊕ c x -v⊕ c x -v⊕ c x .(3)\nGiven the change in geometry, hyperbolic spaces do not allow for standard vector space operations, as such we employ gyrovector formalism for standard operations such as addition, subtraction, multiplication [45,26]. Therefore, from Eq. 2, ⊕ c is defined as the gyrovector or Möbius addition of a pair of points x, y\n∈ D d c v⊕ c w = (1 + 2c v, w + c w 2 )v + (1 -c v 2 )w 1 + 2c v, w + c 2 v 2 w 2 . (4\n)\nLeading from gyrovector formalism is the notion of distance, vital for self-supervised losses where typically the Euclidean cosine similarity and distance, are employed [39,21,8]. On the Poincaré ball of hyperbolic space we define the distance between x, y ∈ D d c as follows:\ndist D (x, y) = 2 √ c arctanh √ c -x ⊕ c y ,(5)\nwhich with c = 1 the geodesic is recovered, a vital concept given cosine similarity is analogous to sphere geodesic distance, whereas c → 0 the Euclidean distance is produced." }, { "figure_ref": [], "heading": "Self-Supervised Learning: Masked Siamese Networks", "publication_ref": [ "b2", "b2", "b23", "b2", "b2" ], "table_ref": [], "text": "In this work, we use the Masked Siamese Network (MSN) [3] as a base for our hyperbolic implementation due to its leading performance as a few-shot learner in self-supervision, its computational efficiency, and established clustering based loss formulation. This therefore provides us with the best opportunity for baseline comparison when striving for improved low-shot performance when employing hyperbolic representation space.\nIn MSN data augmentations are applied to image x i to produce the target view x + i and a set of M ≥ 1 anchor views x i,1 , x i,2 , . . . , x i,M where i is the index of the sample mini-batch of images B ≥ 1. The anchor views x i,m are subsequently patched into N × N non-overlapping regions and masked randomly or via a focal scheme [3] denoted by xi,m . The encoders f θ and fθ are identical although differently parameterised trunks of the ViT [23] outputting representations corresponding to the [CLS] token. The anchor views xi,m processed by the anchor encoder which is parameterised by θ produce the representations z i,m ∈ R d while the target views x + i processed by the target encoder parameterised by θ produce the representations z + i ∈ R d . The target encoder is not directly updated by the optimisation process with gradients only computed with respect pot the anchor predictions, rather θ are updated via an exponential moving average of the anchor encoder. Each encoder is trained with a 3-layer non-linear projection head g θ (•) and gθ(•) with batch-normalisation at the input and hidden layers, which is later discarded during evaluation.\nThe metric which drives invariance between views is the soft distribution over a set of K > 1 learnable prototypes of dimension d denoted by q ∈ R K×d . The distribution is computed as the cosine similarity between the prototypes q and the L 2 -normalized anchor and target views pairs where for the anchor view representation z i,m the prediction distribution p i,m ∈ ∆ K is given by\np i,m := softmax z i,m • q τ .(6)\nThe same formulation applies to the target view representations z + i substituting the anchor views to produce target predictions p + i ∈ ∆ K . The temperature τ ∈ (0, 1) is always chosen to be larger for the anchor predictions (τ + < τ ) to encourage sharper target predictions producing confident low entropy anchor predictions which have been shown to provably discourage collapsing solutions [3].\nThe network is trained by the cross-entropy loss H(p + i , p i,m ) to penalise differing predictions of views that originate from the same image. This cross-entropy loss is regularised by mean entropy maximisation to encourage the use of the full set of prototypes, which maximises the entropy of the mean anchor predictions H(p). The overall objective to be minimised when optimising over θ and q is given by Eq.7 where λ controls the weight of the mean entropy maximisation regularisation.\n1 M B B i=1 M m=1 H(p + i , p i,m ) -λH(p)(7)\nFor a more detailed description of MSN, we refer the reader to [3], and for implementation details we refer to the supplementary material." }, { "figure_ref": [], "heading": "Hyperbolic Masked Siamese Networks", "publication_ref": [ "b24", "b27", "b52", "b8", "b31" ], "table_ref": [], "text": "Learning of hyperbolic embeddings under the MSN framework can most simply be achieved by mapping the euclidean output embeddings of the network to the Poincaré ball model, via Eq.2 followed by the substitution of the euclidean vector operations of the objective function Eq.7 for hyperbolic gyrovector equivalents. This approach to hyperbolic reformulation has shown to be an effective method of learning hyperbolic visual representations [24] and in contrastive self-supervision [27,52]. We therefore first follow this methodology to examine the capabilities of hyperbolic selfsupervision, we refer to the reformulation as Hyperbolic Masked Siamese Network (HMSN). We begin by projecting the anchor and target representations to the Poincaré ball model by the exponential map Eq.2, and initialising the prototypes q normally on the same hyperbolic space. The standard euclidean cosine similarity in Eq. 6 to compute the prediction metric p i,m , is substituted for the geodesic distance of Eq.5. The reformulation of Eq. 6 results in the following prediction:\np D i,m := softmax dist D (z i,m , q) τ .(8)\nThe overall objective function in Eq.7 remains identical substituting p i,m for p D i,m although is trained by Riemannian Adam [9] as we are directly optimising jointly the prototypes in hyperbolic space and the euclidean parameters θ. We initialise the prototypes to be normal with a small standard deviation (0.01) centred around the origin for improved stability early in training. We also clip the Euclidean representations before projection to the Poincaré ball model as in [31] to assist in vanishing gradients when backpropagating from the hyperbolic space to the euclidean space as embeddings tend towards the boundary of hyperbolic space during training. " }, { "figure_ref": [ "fig_2", "fig_3" ], "heading": "Hyperbolic Masked Siamese Networks with Ideal Prototypes", "publication_ref": [ "b29", "b29", "b12", "b28", "b31", "b2" ], "table_ref": [], "text": "Learning prototypes by HMSN exhibits behaviour that results in the under utilisation of the hyperbolic embedding space. During training, the learnable prototypes tend towards the origin and converge to a region that is significantly distant from the boundary of the space, this phenomena is visualised in Figure 2 and Figure 4. This in-turn restricts the embedding space resulting in uncertain and less abstract positioning. A solution is to encourage the prototypes to lie closer to the boundary through additional regularisation term maximising euclidean norm, or to provide some prior to the embeddings to place them more akin to the hierarchies we aim to capture [29]. However, The former is naive and the latter requires annotations from human observers, not feasible in the self-supervised setting. To address this we place the prototypes at ideal points of the Poincaré ball.\nThe ideal points, I d , are positioned prior to training based on separation on the unit hypersphere S d for d ≥ 3 while positioned uniformly on S d when d = 2 given ideal points of the hyperbolic space D d are homeomorphic to S d [29]. As the set of ideal points lies on the boundary of the hyperbolic space, the geodesic distance Eq.5 from an ideal point to any point in hyperbolic space is infinite. Therefore, to measure the assignment of a hyperbolic embedding to an ideal prototype the Busemann loss function is used. In the Poincaré ball model, the Busemann function is given by Eq.9.\nb q (z i,m ) = log q -z i,m 2 (1 -z i,m 2 )(9)\nThe Busemann function [13] is considered to be a distance measured to infinity defined in any space.\nAs with the hyperbolic reformulation in Section 4 we can substitute the cosine similarity for the Busemann function and position the prototypes at ideal points to produce the following prediction.\np I i,m := softmax -b q (z i,m ) τ . (10\n)\nAn important distinction from the work in [28] is that our function does not require a penalty term to penalise the overconfidence of the embeddings. Instead, the temperature τ scaling of the Softmax in Eq. 10 increases the magnitude of the hyperbolic embedding as τ decreases [31]. As a result, the embeddings are prevented from approaching the boundary of the ball as the Softmax is sharpened and certainty increases. In practice, tuning τ for performance whilst ensuring the embeddings do not lie on the boundary -the cause of vanishing gradients -is non-trivial. Instead, we clip the euclidean representations before the exponential mapping as done in 4. To avoid collapsed representations we introduce an entropy term to encourage unique prototype assignment [3], the resulting objective is, \n1 M B B i=1 M m=1 H(p I+ i , p I i,m ) -λH(p I ) + βH(p I i,m ).(11)" }, { "figure_ref": [ "fig_4", "fig_4" ], "heading": "Hyperbolic Projection Head", "publication_ref": [ "b24", "b27", "b52", "b11", "b41", "b46" ], "table_ref": [], "text": "Most notable approaches to hyperbolic self-supervised learning propose solutions that are akin to the reformulation procedure aforementioned [24,27,52]. However, all such methods fail to address that the representations used for downstream tasks remain euclidean. We instead project the representations to hyperbolic space at the output of the encoder trunk before the projection head, g θ (•), rather than at the output of the projection head as done by prior methods (Figure 5). The motivation is that the projection head is present during training and then removed for downstream tasks which is when we intend to most effectively utilise the structure captured in hyperbolic space.\nRecent works [12] which examine the role of the projection head have identified its role to remove the training tasks overfitting bias, which when removed results in representations that are significantly more generalisable to downstream tasks.\nGiven that we propose to project to the hyperbolic space before the projection head, the Multi-Layer Perception (MLP) comprised of three fully-connected layers and their non-linear activation must remain hyperbolic to preserve the embedding structure. To achieve this, we employ a Hyperbolic projection head comprised of three Hyperbolic fully-connected layers, F D θ , define in [41](Figure 5b) followed by a hyperbolic ReLU non-linearity [46] with exception to the final layer. We examine in more detail the effect of the projection head in Section 7.3." }, { "figure_ref": [ "fig_2" ], "heading": "Experimentation", "publication_ref": [ "b2", "b41", "b2", "b27", "b52" ], "table_ref": [ "tab_0", "tab_1", "tab_1" ], "text": "To examine the quality of hyperbolic representations, a number of standardised benchmark tasks are performed. We first evaluate the representations learnt by the ViT encoder on the ImageNet-1K dataset under linear evaluation, followed by few-shot evaluation on ImageNet-1K using only 1% of the labelled training images per class1 as per [3]. Given our representations lie in hyperbolic space we cannot directly compare using Euclidean classifiers, as such, we employ a hyperbolic multi-linear regression classifier with implementation identical to that reported in [41]. For all methods, we pre-train with a batch size of 10242 , producing views identically to [3] with 1 anchor, 1 random mask, and 10 focal mask views. We train a hyperbolic linear classifier on the labelled ImageNet-1K training set on the representations produced by of our frozen pre-trained, self-supervised hyperbolic vision transformer. Table 1 reports the top-1 linear evaluation accuracies (%) of both our proposed method compared against other leading approaches on the ImageNet-1K validation set, the results are the average of 3 randomly initialised runs. The hyperbolic reformulation (HMSN) performs marginally worse that its euclidean baseline albeit with fewer embedding dimensions 64 instead of 256. On the other hand, the Hyperbolic Masked Siamese Networks with Ideal Prototypes (HMSN-IP) performs comparatively to the MSN baseline showing a 0.1% difference with the same reduction in embedding dimensions. Encouraging, a performance drop is not observed in HMSN-IP given the fixed ideal prototypes. We note that training HMSN results in uniformly distributed prototypes akin to the ideal prototypes, albeit positions far closer to the origin restricting the representation space (Figure 2). The ability to learn representations from unlabelled data that are of high enough quality to be used in downstream tasks with very few labelled examples is the key motivator behind self-supervised learning. Moreover, our design decisions to employ hyperbolic representation space to learn hierarchies and as such represent semantic concepts in a more structured manner are driven by the goal of improving few-shot learning. As with the linear evaluation, we pre-train our encoder on the ImageNet-1K dataset, freezing the weights and training a linear classifier on top using a subset of the ImageNet-1K labelled training set. The performance of our self-supervised models by performing linear evaluation on very few labelled examples for each class is reported in Table 2.\nIn the standard low-shot benchmark 1% of the ImageNet-1K labels are employed for linear evaluation (approximately 13 images per class), the results are presented in Table 2 alongside alternative competitive self-supervised methods. Our hyperbolic reformulation (HMSN) outperforms its Euclidean counterpart with a 0.4% performance improvement with the ViT-S/16. The extension, Hyperbolic Masked Siamese Networks with Ideal Prototypes (HMSN-IP) sees a further 1.0% improvement over the hyperbolic reformulation, HMSN. The approaches described in 4 and 5 both make the important distinction from previous works [27,52] regarding the projection head in the training procedure. Typically the projection head is disregarded from the reformulation of euclidean SSL methods into hyperbolic ones, where embeddings are typically hyperbolic and representations remain Euclidean for comparison in downstream tasks (visually depicted in 5). Therefore, hyperbolic properties are lost when utilising the representations in downstream tasks." }, { "figure_ref": [], "heading": "Projection Head", "publication_ref": [ "b2" ], "table_ref": [ "tab_2" ], "text": "Table 3 reports the linear evaluation top-1 accuracy on the ImageNet-1K validation set for a pre-trained ViT-S/16 by HMSN-IP loss for 100 epochs3 with either a Euclidean or Hyperbolic projection head.\nThe downstream linear evaluation top-1 accuracy are given for both a Euclidean linear evaluation procedure as described in [3] and the Hyperbolic linear evaluation procedure (further details given in Supplementary Material). The results clearly demonstrate that when evaluating using a downstream hyperbolic classifier the HMSN-IP with the hyperbolic projection head produces representations that are of a higher hyperbolicity compared to Euclidean counterpart. An important and unique property of the negative curvature hyperbolic space is the exponentially expanding volume with respect to distance from the origin. This results in a representation space that exhibits the volume necessary for separability at far fewer dimensions. To access this, we pre-train ViT-S/16 with baseline MSN and HMSN-IP on ImageNet-1K for 100 epochs under different output dimensions." }, { "figure_ref": [ "fig_5" ], "heading": "Embedding Dimensions", "publication_ref": [ "b11" ], "table_ref": [], "text": "We report in Figure 6 the linear evaluation top-1 accuracy on the ImageNet-1K test of both Euclidean and Hyperbolic classifiers for each given dimension and projection head. We can see the expected increase in performance when the Euclidean projector dimensions are increased (blue and orange bars), this expected result aligns with previous investigations of the projection head [12]. For the Hyperbolic projector and hyperbolic classifier setting (red), we observe steady increase in accuracy until a significant drop-off occurred at 128 dimensions." }, { "figure_ref": [ "fig_5", "fig_1" ], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "This work investigates the hyperbolic self-supervised learning, introducing a hyperbolic extension to the Masked Siamese Network model where we empirically show improved downstream performance in hyperbolic classifiers for linear evaluation, transfer learning, and low-shot learning. We further improve on this by introducing a method that instead uses prototypes placed on the ideal boundary of the Poincaré ball model. We empirically demonstrate that this method improves low-shot downstream task performance over the standard hyperbolic reformulation. Both our proposed methods outperform or perform competitively to their Euclidean counterparts but do so at fewer embedding dimensions (Figure 6) whilst exhibiting clear semantic class hierarchies (Figure 1b).\nLimitations & Broader Impact Our work aims to produce better representations of images in setting where data annotations are scarce, it can therefore be seen how such methods can lead to more accurate or informative models for a number of downstream tasks with positive societal impact. However, as is the case with all vision systems, there is potential for exploitation and security concerns and one should take into consideration AI misuse when extending our method.\nHyperbolic self-supervision can improve the compactness of these representations, therefore providing promising research directions in applications such as data transmission and compression via SSL. Importantly, the improved interpretability due to uncertainty proxy of learned representations by embedding latent tree-like hierarchies leads to exciting new SSL understanding. However, computing in the hyperbolic space introduces challenges regarding matrix and vector operations and as such, there exists implementation and computational difficulties compared to Euclidean approaches. In practice, we do not find these significantly impactful at the presented scale, regardless, future work or extensions should take care in their case." } ]
Hyperbolic manifolds for visual representation learning allow for effective learning of semantic class hierarchies by naturally embedding tree-like structures with low distortion within a low-dimensional representation space. The highly separable semantic class hierarchies produced by hyperbolic learning have shown to be powerful in low-shot tasks, however, their application in self-supervised learning is yet to be explored fully. In this work, we explore the use of hyperbolic representation space for self-supervised representation learning for prototype-based clustering approaches. First, we extend the Masked Siamese Networks to operate on the Poincaré ball model of hyperbolic space, secondly, we place prototypes on the ideal boundary of the Poincaré ball. Unlike previous methods we project to the hyperbolic space at the output of the encoder network and utilise a hyperbolic projection head to ensure that the representations used for downstream tasks remain hyperbolic. Empirically we demonstrate the ability of these methods to perform comparatively to Euclidean methods in lower dimensions for linear evaluation tasks, whilst showing improvements in extreme few-shot learning tasks.
HMSN: Hyperbolic Self-Supervised Learning by Clustering with Ideal Prototypes
[ { "figure_caption": "Figure 1 :1Figure 1: Depiction of the 2D embeddings of the STL-10 validation dataset. The learnt embeddings of our proposed hyperbolic MSN with ideal prototypes. The red points represent the prototypes, the dotted line is the boundary of the Poincaré ball. For (a) natural semantic class clusters form at individual prototypes. Neighbouring prototypes capture similar semantic class sub-features (b), this is observed clearly with fire trucks being separately clustered to trucks (purple points), and large grilled cars (light-red points) being positioned in a similar manner. Seaplanes are positioned alongside boats (blue points) rather than airplanes (dark red) yet lie closer to the origin given there ambiguity.", "figure_data": "", "figure_id": "fig_1", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Distance of prototypes from origin during training. Mean Euclidean norm of Prototypes during training of Euclidean MSN baseline (blue), Hyperbolic MSN (orange), and Hyperbolic MSN with ideal prototypes (grey).", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: 2D visualisation of learnable prototype positioning. During early stages of HMSN training.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Visualisation of the projection head architectures: (a) The Euclidean projection head projects the euclidean embeddings to the Poincaré ball before the computation of the loss. (b) The Hyperbolic projector receiving Poincaré ball hyperbolic representations from the encoder.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Embedding Dimensions. Linear evaluation accuracy on the Imagenet validation set training for Hyperbolic (D) and Euclidean (R) classifiers and projectors.", "figure_data": "", "figure_id": "fig_5", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Linear Classification on ImageNet-1K. Top-1 accuracy for linear models trained on frozen features from different self-supervised methods.", "figure_data": "MethodArch.Params. Epochs Dims. Top-1 (%)SimCLR v2 [22] RN5024M800204871.7BYOL [30]RN5024M1000204874.4Barlow-T [53]RN5024M1000819273.2VICReg [8]RN5024M1000819273.2DINO [18]ViT-S/16 22M800204877.0iBOT [55]ViT-S/16 22M800819277.9MSN [3]ViT-S/16 22M80025676.9HMSN (ours)ViT-S/16 22M8006476.0HMSN-IP (ours) ViT-S/16 22M8006476.8", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Low-shot Linear Evaluation on ImageNet-1K. Top-1 Accuracy for linear models trained on frozen features from different methods, fine-tuning only uses 1% of the labels.", "figure_data": "MethodArch.Params. Dims. Top-1 (%)Barlow-Twins [53] RN5024M819255.0SimCLR v2 [22]RN5024M204857.9PAWS [4]RN5024M204866.5DINO [18]ViT-S/16 22M204864.5iBOT [55]ViT-S/16 22M819265.9MSN [3]ViT-S/16 22M25667.2HMSN (ours)ViT-S/16 22M6467.6HMSN-IP (ours)ViT-S/16 22M6468.77.2 Low-Shot Linear Evaluation", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Hyperbolic", "figure_data": "and Euclidean Projec-tion Heads. Linear evaluation accuracy on theImagenet-1K validation set training for both Hy-perbolic (D) and Euclidean (R) classifiers.Euclidean58.152.0Hyperbolic48.166.2", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" } ]
Aiden Durrant; Georgios Leontidis
[ { "authors": "R Aly; S Acharya; A Ossa; A Köhn; C Biemann; A Panchenko", "journal": "", "ref_id": "b0", "title": "Every child should have parents: a taxonomy refinement algorithm based on hyperbolic term embeddings", "year": "2019" }, { "authors": "Y M Asano; C Rupprecht; A Vedaldi", "journal": "", "ref_id": "b1", "title": "Self-labelling via simultaneous clustering and representation learning", "year": "2019" }, { "authors": "M Assran; M Caron; I Misra; P Bojanowski; F Bordes; P Vincent; A Joulin; M Rabbat; N Ballas", "journal": "", "ref_id": "b2", "title": "Masked siamese networks for label-efficient learning", "year": "2022" }, { "authors": "M Assran; M Caron; I Misra; P Bojanowski; A Joulin; N Ballas; M Rabbat", "journal": "", "ref_id": "b3", "title": "Semisupervised learning of visual features by non-parametrically predicting view assignments with support samples", "year": "2021" }, { "authors": "M Assran; Q Duval; I Misra; P Bojanowski; P Vincent; M Rabbat; Y Lecun; N Ballas", "journal": "", "ref_id": "b4", "title": "Self-supervised learning from images with a joint-embedding predictive architecture", "year": "2023" }, { "authors": "P Bachman; R D Hjelm; W Buchwalter", "journal": "", "ref_id": "b5", "title": "Learning representations by maximizing mutual information across views", "year": "2019" }, { "authors": "G Bachmann; G Bécigneul; O Ganea", "journal": "PMLR", "ref_id": "b6", "title": "Constant curvature graph convolutional networks", "year": "2020" }, { "authors": "A Bardes; J Ponce; Y Lecun", "journal": "", "ref_id": "b7", "title": "Vicreg: Variance-invariance-covariance regularization for self-supervised learning", "year": "2021" }, { "authors": "G Bécigneul; O.-E Ganea", "journal": "", "ref_id": "b8", "title": "Riemannian adaptive optimization methods", "year": "2018" }, { "authors": "Y Bengio; A Courville; P Vincent", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b9", "title": "Representation learning: A review and new perspectives", "year": "2013" }, { "authors": "Y Bi; B Fan; F Wu", "journal": "", "ref_id": "b10", "title": "Beyond mahalanobis metric: cayley-klein metric learning", "year": "2015" }, { "authors": "F Bordes; R Balestriero; Q Garrido; A Bardes; P Vincent", "journal": "", "ref_id": "b11", "title": "Guillotine regularization: Improving deep networks generalization by removing their head", "year": "2022" }, { "authors": "H Busemann", "journal": "Courier Corporation", "ref_id": "b12", "title": "The geometry of geodesics", "year": "2012" }, { "authors": "J W Cannon; W J Floyd; R Kenyon; W R Parry", "journal": "Flavors of geometry", "ref_id": "b13", "title": "Hyperbolic geometry", "year": "1997" }, { "authors": "M Caron; P Bojanowski; A Joulin; M Douze", "journal": "", "ref_id": "b14", "title": "Deep clustering for unsupervised learning of visual features", "year": "2018" }, { "authors": "M Caron; P Bojanowski; J Mairal; A Joulin", "journal": "", "ref_id": "b15", "title": "Unsupervised pre-training of image features on non-curated data", "year": "2019" }, { "authors": "M Caron; I Misra; J Mairal; P Goyal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b16", "title": "Unsupervised learning of visual features by contrasting cluster assignments", "year": "" }, { "authors": " Curran Associates; Inc", "journal": "", "ref_id": "b17", "title": "", "year": "2020" }, { "authors": "M Caron; H Touvron; I Misra; H Jégou; J Mairal; P Bojanowski; A Joulin", "journal": "", "ref_id": "b18", "title": "Emerging properties in self-supervised vision transformers", "year": "2021" }, { "authors": "I Chami; Z Ying; C Ré; J Leskovec", "journal": "Advances in neural information processing systems", "ref_id": "b19", "title": "Hyperbolic graph convolutional neural networks", "year": "2019" }, { "authors": "H Chang; H Zhang; J Barber; A Maschinot; J Lezama; L Jiang; M.-H Yang; K Murphy; W T Freeman; M Rubinstein", "journal": "", "ref_id": "b20", "title": "Muse: Text-to-image generation via masked generative transformers", "year": "2023" }, { "authors": "T Chen; S Kornblith; M Norouzi; G Hinton", "journal": "PMLR", "ref_id": "b21", "title": "A simple framework for contrastive learning of visual representations", "year": "2020" }, { "authors": "X Chen; H Fan; R Girshick; K He", "journal": "", "ref_id": "b22", "title": "Improved baselines with momentum contrastive learning", "year": "2020" }, { "authors": "A Dosovitskiy; L Beyer; A Kolesnikov; D Weissenborn; X Zhai; T Unterthiner; M Dehghani; M Minderer; G Heigold; S Gelly", "journal": "", "ref_id": "b23", "title": "An image is worth 16x16 words: Transformers for image recognition at scale", "year": "2020" }, { "authors": "A Ermolov; L Mirvakhabova; V Khrulkov; N Sebe; I Oseledets", "journal": "", "ref_id": "b24", "title": "Hyperbolic vision transformers: Combining improvements in metric learning", "year": "2022" }, { "authors": "P Fang; M Harandi; L Petersson", "journal": "", "ref_id": "b25", "title": "Kernel methods in hyperbolic spaces", "year": "2021" }, { "authors": "O Ganea; G Bécigneul; T Hofmann", "journal": "Advances in neural information processing systems", "ref_id": "b26", "title": "Hyperbolic neural networks", "year": "2018" }, { "authors": "S Ge; S Mishra; S Kornblith; C.-L Li; D Jacobs", "journal": "", "ref_id": "b27", "title": "Hyperbolic contrastive learning for visual representations beyond objects", "year": "2022" }, { "authors": "M Ghadimi Atigh; M Keller-Ressel; P Mettes", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b28", "title": "Hyperbolic busemann learning with ideal prototypes", "year": "2021" }, { "authors": "M Ghadimiatigh; J Schoep; E Acar; N Van Noord; P Mettes", "journal": "", "ref_id": "b29", "title": "Hyperbolic image segmentation", "year": "2022" }, { "authors": "J.-B Grill; F Strub; F Altché; C Tallec; P Richemond; E Buchatskaya; C Doersch; B Pires; Z Guo; M Azar", "journal": "", "ref_id": "b30", "title": "Bootstrap your own latent: A new approach to self-supervised learning", "year": "2020" }, { "authors": "Y Guo; X Wang; Y Chen; S X Yu", "journal": "", "ref_id": "b31", "title": "Clipped hyperbolic classifiers are super-hyperbolic classifiers", "year": "2022" }, { "authors": "K He; X Chen; S Xie; Y Li; P Dollár; R Girshick", "journal": "", "ref_id": "b32", "title": "Masked autoencoders are scalable vision learners", "year": "2022" }, { "authors": "K He; H Fan; Y Wu; S Xie; R Girshick", "journal": "", "ref_id": "b33", "title": "Momentum contrast for unsupervised visual representation learning", "year": "2020" }, { "authors": "R D Hjelm; A Fedorov; S Lavoie-Marchildon; K Grewal; P Bachman; A Trischler; Y Bengio", "journal": "", "ref_id": "b34", "title": "Learning deep representations by mutual information estimation and maximization", "year": "2018" }, { "authors": "V Khrulkov; L Mirvakhabova; E Ustinova; I Oseledets; V Lempitsky", "journal": "", "ref_id": "b35", "title": "Hyperbolic image embeddings", "year": "2020" }, { "authors": "Q Liu; M Nickel; D Kiela", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b36", "title": "Hyperbolic graph neural networks", "year": "2019" }, { "authors": "I Misra; L V D Maaten", "journal": "", "ref_id": "b37", "title": "Self-supervised learning of pretext-invariant representations", "year": "2020" }, { "authors": "A V D Oord; Y Li; O Vinyals", "journal": "", "ref_id": "b38", "title": "Representation learning with contrastive predictive coding", "year": "2018" }, { "authors": "P H Richemond; J.-B Grill; F Altché; C Tallec; F Strub; A Brock; S Smith; S De; R Pascanu; B Piot", "journal": "", "ref_id": "b39", "title": "Byol works even without batch statistics", "year": "2020" }, { "authors": "R Sarkar", "journal": "Springer", "ref_id": "b40", "title": "Low distortion delaunay embedding of trees in hyperbolic plane", "year": "2011" }, { "authors": "R Shimizu; Y Mukuta; T Harada", "journal": "", "ref_id": "b41", "title": "Hyperbolic neural networks++", "year": "2020" }, { "authors": "Y Tian; D Krishnan; P Isola", "journal": "Springer", "ref_id": "b42", "title": "Contrastive multiview coding", "year": "2020" }, { "authors": "A Tifrea; G Bécigneul; O.-E Ganea", "journal": "", "ref_id": "b43", "title": "Poincar\\'e glove: Hyperbolic word embeddings", "year": "2018" }, { "authors": "N Tomasev; I Bica; B Mcwilliams; L Buesing; R Pascanu; C Blundell; J Mitrovic", "journal": "", "ref_id": "b44", "title": "Pushing the limits of self-supervised resnets: Can we outperform supervised learning without labels on imagenet?", "year": "2022" }, { "authors": "A A Ungar", "journal": "Synthesis Lectures on Mathematics and Statistics", "ref_id": "b45", "title": "A gyrovector space approach to hyperbolic geometry", "year": "2008" }, { "authors": "M Van Spengler; E Berkhout; P Mettes", "journal": "", "ref_id": "b46", "title": "Poincar\\'e resnet", "year": "2023" }, { "authors": "Z Wu; Y Xiong; S X Yu; D Lin", "journal": "", "ref_id": "b47", "title": "Unsupervised feature learning via non-parametric instance discrimination", "year": "2018" }, { "authors": "Z Xie; Z Zhang; Y Cao; Y Lin; J Bao; Z Yao; Q Dai; H Hu", "journal": "", "ref_id": "b48", "title": "Simmim: A simple framework for masked image modeling", "year": "2022" }, { "authors": "Y Xu; L Mu; Z Ji; X Liu; J Han", "journal": "Neurocomputing", "ref_id": "b49", "title": "Meta hyperbolic networks for zero-shot learning", "year": "2022" }, { "authors": "J Yan; L Luo; C Deng; H Huang", "journal": "", "ref_id": "b50", "title": "Unsupervised hyperbolic metric learning", "year": "2021" }, { "authors": "X Yan; I Misra; A Gupta; D Ghadiyaram; D Mahajan", "journal": "", "ref_id": "b51", "title": "Clusterfit: Improving generalization of visual representations", "year": "2020" }, { "authors": "Y Yue; F Lin; K D Yamada; Z Zhang", "journal": "", "ref_id": "b52", "title": "Hyperbolic contrastive learning", "year": "2023" }, { "authors": "J Zbontar; L Jing; I Misra; Y Lecun; S Deny", "journal": "", "ref_id": "b53", "title": "Barlow twins: Self-supervised learning via redundancy reduction", "year": "2021" }, { "authors": "Y Zhang; X Wang; C Shi; X Jiang; Y F Ye", "journal": "IEEE Transactions on Big Data", "ref_id": "b54", "title": "Hyperbolic graph attention network", "year": "2021" }, { "authors": "J Zhou; C Wei; H Wang; W Shen; C Xie; A Yuille; T Kong", "journal": "", "ref_id": "b55", "title": "ibot: Image bert pre-training with online tokenizer", "year": "2021" }, { "authors": "C Zhuang; A L Zhai; D Yamins", "journal": "", "ref_id": "b56", "title": "Local aggregation for unsupervised learning of visual embeddings", "year": "2019" } ]
[ { "formula_coordinates": [ 4, 108, 73.58, 396, 62.02 ], "formula_id": "formula_0", "formula_text": "(D d c , g Dc ) is defined by the manifold D d c = {x ∈ R d : c x 2 < 1} with the Riemannian metric g Dc = (λ c x ) 2 g E = 2 1 -c x 2 2 I d(1)" }, { "formula_coordinates": [ 4, 212.99, 298.97, 291.01, 23.93 ], "formula_id": "formula_1", "formula_text": "exp c v (x) = v⊕ c tanh √ c λ c v x 2 x √ c x(2)" }, { "formula_coordinates": [ 4, 196.53, 347.23, 307.47, 25.16 ], "formula_id": "formula_2", "formula_text": "log c v (x) = 2 √ cλ c v arctanh √ c -v⊕ c x -v⊕ c x -v⊕ c x .(3)" }, { "formula_coordinates": [ 4, 188.67, 410.18, 311.46, 42.49 ], "formula_id": "formula_3", "formula_text": "∈ D d c v⊕ c w = (1 + 2c v, w + c w 2 )v + (1 -c v 2 )w 1 + 2c v, w + c 2 v 2 w 2 . (4" }, { "formula_coordinates": [ 4, 500.13, 437.42, 3.87, 8.64 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 4, 217.1, 499.03, 286.9, 23.28 ], "formula_id": "formula_5", "formula_text": "dist D (x, y) = 2 √ c arctanh √ c -x ⊕ c y ,(5)" }, { "formula_coordinates": [ 5, 247.79, 194.62, 256.21, 22.38 ], "formula_id": "formula_6", "formula_text": "p i,m := softmax z i,m • q τ .(6)" }, { "formula_coordinates": [ 5, 233.08, 356.04, 270.92, 30.32 ], "formula_id": "formula_7", "formula_text": "1 M B B i=1 M m=1 H(p + i , p i,m ) -λH(p)(7)" }, { "formula_coordinates": [ 5, 233.95, 607.91, 270.05, 22.38 ], "formula_id": "formula_8", "formula_text": "p D i,m := softmax dist D (z i,m , q) τ .(8)" }, { "formula_coordinates": [ 6, 245.2, 474.72, 258.8, 24.8 ], "formula_id": "formula_9", "formula_text": "b q (z i,m ) = log q -z i,m 2 (1 -z i,m 2 )(9)" }, { "formula_coordinates": [ 6, 240.26, 562.51, 259.59, 22.31 ], "formula_id": "formula_10", "formula_text": "p I i,m := softmax -b q (z i,m ) τ . (10" }, { "formula_coordinates": [ 6, 499.85, 569.57, 4.15, 8.64 ], "formula_id": "formula_11", "formula_text": ")" }, { "formula_coordinates": [ 6, 202.27, 703.14, 301.73, 30.32 ], "formula_id": "formula_12", "formula_text": "1 M B B i=1 M m=1 H(p I+ i , p I i,m ) -λH(p I ) + βH(p I i,m ).(11)" } ]
10.18653/v1/2022.dialdoc-1.17
2023-11-05
[ { "figure_ref": [ "fig_0" ], "heading": "Introduction", "publication_ref": [ "b35", "b19", "b14", "b13", "b32", "b30", "b1", "b49", "b36", "b33" ], "table_ref": [], "text": "Goal-oriented dialogue, focusing on assisting users in achieving their goals through natural language interactions, has made significant progress in recent years (Peng et al., 2021;He et al., 2022). Nonetheless, these systems frequently encounter constraints in providing information that extends beyond what can be obtained from particular databases or domains. To address this issue, researchers have proposed the goal-oriented Document-Grounded Dialogue (DocGD) task (Feng et al., 2020(Feng et al., , 2021)), which leverages documents as the external knowledge source to support dialogue systems in meeting the diverse information needs of users.\nRecently, task-specific pre-training has shown an extraordinary ability to boost performances 𝑑 𝑑 𝑒 𝑟 𝑐 on downstream tasks, by mitigating the gaps between pre-training and fine-tuning due to different data distributions and training objectives (Mengge et al., 2020;Liu et al., 2021;Bai et al., 2022). Nonetheless, the study of pre-training for DocGD is hindered due to the difficulty of modeling the causal effects, which is a prominent characteristic of DocGD. As shown in Figure 1, the task requires the model to identify evidence in a document based on the dialogue context, and then utilize the grounding evidence to generate a corresponding response. This process involves the interplay of four variables that are causally connected. To attain precise modeling of causal effects during pre-training, two challenges must be overcome: (1) the scarcity of large-scale and causally-complete DocGD datasets for pre-training, as opposed to dialogue generation tasks that have the advantage of utilizing conversational data from various social media sources (Zhang et al., 2020), (2) the traditional likelihood objective (e.g., Raffel et al. (2020)) being insufficient to capture the causal relationships among variables.\nFor the first challenge, we propose a novel strategy for building a DocGD pre-training corpus. We define a dataset as causally-complete if it includes all the variables related to a task and encompasses all reasonable causal relationships among these variables. Our strategy involves two steps. Firstly, we transform Wikipedia documents into dialogues by generating pseudo-user utterances and modifying evidence in the documents to serve as agent responses. Secondly, we extract grounding documents embedded in URLs and insert virtual evidence to supplement dialogues from Reddit. Both steps guarantee that the datasets are causallycomplete, and they complement one another, as the former has authentic evidence with synthetic dialogues, while the latter possesses authentic dialogues with synthetic evidence.\nTo tackle the second challenge, we propose a causally-perturbed pre-training strategy that enhances the modeling of causality in our pre-training datasets. Our approach entails introducing causal interventions to both the document and evidence variables while optimizing the total effect of responses for different causes. The total effect comprises the natural direct effect (NDE) and total indirect effect (TIE) (Niu et al., 2021). In essence, the NDE quantifies the impact of irrelevant sentences in the supporting document, while the TIE captures the influence of evidence (detailed explanations in §3.3). Our objective is twofold: to enhance the model's resilience to perturbations in irrelevant sentences by minimizing the NDE, and to promote reliance on evidence in generating dialogue responses by maximizing the TIE. To achieve this, we retain relevant evidence while perturbing the remaining parts of the document to improve response consistency when using two versions, thus reducing the NDE. Additionally, we eliminate evidence from the document while preserving other information, subsequently decreasing the likelihood of generating original responses, thus maximizing the TIE.\nOverall, we refer to the two aforementioned strategies jointly as Causal Document-Grounded Dialogue (CausalDD). We thoroughly conduct experiments and analyses on three DocGD benchmark datasets. Our results, obtained through fullysupervised, few-shot, low-resource, and zero-shot scenarios, and evaluated by both automatic and human assessment, convincingly demonstrate the effectiveness of our pre-training corpus construction and causally-perturbed pre-training strategy. Especially, CausalDD even outperforms GPT-3.5." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Causal Inference For NLP", "publication_ref": [ "b18", "b22", "b12" ], "table_ref": [], "text": "Causal Inference is a statistical modeling tool that has been applied in explanatory analysis to better understand the relationships between variables (Glymour et al., 2016;Kuang et al., 2020;Feder et al., 2022). In the context of named entity recognition, Zeng et al.2020 sought to eliminate spurious correlations between context and entity tokens by replacing entities with counterfactual tokens. Wu et al.2020 similarly utilized counterfactual samples in sentiment classification, replacing causal terms with their antonyms. Our research endeavors to explore DocGD from a causal perspective, presenting the causal relationships among DocGD variables for the first time." }, { "figure_ref": [], "heading": "Document-Grounded Dialogue", "publication_ref": [ "b31", "b45", "b48", "b37", "b5", "b11", "b14", "b13", "b15", "b27", "b29", "b16", "b48" ], "table_ref": [], "text": "Goal-oriented dialogue generation grounded in documents is a challenging and realistic task (Ma et al., 2020;Yu et al., 2022;Zhang et al., 2023). Researchers have increasingly utilized documents in a more flexible manner to improve the fluency and informativeness of model-generated responses, including in tasks such as Machine Reading Comprehension, Convention Question Answering, and the focus of this paper, DocGD. To support the development of models for these tasks, various datasets have been proposed, including CoQA (Reddy et al., 2019), QuAC (Choi et al., 2018), DoQA (Campos et al., 2020), Wizard (Dinan et al., 2018), Doc2dial (Feng et al., 2020), MultiDoc2Dial (Feng et al., 2021) and Doc2bot (Fu et al., 2022).\nHowever, the high annotation requirements for document-grounded dialogues have limited the scale of available annotated data. To address this issue, Li et al. (2020) express the document knowledge as latent variables and devise a variational approach to achieve zero-resource knowledgegrounded dialogue generation. Li et al. (2021) homogenize different sources of knowledge (e.g., dictionaries, or knowledge graphs) into a unified representation to alleviate reliance on a single source. Gao et al. (2022) develop a prompt-connected multi-task learning to unify DocGD tasks. Zhang et al. (2023) propose coarse-to-fine knowledge selection to improve knowledge retrieval among Exercise is good for both you and your pet, Varble said. That's easily achieved with a dog: Nearly every dog would benefit from at least two walks a day, or a good chase after a ball or Frisbee, she said. By throwing a paper roll, you can give your cat an opportunity to chase and play. Interactive toys don't have to be expensive…" }, { "figure_ref": [], "heading": "Reddit WikiDialog", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Inpainter", "publication_ref": [], "table_ref": [], "text": "In parental-supervised diets, students also usually ingest the proper propotion… This is because when students go off to college, they face an independence … More than 60 percent of college students routinely eat fatty foods instead of fruits and vegetables, research shows.\nHow does the freshman 15 relate to eating habits?\nWhat is the cause of this? Do people tent to eat healthier or less healthy when whey are away from home?" }, { "figure_ref": [], "heading": "Article: Freshman 15", "publication_ref": [ "b36" ], "table_ref": [], "text": "In parental-supervised diets, students also usually ingest the proper proportion of foods from the different dietary groups; once removed from the parental dinner table, many college students... This is because when students go off to college, they facd an independence that they usually have not experienced before. Research has shown that over 60 percent of college students commonly ingest fatty over fruits and vegetables. multiple documents. These approaches omit pretraining on DocGD and merely initialize the parameters with general language models such as T5 (Raffel et al., 2020). Thus, how to effectively pre-train the DocGD model is still an open problem.\nIn this paper, we give the answer from the perspective of causal inference and demonstrate that causal pre-training is effective in various DocGD settings." }, { "figure_ref": [], "heading": "Methodology", "publication_ref": [], "table_ref": [], "text": "In this section, we first take a casual-effect perspective on the DocGD task in §3.1. We then propose two strategies to overcome the challenges discussed in §1: (1) a dataset construction strategy in §3.2 for building a causally-complete pre-training corpus;\n(2) a causally-perturbed pre-training strategy in §3.3 for better modeling causality of DocGD." }, { "figure_ref": [ "fig_0" ], "heading": "A Causal-Effect Perspective on DocGD", "publication_ref": [ "b14", "b16" ], "table_ref": [], "text": "The DocGD task is commonly formulated as a sequential process comprising two sub-tasks: knowledge grounding and response generation (Feng et al., 2020;Gao et al., 2022). In knowledge grounding, a text segment denoted as e, is identified within the supporting document d, based on the dialogue context c. This segment serves as the evidence for generating the subsequent response r.\nConsequently, four variables c, d, e, r are causally interrelated: The causal paths d → r and c → r directly influence the response, while an indirect effect occurs through the intermediary variable e. See Fig. 1 for an example causal graph." }, { "figure_ref": [], "heading": "Causally-complete Dataset Construction", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "While task-specific pre-training has been extensively researched in various domains and proven effective, there is a lack of research on pre-training for DocGD. The challenge lies in constructing a pre-training DocGD corpus that captures causal relationships among relevant variables. Merging a dialog corpus with unverified documents without careful consideration of causality can result in missing variables or weak causal connections. As a consequence, models may learn spurious features, such as generating responses solely based on the dialogue context c while ignoring the document d. Our analysis ( §4.7) demonstrates that this can significantly degrade performance.\nTo overcome this challenge, we propose a pretraining data construction strategy that utilizes high-quality documents from Wikipedia to create a causally-complete dataset with virtual dialogues. We further complement the corpus by leveraging real-world dialogues from Reddit to construct another dataset with virtual external knowledge. (Refer to data statistics in Table 1)." }, { "figure_ref": [], "heading": "Causally-complete WikiDialog", "publication_ref": [], "table_ref": [], "text": "Wikipedia offers a wealth of excellent articles, often authored or edited by experts who have invested considerable time and effort into ensuring clarity, accuracy, and addressing readers' queries. A distinctive feature of Wikipedia is that each page is dedicated to describing a specific entity. Therefore, when a user inquires about one entity from an agent, the corresponding page can be considered as the source of information for the agent's response.\nWe utilize this property to convert Wikipedia into two-person DocGD. Given a page, denoted as d = (e 1 , e 2 , . . . , e m ), consisting of m sentences, each sentence is treated as evidence e, representing an agent's response in an m-round di-\nDialog Context c Response r < l a t e x i t s h a 1 _ b a s e 6 4 = \" N T E w Q 5 v P + V a a y i Y A E i G I h c x k x 0 w = \" > A A A C A X i c b V D L S s N A F J 3 4 r P U V d S O 4 C R b B V U m k q M v i A 1 y I V L A P a E K Y T C f t 0 M m D m R u x h L j x V 9 y 4 U M S t f + H O v 3 H S Z q G t B y 4 c z r m X e + / x Y s 4 k m O a 3 N j e / s L i 0 X F o p r 6 6 t b 2 z q W 9 s t G S W C 0 C a J e C Q 6 H p a U s 5 A 2 g Q G n n V h Q H H i c t r 3 h e e 6 3 7 6 m Q L A r v Y B R T J 8 D 9 k P m M Y F C S q + / a A Y Y B w T y 9 z t z U B v o A 6 c 3 F Z Z a 5 e s W s m m M Y s 8 Q q S A U V a L j 6 l 9 2 L S B L Q E A j H U n Y t M w Y n x Q I Y 4 T Q r 2 4 m k M S Z D 3 K d d R U M c U O m k 4 w 8 y 4 0 A p P c O P h K o Q j L H 6 e y L F g Z S j w F O d + b 1 y 2 s v F / 7 x u A v 6 p k 7 I w T o C G Z L L I T 7 g B k Z H H Y f S Y o A T 4 S B F M B F O 3 G m S A B S a g Q i u r E K z p l 2 d J 6 6 h q H V d r t 7 V K / a y I o 4 T 2 0 D 4 6 R B Y 6 Q X V 0 h R q o i Q h 6 R M / o F b 1 p T 9 q L 9 q 5 9 T F r n t G J m B / 2 B 9 v k D O G m X Z Q = = < / l a t e x i t > L NDE < l a t e x i t s h a 1 _ b a s e 6 4 = \" U B u M d Q w 3 O I x E S / N Z k M + r Z u o r g 5 w = \" > A A A C A X i c b V D L S s N A F J 3 4 r P U V d S O 4 C R b B V U m k q M u i C A o u K v Q F T Q i T 6 a Q d O n k w c y O W E D f + i h s X i r j 1 L 9 z 5 N 0 7 a L L T 1 w I X D O f d y 7 z 1 e z J k E 0 / z W F h a X l l d W S 2 v l 9 Y 3 N r W 1 9 Z 7 c t o 0 Q Q 2 i I R j 0 T X w 5 J y F t I W M O C 0 G w u K A 4 / T j j e 6 z P 3 O P R W S R W E T x j F 1 A j w I m c 8 I B i W 5 + r 4 d Y B g S z N P b z E 1 t o A + Q N m + u s s z V K 2 b V n M C Y J 1 Z B K q h A w 9 W / 7 H 5 E k o C G Q D i W s m e Z M T g p F s A I p 1 n Z T i S N M R n h A e 0 p G u K A S i e d f J A Z R 0 r p G 3 4 k V I V g T N T f E y k O p B w H n u r M 7 5 W z X i 7 + 5 / U S 8 M + d l I V x A j Q k 0 0 V + w g 2 I j D w O o 8 8 E J c D H i m A i m L r V I E M s M A E V W l m F Y M 2 + P E /\na J 1 X r t F q 7 q 1 X q F 0 U c J X S A D t E x s t A Z q q N r 1 E A t R N A j e k a v 6 E 1 7 0 l 6 0 d + 1 j 2 r q g F T N 7 6 A + 0 z x 9 J P J d w < / l a t e x i t > L TIE < l a t e x i t s h a 1 _ b a s e 6 4 = \" P F a alogue. To complete the dialogue, we employ a dialogue inpainter (Dai et al., 2022a) (see A.1) to generate missing user utterances, which are interleaved with the agent's responses. The inpainter generates a pseudo session s inpainter = (u 1 , e 1 , u 2 , e 2 , ..., u m , e m ), where u i is the i-th generated utterance for the user, and e i is used as the agent's response. In our causally-complete WikiDialog, we first copy the first two turns from the inpainter. For the third turn, we randomly select one turn from the remaining turns as the utterance and grounding knowledge. To enhance naturalness, we employ a well-trained paraphrase model (see §A.2) to rewrite the evidence e 3 into the agent's response r 3 , creating the dialogue sequence s = (u 1 , e 1 , u 2 , e 2 , u 3 , r 3 ). The dialogue context c = (u 1 , e 1 , u 2 , e 2 , u 3 ), while the response r 3 is a paraphrased rendition of the evidence e 3 . Remark. Why is the above construction causally complete? First, the evidence e is an exact sentence in the document d, and the dialogue context c is generated by the dialogue inpainter based on e, so the evidence e can be uniquely determined in d by c, that is, e is the effect of {c, d}. Considering response r is a paraphrase of e, so e is the direct cause of r, and the causal paths c → r and d → r can also be implicitly established.\nu B M v / N W b M r v i 6 e c I 0 0 2 o V X G g = \" > A A A C A 3 i c b V D L S s N A F J 3 4 r P U V d a e b Y B F c l U S K u i x a 0 I W L C v Y B T Q i T 6 a Q d O n k w c y O W E H D j r 7 h x o Y h b f 8 K d f + O k z U J b D 1 w 4 n H M v 9 9 7 j x Z x J M M 1 v b W F x a X l l t b R W X t / Y 3 N r W d 3 b b M k o E o S 0 S 8 U h 0 P S w p Z y F t A Q N O u 7 G g O P A 4 7 X i j y 9 z v 3 F M h W R T e w T i m T o A H I f M Z w a A k V 9 + 3 A w x D g n l 6 k 7 m p D f Q B 0 k Z E r h p Z 5 u o V s 2 p O Y M w T q y A V V K D p 6 l 9 2 P y J J Q E M g H E v Z s 8 w Y n B Q L Y I T T r G w n k s a Y j P C A 9 h Q N c U C l k 0 5 + y I w j p f Q N P x K q Q j A m 6 u + J F A d S j g N P d e Y X y 1 k v F / / z e g n 4 5 0 7 K w j g B G p L p I j / h B k R G H o j R Z 4 I S 4 G N F M B F M 3 W q Q I R a Y g I q t r E K w Z l + e J + 2 T q n V a r d 3 W K v W L I o 4 S O k C H 6 B h Z 6 A z V 0 T V q o h Y i 6 B E 9 o 1 f 0 p j 1 p L 9 q 7 9 j F t X d C K m T 3 0 B 9 r n D 8 k C m E M = < / l a t e x i t > L DocGD 1. 加上数学符号 2. Caption" }, { "figure_ref": [], "heading": "Causally-complete Reddit", "publication_ref": [], "table_ref": [], "text": "Despite having a causally-complete dataset like WikiDialog, the generated virtual dialogues may not fully align with the distribution of real-world human conversations. To address this issue, we propose supplementing the pre-training corpus with diverse and realistic conversations. We consider Reddit, a popular online platform for person-toperson discussions, as a valuable source of dialogue context (c) and response (r), but lacking the document (d) and evidence (e) for DocGD. We observe that many submissions on Reddit contain URLs. These URLs often lead to web pages such as news articles that provide specific information related to the discussed topics. Therefore, we can crawl Reddit submissions that include URLs, using the content pointed to by the URL as the document (d), while utilizing the conversations and replies under the submission as c and r respectively.\nRemark. Why is the above construction causally complete? First, the evidence e is a paraphrase of response r, so a causal relationship can be established between r and e. Given that r is a natural response to the dialogue context c on Reddit, causal paths c → r and c → e exist. Furthermore, through the random insertion of evidence e into the document d, d can thus become the cause of e, and then the cause path d → r could also be established." }, { "figure_ref": [], "heading": "Causal Pre-Training Framework", "publication_ref": [ "b16" ], "table_ref": [], "text": "DocGD-specific Pre-Training To better capture the interdependence between evidence e and response r, as described by Gao et al. (2022), we adopt the most efficient and straightforward finetuning method to sequentially generate the e and r based on the dialogue context c and associated document d, instead of retrieving evidence and feeding it to the model to generate responses in a pipeline manner. We align our pre-training task with finetuning by optimizing the following objective:\nL DocGD = - (d,c,e,r)∈C log(p θ (e; r|d; c)) (1)\nwhere C is the constructed causally-complete corpora in §3.2, θ is optimized during pre-training." }, { "figure_ref": [], "heading": "Causally-perturbed Pre-Training", "publication_ref": [ "b39", "b38", "b33", "b16" ], "table_ref": [], "text": "To facilitate the causal modeling of DocGD, we propose a causally-perturbed pre-training strategy by introducing causal perturbations to variables of DocGD and evaluating the outcomes under different causes.\nHere, we utilize a common measurement, causal effect, to compare two potential outcomes for the same variable under two distinct treatment conditions (Rubin, 1978;Robins, 1986). Supposed that the random variable X assigned with the observed value x, X = x, represents \"under no-treatment condition\" and X = x * represents \"under treatment condition\" (Niu et al., 2021). The total effect (TE) of treatment X = x on the variable Y compares two hypothetical situations X = x and X = x * , which is denoted as: TE = Y x * -Y x . In DocGD, we aim to estimate the effect of the document d on the identification of evidence e and the generation of response r. We denote X = x to represent the original document d (under no-treatment condition), and X = x * to represent applying perturbations to the document (under treatment condition). We use Y to denote the generated sequence of evidence and response. More precisely, we further divide the document d into two parts, the sentence e where the evidence span lies and the other sentences {d\\e} outside the evidence scope, i.e., d = e ∪ {d\\e}. Hence, the total effect in DocGD can be written as:\nTE = Y {d\\e} * ,e * -Y {d\\e},e(2)\nWe adopt the decomposition in Niu et al. ( 2021) and adapt it to our causal DocGD scenario. Concretely, TE can be decomposed into the sum of the natural direct effect (NDE) and total indirect effect (TIE). NDE expresses the increase in the outcome Y when {d\\e} changes to {d\\e} * :\nNDE = Y {d\\e} * ,e -Y {d\\e},e(3)\nTIE is the difference between TE and NDE: Total Pretraining Objective Overall, our pretraining objective is the sum of standard DocGD loss and our newly proposed causally-perturbed losses as follows:\nTIE = Y {d\\e} * ,e * -Y {d\\e} * ,e(4)\nL = L DocGD + L NDE + L TIE\n(7) After pre-training, we fine-tune the obtained model on downstream datasets by optimizing L DocGD in Eq. 1 following Gao et al. (2022)." }, { "figure_ref": [], "heading": "Experiments", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Datasets", "publication_ref": [ "b14", "b13", "b14", "b13", "b41", "b15", "b15" ], "table_ref": [], "text": "We evaluate the effectiveness of our CausalDD for DocGD in both English and Chinese. For English, we use the two causally-complete datasets constructed in §3.2 for pre-training and evaluate the performances on two goal-oriented documentgrounded dialogue datasets: Doc2dial (Feng et al., 2020) and MultiDoc2dial (Feng et al., 2021). Doc2dial (Feng et al., 2020) contains 3,474 dialogues with 44,149 turns for training and 661 dialogues with 8,539 turns for evaluation2 . Multi-Doc2dial (Feng et al., 2021) contains 4,796 dialogues with an average of 14 turns grounded in 488 documents, with multiple documents supporting each dialogue in four different domains.\nFor Chinese, we utilize a translation model (Wei et al., 2022) to translate the English pre-training data into Chinese, and evaluate the performance on a Chinese DocGD dataset Doc2bot (Fu et al., 2022). Doc2bot (Fu et al., 2022) has samples of Chinese conversations that are natural, coherent, and grounded in diverse documents. Although the translation model may impact the quality of Chinese pre-training data, we have observed significant improvements in our approach across three Chinese pre-trained backbones. We leave on constructing better Chinese pre-training data for future work. " }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b36", "b50", "b44", "b40" ], "table_ref": [], "text": "For pre-training, we use the pre-trained T5-base and more powerful T5-large (Raffel et al., 2020) to initialize CausalDD for English DocGD. For Chinese DocGD, we try three types of initialization: T5-Mengzi (Zhang et al., 2021), mT5 (Xue et al., 2021), and T5-Randeng (Wang et al., 2022).\nCausalDD is pre-trained on 4 80G NVIDIA A100 GPUs with a maximum learning rate of 1e-5 and a warm-up ratio of 0.1 for one epoch. The batch size per iteration is set to 8, and we use the AdamW optimizer with parameters beta1 = 0.9, beta2 = 0.98, and epsilon = 1e-6. " }, { "figure_ref": [], "heading": "Baselines", "publication_ref": [ "b16", "b36" ], "table_ref": [], "text": "We compare CausalDD with several strong baselines, including UniGDD (Gao et al., 2022) and many commonly measured methods specific to each dataset. UniGDD utilizes the pre-trained T5 model (Raffel et al., 2020) as the initialization and optimizes L DocGD in Eq. 1 on downstream datasets, which also serves as our most relevant baseline. Furthermore, we meticulously design the instruction to assess the performance of GPT-3.5. See more baselines and details in Appendix B.2." }, { "figure_ref": [], "heading": "Metrics", "publication_ref": [ "b16", "b14", "b13", "b34" ], "table_ref": [], "text": "In concurrence with prevalent measurements (Gao et al., 2022;Feng et al., 2020Feng et al., , 2021)), we utilize the metrics of Exact Match (EM) and token-level F1 for the identification of evidence and BLEU (Papineni et al., 2002) for the generation of responses." }, { "figure_ref": [], "heading": "Main Results", "publication_ref": [ "b25" ], "table_ref": [ "tab_1", "tab_2", "tab_7" ], "text": "Fully-Supervised As shown in Tables 2,3, 8, and 4, our proposed CausalDD method outperforms the baseline model on all evaluation metrics in both English and Chinese datasets. The improvements of CausalDD over other baselines are statistically significant with p-value < 0.05 under t-test. Regardless of the language or different T5 parameter initialization, CausalDD consistently surpasses the strongest baseline UniGDD.\nInitialized with a larger T5, CausalDD large further enhances the performance compared to CausalDD, albeit with a slight decrease in BLEU score. To investigate this phenomenon, we compute the distinct scores with Dist-n following Li et al. (2015) to evaluate generated responses on the Doc2Dial dataset, shown in Table 7. We discover that the large model tends to generate more diverse responses, which may differ from the expressions of manually annotated answers. Furthermore, we observe that the popular large language model GPT-3.5 performs poorly on the existing DocGD datasets, despite including task instructions and cases in its prompt. Upon analyzing GPT-3.5's predictions, we find that its underperformance stems from a failure to adhere strictly to the given document's content for generating responses and providing evidence. Instead, it suffers from a severe hallucination issue, generating irrelevant content. We present the results of our human evaluation in Section 4.9.\nWe also observe that the model's improvements are more significant on Doc2bot in Table 4. We speculate that it's because the training data of Doc2bot is smaller than that of Doc2dial and Mul-tiDoc2dial, and our model obtains better initialization for DocGD through causal pre-training, thus being more data-efficient for fine-tuning." }, { "figure_ref": [], "heading": "Few-Shot & Low-Resource", "publication_ref": [], "table_ref": [ "tab_5", "tab_10" ], "text": "To verify the above speculation, we further conduct experiments on three datasets under few-shot and low-resource settings. We consider few-shot settings with only 5, 50, and 100 training examples and low-resource settings with 1%, 5%, and 10% of the original training datasets to train the models. The results are shown in Table 5, 13 and 6 for knowledge identification and response generation tasks, respectively (see more results in Appendix B.3). We can notice that the improvements of CausalDD are more significant in scenarios with such a small amount of training data. Specifically, an average improve- ment of 9.7 points is achieved in these settings, indicating that our method is more effective with limited human-annotated data, which is particularly important for real-world applications.\nZero-Shot We evaluate the performances on three datasets under the zero-shot setting. Results in Table 9 indicate that CausalDD achieves superior performances over UniGDD without any training samples of downstream tasks. This verifies the high quality of our constructed causally-complete corpora, and the good initialization provided by CausalDD for downstream fine-tuning." }, { "figure_ref": [], "heading": "Ablation Studies", "publication_ref": [], "table_ref": [ "tab_11" ], "text": "To evaluate the contribution of each component in CausalDD, we conduct a series of ablation studies on two language datasets: Doc2dial and Dot2Bot. The baseline for these studies is pretraining CausalDD exclusively on the causallycomplete WikiDialog dataset. We then assess the impact of adding additional components, including (1) supplementing WikiDialog with our constructed Reddit dataset (+Reddit), (2) minimizing the NDE loss in Eq. 5 during pre-training (+NDE), and (3) maximizing the TIE loss in Eq. 6 (+TIE).\nResults in Table 10 indicate that: (1) introducing a causally-complete Reddit containing real-world dialogues enhance the ability of the model to identify knowledge and generate better responses; (2) optimizing NDE to enhance the consistency of the model outputs with different support documents can enhance the robustness of the model; (3) optimizing TIE to prevent the normal output of the model when removing evidence from documents increases the model's reliance on the grounding evidence. These results validate that each component has a positive effect on CausalDD, leading to its better capability of modeling causal relationships among DocGD variables. To assess the effectiveness of our created complementary datasets, we also carry out a case study that compares the responses of CausalDD trained with various pre-training data (Appendix B.4)." }, { "figure_ref": [], "heading": "Effects of Causally-complete Data", "publication_ref": [], "table_ref": [ "tab_12" ], "text": "The creation of causally-complete pre-training data is one of the contributions of this paper. But (1) is causally-complete data really necessary for DocGD pre-training? (2) what problems would arise if part of the causality in the pre-training data was missing? To address these two questions, we build a causally-incomplete pre-training dataset by removing the introduced evidence e from the previously-built Reddit dataset (i.e, the document d = {d\\e}). Then pre-training task is to generate responses r based solely on documents and dialogue context c, without identifying knowledge first. We also pre-train a model using a causally-complete Reddit dataset for comparison. The results of Table 11 indicate that performance degrades when pre-training data cannot adequately model causal connections. The comparison with UniGDD (i.e., initialized with the original pretrained T5) demonstrates that causally-incomplete pre-training introduces bias, resulting in a discrepancy between pre-training and fine-tuning." }, { "figure_ref": [ "fig_6", "fig_6", "fig_6" ], "heading": "Other Benefits of Causal Pre-training", "publication_ref": [], "table_ref": [], "text": "In addition to overall performance improvement, we also observe some additional benefits brought by causal pre-training: (1) faster convergence speed (Figure 4(a)), our model achieves good results in the first epoch due to having more DocDGspecific initialization parameters compared to general pre-training; (2) better modeling of dialog history (Figure 4(b)), we find better performance across all turns when we divided the Doc2bot test set based on the number of turns in the dialog history; (3) a better ability to ground complex evidence in Doc2bot, many samples require the model to ground multiple relevant segments in the document, and as the number of relevant evidence increases, CausalDD still shows better performance compared to UniGDD (Figure 4(c))." }, { "figure_ref": [], "heading": "Human Evaluation", "publication_ref": [], "table_ref": [ "tab_13" ], "text": "To evaluate the performance of CausalDD against strong baselines, we randomly select 100 evaluation instances in Doc2Dial and request five human annotators to perform pairwise comparisons on two factors: (1) Relevance: indicating which is more pertinent and relevant to the user's inquiry, and (2) Informativeness: determining which answer is more informative. See Table 12 Our method exhibits an apparent edge over UniGDD in two aspects when compared to baselines, highlighting the ability of CausalDD to effectively leverage rich document text to generate more suitable responses by capturing the causal relationship among variables. While GPT-3.5 can produce more informative responses, it depicts less consistency with the document and user's inquiry, implying the presence of hallucination issues." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we demonstrate that modeling complete causal relationships among variables (i.e., documents, dialogue contexts, evidence, and responses) is necessary for pre-training for documentgrounded dialogue task (DocGD). We propose a strategy for creating causally-complete pretraining datasets and design a causally-perturbed pre-training strategy to model the causality of DocGD. To the best of our knowledge, this is the first work that analyzes DocGD from a causal perspective. Extensive experiments and analyses verify that our causal pre-training method CausalDD significantly improves performance under fullysupervised, few-shot, and low-resource settings, while also accelerating convergence and enhancing the ability to handle complex cases." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [ "b28", "b51" ], "table_ref": [], "text": "Despite the fact that CausalDD has demonstrated its superior performance on three benchmarks, it still has a few limitations. Firstly, the pre-training data we construct is generated by models such as the dialogue inpainter and paraphrase model. Despite the large size of our causal-complete datasets, the data quality is slightly inferior to manually annotated data. We will also consider constructing data corpus through large language models like Li et al. (2023); Zhao et al. (2023). Secondly, there are other tasks such as knowledge graph-grounded dialogue, and our proposed pre-training data construction strategy may not be applicable. Lastly, the effectiveness of task-specific pre-training will decrease as the amount of labeled data increases, so if a large amount of DocGD labeled data is provided, the performance gains brought from our approach may be marginal." }, { "figure_ref": [], "heading": "Ethical Statement", "publication_ref": [ "b3" ], "table_ref": [], "text": "This paper constructs a new pre-training dataset for DocGD, and we discuss some related ethical considerations here. First, in regards to intellectual property, the Wikipedia corpus and Reddit dump used as data sources are both freely available for research use, with the Wikipedia corpus shared under the CC BY-SA 3.0 license3 and the Reddit dump shared for research purposes (Baumgartner et al., 2020). Second, we have taken measures to control potential risks by ensuring that the texts in Wikipedia do not contain private information. Additionally, we have ensured that the conversation data from Reddit does not include any personal information and that the topics discussed are public and harmless. Third, for human evaluation on the downstream Doc2Dial task, we hire five annotators to score 400 instances in total. The hourly pay is set to 15 US$ per person, higher than the local statutory minimum wage." }, { "figure_ref": [], "heading": "A Method Details", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "A.1 Dialogue Inpainter", "publication_ref": [], "table_ref": [], "text": "The goal of a dialogue inpainting is to task a partial dialog to generate a complete dialog. A dialogue inpainter is trained using the following dialogue reconstruction task (Dai et al., 2022b): Given a complete dialog, d = (u 1 , u 2 , • • • , u T ), we randomly mask one utterance u t , yielding a partial dialogue:\nd m (t) = (u 1 , • • • , u t-1 , ⋄, u t+1 , • • • , u T ).\nWith this partial dialogue as the input, we train T5 to predict u t with the following objective:\nL = - d∈D E ut∼d [log p θ (u t |d m (t))],(8)\nwhere D is a corpus of complete dialogs and u t is a randomly sampled utterance from d. We then use the trained inpainter to transform a document into a dialog. Suppose a document d = (s 1 , s 2 , • • • , s m ), image each sentence s i is an utterance spoken by an agent in a dialogue with a user. We ask the inpainter to complete the following partial dialogue:\n(⋄, s 1 , ⋄, s 2 , ⋄, • • • , ⋄, s m ).\nEach utterance from the imagined user starts masked and is responded to by the agent with a sentence from the document. We use the model autoregressively: generate û1 and replace the first mask ⋄, feed ( û1 , s 1 , ⋄, s 2 ) to complete the second mask. We continue the process until all masks are filled and the dialog is complete." }, { "figure_ref": [], "heading": "A.2 Paraphrase Model", "publication_ref": [], "table_ref": [], "text": "We adopt a well-trained paraphrase model from Alisetti (2020) to transform a sentence into another sentence with similar semantics. Specifically, the model takes an English sentence as input and produces a set of paraphrased sentences. We randomly select one sentence, and use it as the virtual utterance for causal-Wikidialogue and the virtual evidence for causal-Reddit, respectively." }, { "figure_ref": [], "heading": "B Experiments Details B.1 Details of Retrieval and ranking", "publication_ref": [ "b20", "b24" ], "table_ref": [], "text": "Because the Multidoc2dial and Doc2bot datasets do not give the document that the current dialog needs to be grounded but require the model to find the relevant document in the document corpus Z, so we introduce an additional retrieval model and ranking model to find the most relevant document of the current dialogue context c Retrieve. We use DPR (Karpukhin et al., 2020) as our retriever, which projects dialog context and documents to a shared space using two BERT encoders (Kenton and Toutanova, 2019)). During retrieval, we perform a maximum inner-product search with FAISS (Lewis et al., 2020). Formally, we retrieve\nK most relevant document Z K = z [1,••• ,K] ∈ Z for dialogue c as: ZK = zi ∈ Z|topK {BERT(q) ⊤ BERT(zi)} (9)\nThe goal of retrieval training is to develop an encoder that maps a given dialogue d and all relevant documents into an embedding space such that the dialogue is close in proximity to its corresponding ground-truth document z + . During training, we would like to maximize P retr (z + |q, Z):\nP retr (z + |q, Z) = exp(sim(q, z + )) z∈Z exp(sim(q, z))(10)\nwhere sim(q, z) is the cosine similarity between the normalized embeddings of the dialogue and document, generated by the BERT encoder. In order to perform contrastive learning, a set of negative documents must be sampled as it is not feasible to enumerate all other evidence. This is done by using the BM25 algorithm to retrieve the most difficult negative clue for each positive clue and then placing them into batches of 128 instances. The training loss is then calculated as the negative loglikelihood for the positive document.\nRank. The ranker we use is based on the sequence-pair classification. The dialogue q and each candidate document z i ∈ Z K are input together to a BERT followed by a projection layer and Sigmoid function to calculate the ranking score of z i :\ns i = Sigmoid(Linear(BERT(z i ⊕ q))} (11)\nThe training of the ranker begins by gathering the initial retrieval results on the training set. The top 36 samples (excluding the ground-truth evidence z + ) returned by the retrieval module are used as negative examples, and the ranker model is trained to distinguish positive cases from negative cases.\nDuring inference, we first use the retrieval model to obtain the relevant document list and then use the rank model to identify the most relevant one as the supporting document. Note that we use the same retrieval and ranking model for CausalDD and our baseline UniGDD." }, { "figure_ref": [], "heading": "B.2 Baselines", "publication_ref": [ "b16", "b6", "b6", "b43", "b43", "b23", "b16", "b17", "b13", "b2", "b47", "b26", "b48", "b50", "b44", "b40" ], "table_ref": [ "tab_8" ], "text": "In Doc2dial, for the task of knowledge identification, we compare CausalDD with several strong baselines, including UniGDD (Gao et al., 2022), BERTQA (Kenton and Toutanova, 2019), BERT-PR (Daheim et al., 2021), RoBERTa-PR (Daheim et al., 2021), Multi-Sentence (Wu et al., 2021), and DIALKI (Wu et al., 2021). The other models formulate knowledge identification as a machine reading comprehension task and extract the grounding span from the document. For the response generation task, we compare CausalDD with UniGDD and several pipeline methods, including DIALKI+BART that uses DIALKI for knowledge identification and BART (Lewis et al., 2019) for response generation, RoBERTa-PR+BART, and RoBERTa+T5 (Gao et al., 2022).\nIn MultiDoc2dial, we first use the same Retrieval and Ranking module as Re2G (Glass et al., 2022) to obtain relevant documents as input for UniGDD and CausalDD. We also compare a series of baselines set up by Feng et al. (2021), which use BM25 and multiple DPR variances as retrievers, and use a BART-large pre-trained on the CNN dataset as the generation module. Moreover, we compare our method CausalDD with recent methods: R3 (Bansal et al., 2022), G4 (Zhang et al., 2022), and CPII-NLP (Li et al., 2022) proposed in the DialDoc Workshop and Re3FiD (Zhang et al., 2023), considering these three methods did not provide the exact-match results, we leave blanks in the Table 8.\nIn Doc2bot, we mainly compare with UniGDD and use three different T5 pre-trained models T5-Mengzi (Zhang et al., 2021), mT5 (Xue et al., 2021), and T5-Randeng (Wang et al., 2022). to initialize for a more comprehensive comparison.\nThe prompt for GPT-3.5 is carefully designed to match the input-output format of the training dataset of Doc2Dial. Examples and test input will be filled in the brackets as the instruction.\nDocument-grounded dialogue task aims to identify evidence from a supporting document for a dialogue between a user and an agent for answering the user's question.\nThen the agent replies to the user based on the retrieved evidence. Here are two examples for your reference. In the example-input, <last-turn> refers to the last utterance of the user. After <last-turn> part, a reverse order of a dialogue between <user> and <agent> is provided. After the signal </title>, the supporting document is provided. Specifically, you need to first retrieve the evidence (i.e.<grounding>) from the document in the test-input based on the dialogue and the user's query. Sentences after <grounding> must be the exact same string in the document (including the spaces and punctuation). Then you should continually generate the response (i.e.<agent>) based on the evidence as an agent to reply to the user. Example-1: {example1} Example-2: {example2} Test-Input:\n{test-input} Test-Output:" }, { "figure_ref": [], "heading": "B.3 More Results of Few-Shot & Low Resource", "publication_ref": [], "table_ref": [ "tab_2", "tab_6" ], "text": "We present experimental results of CausalDD and the strongest baseline UniGDD in Table 13 and6." }, { "figure_ref": [], "heading": "B.4 Case Study", "publication_ref": [], "table_ref": [], "text": "To assess the effectiveness of our created complementary datasets, we compare the responses of CausalDD under various data scenarios: after pretraining only on WikiDialog, only on Reddit, and on their combined corpora followed by Doc2dial fine-tuning.\nFrom the case in Figure 5, we can refer that:\n• UniGDD is able to accurately identify the grounding evidence in the supporting document, however, the generated response is just a simple copy of the evidence.\n• After training solely on WikiDialog, the predicted response is more fluent and more consistent with the dialogue context rather than the copy of the evidence. This verifies the high quality of our constructed causally complete WikiDialog.\n• After training solely on Reddit, the response is more colloquial while retaining high quality.\n• Pre-training the complementary datasets (i.e.,WikiDialog + Reddit) with CausalDD, the generated response is more precise and natural compared with the ground truth. This demonstrates that constructing complementary datasets that are both causally complete yields better performances for downstream tasks' fine-tuning." }, { "figure_ref": [], "heading": "Dialogue Context", "publication_ref": [], "table_ref": [], "text": "I filled out all of the information in the Retirement Estimator and it took a long time. When I came back from answering the door, all of the information was gone. What happened? Oh that's too bad. Were you gone for a long time?\nYes I guess I was." }, { "figure_ref": [], "heading": "Supporting Document", "publication_ref": [], "table_ref": [], "text": "…… How Long Can You Stay On Each Page? For security reasons, there are time limits for viewing each page. You will receive a warning after 25 minutes without doing anything, and you will be able to extend your time on the page. After the third warning on a page, you must move to another page. If you do not, your time will run out and your work on that page will be lost." }, { "figure_ref": [], "heading": "Response:", "publication_ref": [], "table_ref": [], "text": "UniGDD:\nFor security reasons, there are time limits for viewing each page. You will receive a warning after 25 minutes without doing anything and you will be able to extend your time on the page." }, { "figure_ref": [], "heading": "WikiDialog:", "publication_ref": [], "table_ref": [], "text": "Then you should know that there are time limits for viewing each page. You will receive a warning after 25 minutes without doing anything and you will be able to extend your time on the page." }, { "figure_ref": [], "heading": "Reddit:", "publication_ref": [], "table_ref": [], "text": "Okay, for security reasons there are time limits for viewing each page. You will receive a warning after 25 minutes without doing anything and you will be able to extend your time on the page." }, { "figure_ref": [], "heading": "WikiDialog+Reddit (CausalDD):", "publication_ref": [], "table_ref": [], "text": "Do you know that for security reasons there are time limits for viewing pages?" }, { "figure_ref": [], "heading": "Ground Truth:", "publication_ref": [], "table_ref": [], "text": "For reasons of security, there are time limits for viewing each page. " } ]
The goal of document-grounded dialogue (DocGD) is to generate a response by anchoring the evidence in a supporting document in accordance with the dialogue context. This entails four causally interconnected variables. While task-specific pre-training has significantly enhanced performances on numerous downstream tasks, existing DocGD methods still rely on general pre-trained language models without a specifically tailored pre-training approach that explicitly captures the causal relationships. To address this, we present the first causallycomplete dataset construction strategy for developing million-scale DocGD pre-training corpora. Additionally, we propose a causallyperturbed pre-training strategy to better capture causality by introducing perturbations on the variables and optimizing the overall causal effect. Experiments conducted on three benchmark datasets demonstrate that our causal pretraining yields substantial and consistent improvements in fully-supervised, low-resource, few-shot, and zero-shot settings 1 .
Causal Document-Grounded Dialogue Pre-training
[ { "figure_caption": "Figure 1 :1Figure 1: Causal graph of an example in DocGD, where four variables are causally connected: document d, evidence e, dialogue context c, and response r.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "://edition.cnn.com/2023/01/07/health/dog-and-cat-new-year-resolutions-wellness/index.html Yep, being overweight is bad for pets' health. You can throw a paper roll for your cat to chase. I noticed that many pets are overweight today. Do you know how to keep a cat fit? Yes, I have a puppet cat. Do you have any pets? What's good for your dog and cat?", "figure_data": "", "figure_id": "fig_1", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Two complementary datasets WikiDialog and Reddit built for DocGD, which are causally complete.", "figure_data": "", "figure_id": "fig_2", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: The diagram demonstrates causally-perturbed pre-training for CausalDD. The central document on the left represents the original document d, while the top and bottom documents represent perturbed documents d and d used for optimizing NDE and TIE, respectively.", "figure_data": "", "figure_id": "fig_3", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Foran ideal causal-aware DocGD pre-trained model, variations in d\\e should not significantly affect the model's output, while the presence of evidence e within d determines the model's ability to identify evidence and generate responses that are relevant to the dialogue context. In other words, Y {d\\e} * ,e should be indistinguishable from Y {d\\e},e , while Y {d\\e} * ,e * needs to differ significantly from Y {d\\e} * ,e . Hence, it is necessary to improve the robustness of our model against perturbations on variables by minimizing NDE, and promote reliance on evidence in the generation of dialogue responses by maximizing TIE. (See the illustration of causallyperturbed pre-training of CausalDD in Figure 3.) To achieve this, we design two causallyperturbed pre-training objectives. Firstly, we use Kullback-Leibler divergence to measure NDE: LNDE = (d,c,e,r)∈C KL(p θ (e; r|d; c)||p θ (e; r|d; c)) (5) where d = e ∪ {d\\e} * refers to disturbing the document d by randomly deleting or inserting some sentences while keeping the evidence sentence e in d retained. Secondly, to maximize TIE, we introduce the following unlikelihood loss: L TIE = -(d,c,e,r)∈C log(1p θ (e; r| d; c)) unlikelihood (6)The situation d here represents removing of evidence e from the document d, i.e., d = {d\\e} * . After the removal, we aim to decrease the model's probability of generating tokens in the ground truth evidence e and response r.", "figure_data": "", "figure_id": "fig_4", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Compared to the UniGDD model, CausalDD has faster convergence time, better modeling of dialog history, and a stronger grounding of multiple evidence.", "figure_data": "", "figure_id": "fig_6", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Statistics of CausalDD pre-training corpora.", "figure_data": "DatasetsDialogues Documents Total TurnsWikiDialog1.00M0.12M3.00MReddit1.00M1.00M1.39MAll2.00M1.12M4.39M", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Results on Doc2dial knowledge identification.", "figure_data": "ModelEMF1BERTQA42.6 36.7BERT-PR-large44.2 38.9RoBERTa-PR-large 55.7 54.6Multi-Sentence56.1 57.4DIALKI65.9 57.4UniGDD65.6 76.4GPT-3.546.1 57.3CausalDD66.0 77.3CausalDD large67.0 78.1ModelBLEUDIALKI+BART-base25.8RoBERTa-PR-large+BART-base39.6RoBERTa-large+T5-base40.7UniGDD42.4GPT-3.53.57CausalDD43.0CausalDD large42.5", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Results on Doc2dial response generation.", "figure_data": "", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Few-Shot and Low-resource results for knowledge identification (F1 score).", "figure_data": "DatasetModel5-ShotFew-Shot 50-Shot100-Shot1%Low-Resource 5%10%Doc2dialUniGDD CausalDD2.00 11.1 (9.1↑)2.27 11.2 (8.9 ↑)3.07 11.6 (8.6 ↑)5.07 14.9 (9.8↑)14.6 16.1 (1.5↑)15.6 18.7 (3.1↑)MultiDoc2dialUniGDD CausalDD2.20 14.0 (11.8↑) 14.1 (11.8↑) 13.9 (10.5↑) 2.31 3.376.98 13.5 (6.5↑)14.1 14.7 (0.6↑)12.7 17.0 (4.3↑)Doc2botUniGDDMengzi CausalDDMengzi 2.50 (2.50↑) 2.61 (2.61↑) 3.70 (3.70↑) 3.00 (3.00↑) 12.3 (10.2↑) 13.1 (10.8↑) 0.00 0.00 0.00 0.00 2.10 2.31", "figure_id": "tab_5", "figure_label": "5", "figure_type": "table" }, { "figure_caption": "Few-Shot and Low-resource results for response generation (BLEU score).", "figure_data": "Dist-1Dist-2Dist-3Dist-4UniGDD0.0736 0.3191 0.5055 0.6049CausalDD0.0736 0.3198 0.5079 0.6081CausalDD large 0.0749 0.3308 0.5299 0.6347", "figure_id": "tab_6", "figure_label": "6", "figure_type": "table" }, { "figure_caption": "Distinct scores of responses on Doc2Dial", "figure_data": "", "figure_id": "tab_7", "figure_label": "7", "figure_type": "table" }, { "figure_caption": "Results on MultiDoc2dial.", "figure_data": "ModelF1EM BLEUD token -nq40.0 22.315.7D struct -nq39.8 22.316.6D token -ft43.6 26.418.8D struct -ft43.5 26.119.5D token -rr-cls-ft 42.1 25.018.4D struct -rr-cls-ft 43.5 26.219.8CPII-NLP47.3-34.3R343.3-31.1G444.6-31.2Re3FiD46.7-33.5UniGDD61.5 45.831.8GPT-3.540.8 30.71.15CausalDD63.7 49.333.9CausalDD large 64.5 51.033.6", "figure_id": "tab_8", "figure_label": "8", "figure_type": "table" }, { "figure_caption": "Results under the zero-shot setting.", "figure_data": "ModelDoc2dialDoc2botF1EM BLEUF1EM BLEUWikipedia76.9 65.742.546.8 46.523.6+ Reddit 77.2 65.842.747.6 47.024.8+ NDE77.2 65.943.047.5 47.224.0+ TIE77.0 66.042.846.8 46.623.9", "figure_id": "tab_10", "figure_label": "9", "figure_type": "table" }, { "figure_caption": "Ablation study of CausalDD on Doc2Dial.", "figure_data": "", "figure_id": "tab_11", "figure_label": "10", "figure_type": "table" }, { "figure_caption": "Performance comparison with causalincomplete and complete pre-training data on Doc2dial", "figure_data": "", "figure_id": "tab_12", "figure_label": "11", "figure_type": "table" }, { "figure_caption": ". Human Evaluation", "figure_data": "Win Tie LoseCausalDD vs. UniGDDRelevance42553Informativeness434710CausalDD vs. GPT-3.5Relevance61345Informativeness272053", "figure_id": "tab_13", "figure_label": "12", "figure_type": "table" } ]
Yingxiu Zhao; Bowen Yu; Yu Haiyang; Bowen Li; Jinyang Li; Chao Wang; Fei Huang; Yongbin Li; Nevin L Zhang
[ { "authors": "Sai Vamsi; Alisetti ", "journal": "", "ref_id": "b0", "title": "Paraphrase-Generator", "year": "2020" }, { "authors": "Xuefeng Bai; Yulong Chen; Yue Zhang", "journal": "", "ref_id": "b1", "title": "Graph pre-training for amr parsing and generation", "year": "2022" }, { "authors": "Srijan Bansal; Suraj Tripathi; Sumit Agarwal; Sireesh Gururaja; Aditya Srikanth Veerubhotla; Ritam Dutt; Teruko Mitamura; Eric Nyberg", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "R3 : Refined retriever-reader pipeline for multidoc2dial", "year": "2022" }, { "authors": "Jason Baumgartner; Savvas Zannettou; Brian Keegan; Megan Squire; Jeremy Blackburn", "journal": "", "ref_id": "b3", "title": "The pushshift reddit dataset", "year": "2020" }, { "authors": "Jon Ander Campos; Arantxa Otegi; Aitor Soroa; Jan Milan Deriu; Mark Cieliebak; Eneko Agirre", "journal": "", "ref_id": "b4", "title": "Doqa-accessing domain-specific faqs via conversational qa", "year": "2020" }, { "authors": "Eunsol Choi; He He; Mohit Iyyer; Mark Yatskar; Wentau Yih; Yejin Choi; Percy Liang; Luke Zettlemoyer", "journal": "", "ref_id": "b5", "title": "Quac: Question answering in context", "year": "2018" }, { "authors": "Nico Daheim; David Thulke; Christian Dugast; Hermann Ney", "journal": "", "ref_id": "b6", "title": "Cascaded span extraction and response generation for document-grounded dialog", "year": "2021" }, { "authors": "Zhuyun Dai; Arun Tejasvi Chaganty; Y Vincent; Aida Zhao; Qazi Mamunur Amini; Mike Rashid; Kelvin Green; Guu", "journal": "", "ref_id": "b7", "title": "Dialog inpainting: Turning documents into dialogs", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b8", "title": "", "year": "" }, { "authors": "Zhuyun Dai; Arun Tejasvi Chaganty; Y Vincent; Aida Zhao; Qazi Mamunur Amini; Mike Rashid; Kelvin Green; Guu", "journal": "", "ref_id": "b9", "title": "Dialog inpainting: Turning documents into dialogs", "year": "2022" }, { "authors": " Pmlr", "journal": "", "ref_id": "b10", "title": "", "year": "" }, { "authors": "Emily Dinan; Stephen Roller; Kurt Shuster; Angela Fan; Michael Auli; Jason Weston", "journal": "", "ref_id": "b11", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "year": "2018" }, { "authors": "Amir Feder; Katherine A Keith; Emaad Manzoor; Reid Pryzant; Dhanya Sridhar; Zach Wood-Doughty; Jacob Eisenstein; Justin Grimmer; Roi Reichart; Margaret E Roberts", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b12", "title": "Causal inference in natural language processing: Estimation, prediction, interpretation and beyond", "year": "2022" }, { "authors": "Song Feng; Sankalp Siva; Hui Patel; Sachindra Wan; Joshi", "journal": "", "ref_id": "b13", "title": "Multidoc2dial: Modeling dialogues grounded in multiple documents", "year": "2021" }, { "authors": "Song Feng; Hui Wan; Chulaka Gunasekara; Siva Patel; Sachindra Joshi; Luis Lastras", "journal": "", "ref_id": "b14", "title": "doc2dial: A goal-oriented document-grounded dialogue dataset", "year": "2020" }, { "authors": "Haomin Fu; Yeqin Zhang; Haiyang Yu; Jian Sun; Fei Huang; Luo Si; Yongbin Li; Cam-Tu Nguyen", "journal": "", "ref_id": "b15", "title": "Doc2bot: Accessing heterogeneous documents via conversational bots", "year": "2022" }, { "authors": "Chang Gao; Wenxuan Zhang; Wai Lam", "journal": "", "ref_id": "b16", "title": "Unigdd: A unified generative framework for goaloriented document-grounded dialogue", "year": "2022" }, { "authors": "Michael Glass; Gaetano Rossiello; Md Faisal; Mahbub Chowdhury; Ankita Rajaram Naik; Pengshan Cai; Alfio Gliozzo", "journal": "", "ref_id": "b17", "title": "Re2g: Retrieve, rerank, generate", "year": "2022" }, { "authors": "Madelyn Glymour; Judea Pearl; Nicholas P Jewell", "journal": "John Wiley & Sons", "ref_id": "b18", "title": "Causal inference in statistics: A primer", "year": "2016" }, { "authors": "Wanwei He; Yinpei Dai; Yinhe Zheng; Yuchuan Wu; Zheng Cao; Dermot Liu; Peng Jiang; Min Yang; Fei Huang; Luo Si", "journal": "", "ref_id": "b19", "title": "Galaxy: A generative pre-trained model for task-oriented dialog with semisupervised learning and explicit policy injection", "year": "2022" }, { "authors": "Vladimir Karpukhin; Barlas Oguz; Sewon Min; Patrick Lewis; Ledell Wu; Sergey Edunov; Danqi Chen; Wen-Tau Yih", "journal": "", "ref_id": "b20", "title": "Dense passage retrieval for opendomain question answering", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton ; Lee Kristina; Toutanova ", "journal": "", "ref_id": "b21", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "Kun Kuang; Lian Li; Zhi Geng; Lei Xu; Kun Zhang; Beishui Liao; Huaxin Huang; Peng Ding; Wang Miao; Zhichao Jiang", "journal": "Engineering", "ref_id": "b22", "title": "Causal inference", "year": "2020" }, { "authors": "Mike Lewis; Yinhan Liu; Naman Goyal; Marjan Ghazvininejad; Abdelrahman Mohamed; Omer Levy; Ves Stoyanov; Luke Zettlemoyer", "journal": "", "ref_id": "b23", "title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension", "year": "2019" }, { "authors": "Patrick Lewis; Ethan Perez; Aleksandra Piktus; Fabio Petroni; Vladimir Karpukhin; Naman Goyal; Heinrich Küttler; Mike Lewis; Wen-Tau Yih; Tim Rocktäschel", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b24", "title": "Retrieval-augmented generation for knowledge-intensive nlp tasks", "year": "2020" }, { "authors": "Jiwei Li; Michel Galley; Chris Brockett; Jianfeng Gao; Bill Dolan", "journal": "", "ref_id": "b25", "title": "A diversity-promoting objective function for neural conversation models", "year": "2015" }, { "authors": "Kun Li; Tianhua Zhang; Liping Tang; Junan Li; Hongyuan Lu; Xixin Wu; Helen Meng", "journal": "Association for Computational Linguistics", "ref_id": "b26", "title": "Grounded dialogue generation with cross-encoding re-ranker, grounding span prediction, and passage dropout", "year": "2022" }, { "authors": "Linxiao Li; Can Xu; Wei Wu; Yufan Zhao; Xueliang Zhao; Chongyang Tao", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b27", "title": "Zero-resource knowledge-grounded dialogue generation", "year": "2020" }, { "authors": "Minghao Li; Yingxiu Zhao; Feifan Song; Bowen Yu; Haiyang Yu; Zhoujun Li; Fei Huang; Yongbin Li", "journal": "", "ref_id": "b28", "title": "Api-bank: A comprehensive benchmark for tool-augmented llms", "year": "2023" }, { "authors": "Yu Li; Baolin Peng; Yelong Shen; Yi Mao; Lars Liden; Zhou Yu; Jianfeng Gao", "journal": "", "ref_id": "b29", "title": "Knowledge-grounded dialogue generation with a unified knowledge representation", "year": "2021" }, { "authors": "Qian Liu; Bei Chen; Jiaqi Guo; Morteza Ziyadi; Zeqi Lin; Weizhu Chen; Jian-Guang Lou", "journal": "", "ref_id": "b30", "title": "Tapex: Table pre-training via learning a neural sql executor", "year": "2021" }, { "authors": "Longxuan Ma; Wei-Nan Zhang; Mingda Li; Ting Liu", "journal": "", "ref_id": "b31", "title": "A survey of document grounded dialogue systems (dgds)", "year": "2020" }, { "authors": "Xue Mengge; Bowen Yu; Zhenyu Zhang; Tingwen Liu; Yue Zhang; Bin Wang", "journal": "", "ref_id": "b32", "title": "Coarse-to-fine pretraining for named entity recognition", "year": "2020" }, { "authors": "Yulei Niu; Kaihua Tang; Hanwang Zhang; Zhiwu Lu; Xian-Sheng Hua; Ji-Rong Wen", "journal": "", "ref_id": "b33", "title": "Counterfactual vqa: A cause-effect look at language bias", "year": "2021" }, { "authors": "Kishore Papineni; Salim Roukos; Todd Ward; Wei-Jing Zhu", "journal": "", "ref_id": "b34", "title": "Bleu: a method for automatic evaluation of machine translation", "year": "2002" }, { "authors": "Baolin Peng; Chunyuan Li; Jinchao Li; Shahin Shayandeh; Lars Liden; Jianfeng Gao", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b35", "title": "Soloist: Buildingtask bots at scale with transfer learning and machine teaching", "year": "2021" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b36", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Siva Reddy; Danqi Chen; Christopher D Manning", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b37", "title": "Coqa: A conversational question answering challenge", "year": "2019" }, { "authors": "James Robins", "journal": "Mathematical modelling", "ref_id": "b38", "title": "A new approach to causal inference in mortality studies with a sustained exposure period-application to control of the healthy worker survivor effect", "year": "1986" }, { "authors": " Donald B Rubin", "journal": "The Annals of statistics", "ref_id": "b39", "title": "Bayesian inference for causal effects: The role of randomization", "year": "1978" }, { "authors": "Junjie Wang; Yuxiang Zhang; Lin Zhang; Ping Yang; Xinyu Gao; Ziwei Wu; Xiaoqun Dong; Junqing He; Jianheng Zhuo; Qi Yang; Yongfeng Huang; Xiayu Li; Yanghan Wu; Junyu Lu; Xinyu Zhu; Weifeng Chen; Ting Han; Kunhao Pan; Rui Wang; Hao Wang; Xiaojun Wu; Zhongshen Zeng; Chongpei Chen; Ruyi Gan; Jiaxing Zhang", "journal": "", "ref_id": "b40", "title": "Fengshenbang 1.0: Being the foundation of chinese cognitive intelligence", "year": "2022" }, { "authors": "; Xiangpeng; Heng Wei; Yue Yu; Rongxiang Hu; Weihua Weng; Rong Luo; Jin", "journal": "", "ref_id": "b41", "title": "Learning to generalize to more: Continuous semantic augmentation for neural machine translation", "year": "2022" }, { "authors": "Yiquan Wu; Kun Kuang; Yating Zhang; Xiaozhong Liu; Changlong Sun; Jun Xiao; Yueting Zhuang; Luo Si; Fei Wu", "journal": "", "ref_id": "b42", "title": "De-biased court's view generation with causality", "year": "2020" }, { "authors": "Zeqiu Wu; Bo-Ru Lu; Hannaneh Hajishirzi; Mari Ostendorf", "journal": "", "ref_id": "b43", "title": "Dialki: Knowledge identification in conversational systems through dialogue-document contextualization", "year": "2021" }, { "authors": "Linting Xue; Noah Constant; Adam Roberts; Mihir Kale; Rami Al-Rfou; Aditya Siddhant; Aditya Barua; Colin Raffel", "journal": "", "ref_id": "b44", "title": "mt5: A massively multilingual pre-trained text-to-text transformer", "year": "2021" }, { "authors": "Wenhao Yu; Chenguang Zhu; Zaitang Li; Zhiting Hu; Qingyun Wang; Ji Heng; Meng Jiang", "journal": "ACM Computing Surveys (CSUR)", "ref_id": "b45", "title": "A survey of knowledge-enhanced text generation", "year": "2022" }, { "authors": "Xiangji Zeng; Yunliang Li; Yuchen Zhai; Yin Zhang", "journal": "", "ref_id": "b46", "title": "Counterfactual generator: A weaklysupervised method for named entity recognition", "year": "2020" }, { "authors": "Shiwei Zhang; Yiyang Du; Guanzhong Liu; Zhao Yan; Yunbo Cao", "journal": "Association for Computational Linguistics", "ref_id": "b47", "title": "G4: Grounding-guided goaloriented dialogues generation with multiple documents", "year": "2022" }, { "authors": "Yeqin Zhang; Haomin Fu; Cheng Fu; Haiyang Yu; Yongbin Li; Cam-Tu Nguyen", "journal": "", "ref_id": "b48", "title": "Coarse-to-fine knowledge selection for document grounded dialogs", "year": "2023" }, { "authors": "Yizhe Zhang; Siqi Sun; Michel Galley; Yen-Chun Chen; Chris Brockett; Xiang Gao; Jianfeng Gao; Jingjing Liu; William B Dolan", "journal": "", "ref_id": "b49", "title": "Dialogpt: Largescale generative pre-training for conversational response generation", "year": "2020" }, { "authors": "Zhuosheng Zhang; Hanqing Zhang; Keming Chen; Yuhang Guo; Jingyun Hua; Yulong Wang; Ming Zhou", "journal": "", "ref_id": "b50", "title": "Mengzi: Towards lightweight yet ingenious pre-trained models for chinese", "year": "2021" }, { "authors": "Yingxiu Zhao; Bowen Yu; Binyuan Hui; Haiyang Yu; Fei Huang; Yongbin Li; Nevin L Zhang", "journal": "", "ref_id": "b51", "title": "A preliminary study of the intrinsic relationship between complexity and alignment", "year": "2023" } ]
[ { "formula_coordinates": [ 4, 163.69, 72.88, 112.78, 68.17 ], "formula_id": "formula_0", "formula_text": "Dialog Context c Response r < l a t e x i t s h a 1 _ b a s e 6 4 = \" N T E w Q 5 v P + V a a y i Y A E i G I h c x k x 0 w = \" > A A A C A X i c b V D L S s N A F J 3 4 r P U V d S O 4 C R b B V U m k q M v i A 1 y I V L A P a E K Y T C f t 0 M m D m R u x h L j x V 9 y 4 U M S t f + H O v 3 H S Z q G t B y 4 c z r m X e + / x Y s 4 k m O a 3 N j e / s L i 0 X F o p r 6 6 t b 2 z q W 9 s t G S W C 0 C a J e C Q 6 H p a U s 5 A 2 g Q G n n V h Q H H i c t r 3 h e e 6 3 7 6 m Q L A r v Y B R T J 8 D 9 k P m M Y F C S q + / a A Y Y B w T y 9 z t z U B v o A 6 c 3 F Z Z a 5 e s W s m m M Y s 8 Q q S A U V a L j 6 l 9 2 L S B L Q E A j H U n Y t M w Y n x Q I Y 4 T Q r 2 4 m k M S Z D 3 K d d R U M c U O m k 4 w 8 y 4 0 A p P c O P h K o Q j L H 6 e y L F g Z S j w F O d + b 1 y 2 s v F / 7 x u A v 6 p k 7 I w T o C G Z L L I T 7 g B k Z H H Y f S Y o A T 4 S B F M B F O 3 G m S A B S a g Q i u r E K z p l 2 d J 6 6 h q H V d r t 7 V K / a y I o 4 T 2 0 D 4 6 R B Y 6 Q X V 0 h R q o i Q h 6 R M / o F b 1 p T 9 q L 9 q 5 9 T F r n t G J m B / 2 B 9 v k D O G m X Z Q = = < / l a t e x i t > L NDE < l a t e x i t s h a 1 _ b a s e 6 4 = \" U B u M d Q w 3 O I x E S / N Z k M + r Z u o r g 5 w = \" > A A A C A X i c b V D L S s N A F J 3 4 r P U V d S O 4 C R b B V U m k q M u i C A o u K v Q F T Q i T 6 a Q d O n k w c y O W E D f + i h s X i r j 1 L 9 z 5 N 0 7 a L L T 1 w I X D O f d y 7 z 1 e z J k E 0 / z W F h a X l l d W S 2 v l 9 Y 3 N r W 1 9 Z 7 c t o 0 Q Q 2 i I R j 0 T X w 5 J y F t I W M O C 0 G w u K A 4 / T j j e 6 z P 3 O P R W S R W E T x j F 1 A j w I m c 8 I B i W 5 + r 4 d Y B g S z N P b z E 1 t o A + Q N m + u s s z V K 2 b V n M C Y J 1 Z B K q h A w 9 W / 7 H 5 E k o C G Q D i W s m e Z M T g p F s A I p 1 n Z T i S N M R n h A e 0 p G u K A S i e d f J A Z R 0 r p G 3 4 k V I V g T N T f E y k O p B w H n u r M 7 5 W z X i 7 + 5 / U S 8 M + d l I V x A j Q k 0 0 V + w g 2 I j D w O o 8 8 E J c D H i m A i m L r V I E M s M A E V W l m F Y M 2 + P E /" }, { "formula_coordinates": [ 4, 242.1, 66.28, 179.99, 71.87 ], "formula_id": "formula_1", "formula_text": "u B M v / N W b M r v i 6 e c I 0 0 2 o V X G g = \" > A A A C A 3 i c b V D L S s N A F J 3 4 r P U V d a e b Y B F c l U S K u i x a 0 I W L C v Y B T Q i T 6 a Q d O n k w c y O W E H D j r 7 h x o Y h b f 8 K d f + O k z U J b D 1 w 4 n H M v 9 9 7 j x Z x J M M 1 v b W F x a X l l t b R W X t / Y 3 N r W d 3 b b M k o E o S 0 S 8 U h 0 P S w p Z y F t A Q N O u 7 G g O P A 4 7 X i j y 9 z v 3 F M h W R T e w T i m T o A H I f M Z w a A k V 9 + 3 A w x D g n l 6 k 7 m p D f Q B 0 k Z E r h p Z 5 u o V s 2 p O Y M w T q y A V V K D p 6 l 9 2 P y J J Q E M g H E v Z s 8 w Y n B Q L Y I T T r G w n k s a Y j P C A 9 h Q N c U C l k 0 5 + y I w j p f Q N P x K q Q j A m 6 u + J F A d S j g N P d e Y X y 1 k v F / / z e g n 4 5 0 7 K w j g B G p L p I j / h B k R G H o j R Z 4 I S 4 G N F M B F M 3 W q Q I R a Y g I q t r E K w Z l + e J + 2 T q n V a r d 3 W K v W L I o 4 S O k C H 6 B h Z 6 A z V 0 T V q o h Y i 6 B E 9 o 1 f 0 p j 1 p L 9 q 7 9 j F t X d C K m T 3 0 B 9 r n D 8 k C m E M = < / l a t e x i t > L DocGD 1. 加上数学符号 2. Caption" }, { "formula_coordinates": [ 4, 319.93, 506.66, 205.21, 23.8 ], "formula_id": "formula_2", "formula_text": "L DocGD = - (d,c,e,r)∈C log(p θ (e; r|d; c)) (1)" }, { "formula_coordinates": [ 5, 119.99, 274.19, 169.87, 18.93 ], "formula_id": "formula_3", "formula_text": "TE = Y {d\\e} * ,e * -Y {d\\e},e(2)" }, { "formula_coordinates": [ 5, 117.83, 391.34, 172.03, 18.93 ], "formula_id": "formula_4", "formula_text": "NDE = Y {d\\e} * ,e -Y {d\\e},e(3)" }, { "formula_coordinates": [ 5, 115.86, 437.7, 174, 18.93 ], "formula_id": "formula_5", "formula_text": "TIE = Y {d\\e} * ,e * -Y {d\\e} * ,e(4)" }, { "formula_coordinates": [ 5, 348.23, 283.97, 133.6, 18.93 ], "formula_id": "formula_6", "formula_text": "L = L DocGD + L NDE + L TIE" }, { "formula_coordinates": [ 12, 306.14, 193.22, 189.12, 18.93 ], "formula_id": "formula_7", "formula_text": "d m (t) = (u 1 , • • • , u t-1 , ⋄, u t+1 , • • • , u T )." }, { "formula_coordinates": [ 12, 334.62, 246.2, 190.52, 23.35 ], "formula_id": "formula_8", "formula_text": "L = - d∈D E ut∼d [log p θ (u t |d m (t))],(8)" }, { "formula_coordinates": [ 12, 304.87, 375.52, 116.18, 18.93 ], "formula_id": "formula_9", "formula_text": "(⋄, s 1 , ⋄, s 2 , ⋄, • • • , ⋄, s m )." }, { "formula_coordinates": [ 13, 70.87, 127.57, 218.87, 50.48 ], "formula_id": "formula_10", "formula_text": "K most relevant document Z K = z [1,••• ,K] ∈ Z for dialogue c as: ZK = zi ∈ Z|topK {BERT(q) ⊤ BERT(zi)} (9)" }, { "formula_coordinates": [ 13, 83.36, 286.87, 206.5, 29.04 ], "formula_id": "formula_11", "formula_text": "P retr (z + |q, Z) = exp(sim(q, z + )) z∈Z exp(sim(q, z))(10)" }, { "formula_coordinates": [ 13, 83.67, 572.29, 206.19, 18.93 ], "formula_id": "formula_12", "formula_text": "s i = Sigmoid(Linear(BERT(z i ⊕ q))} (11)" } ]
10.1145/nnnnnnn.nnnnnnn
2023-05-18
[ { "figure_ref": [ "fig_4" ], "heading": "", "publication_ref": [ "b0", "b11", "b18", "b0", "b7", "b11", "b18", "b5", "b13", "b15", "b2", "b10", "b12", "b16", "b19", "b1", "b9", "b14", "b17" ], "table_ref": [], "text": "Figure 1: In real-world scenarios, adversarial patch attacks may exist in autonomous driving [1], content moderation [12], biometric authentication [19], etc. In these situations, the patch size and patch position are unknown. Existing certified defenses are white-box methods that can only handle the \"patch position unknown\" problem. Compared with them, the proposed iterative black-box certified defense method can further address the \"patch size unknown\" problem.\npaints in the \"physical world\" (e.g., adding an adversarial sticker on a traffic sign to fool self-driving cars [1,8], adding an adversarial patch on images for evading content moderation of social media platforms [12], adding an adversarial pattern on the clothes to fool biometric authentication [19], etc.), has become a topic of great interest in recent years. It allows the attackers to modify a bounded continuous region (usually defined as a square patch) of any position in an image. Obviously, unrestrained attack strength and position bring a severe challenge to adversarial defense methods, leading to the total failure of traditional empirical adversarial defense methods [6,14] when countering with adaptive white-box patch attacks [16].\nIn order to realize provable security, certified patch defense [3] has been proposed to guarantee the robustness of models against any adversarial patches (even a patch of which any pixel can misclassify the model) without empirical evaluation. Early certified defense methods [11,13,17,20] heavily depend on specific model architectures with small receptive fields, which significantly reduce the clean accuracy result. Recent works have implemented architectureagnostic certifiably robust image classification as well as achieving remarkable performance by derandomized smoothing [2,10,15] or pixel masking [18]. However, state-of-the-art works inevitably need to access the size of the adversarial patch, which is unreasonable and impractical in real attack scenarios. For the sake of distinction, we call these defense methods white-box certified defense since they need to know the patch size.\nTo design the architecture-agnostic certified defense in a blackbox setting (i.e., patch position and size are unknown), we propose a novel two-stage Iterative Black-box Certified Defense method, termed IBCD. In the first stage, it estimates the patch size in a search-based manner with three main components: search operation, satisfiability check, and search space reduction. To be specific, in each iteration, the method first applies the search operation (e.g., pixel masking) to collect the prediction results of the input image and then performs a satisfiability check to judge the relationship between mask and patch. If the mask is bigger than the patch, then perform search space reduction and start the next iteration. The search procedure stops when the mask is smaller than the patch and the patch size strictly falls between the mask size in the current iteration and the previous one. In the second stage of IBCD, the estimated patch size can support the calculation of the clean and certified accuracy with existing certified defense methods.\nIn summary, our work has the following contributions:\n• To the best of our knowledge, we are the first to propose architecture-agnostic certified defense in a black-box setting (i.e., the position and size of the patch are unknown), which promotes the practicality of certified defense for protecting multimedia applications in the physical world. • We design a search-based algorithm to efficiently estimate the size of the adversarial patch. We also propose a sliding space optimization strategy to accelerate the search for better efficiency. • The experiment conducted on two popular datasets (i.e., Ima-geNet and CIFAR10) with two representative model architectures (i.e., ResNet and ViT) shows the efficiency and usability of IBCD." }, { "figure_ref": [], "heading": "RELATED WORK", "publication_ref": [ "b2", "b12", "b16", "b19" ], "table_ref": [], "text": "Chiang [3] proposed the first certified defense against adversarial patch attacks. However, it suffers from extremely expensive model training and is not applicable to high-resolution images. Early certified defenses are strongly model-dependent [13,17,20], which lack generality. State-of-the-art certified patch defense methods against adversarial patches are pursuing architecture-agnostic and are available on high-resolution images, falling into two main categories: derandomized smoothing and pixel masking." }, { "figure_ref": [], "heading": "Derandomized Smoothing", "publication_ref": [ "b9", "b1", "b14" ], "table_ref": [], "text": "Derandomized smoothing fed small image regions (also called image ablations) to a classification model and performed the majority voting for the final prediction. For adversarial patch, Levine et al. propose Derandomized Smoothing (DS) [10], which trains a base classifier by smoothed images and the final classification is based on majority votes. This method significantly improves the accuracy of certified patch defense on ImageNet but inference computation is expensive. ECViT [2] propose a progressive smoothed image modeling task to train Vision Transformer, which can capture the more discriminable local context of an image while preserving the global semantic information. The Smoothed ViT [15] shows that Derandomized smoothing combined with Vision Transformers can significantly improve certified patch robustness that is also more computationally efficient and does not incur a substantial drop in standard accuracy." }, { "figure_ref": [], "heading": "Pixel Masking", "publication_ref": [ "b17" ], "table_ref": [], "text": "Pixel masking recovers the correct prediction with high probability if all the patch regions are masked. PatchCleanser [18] uses two rounds of pixel masking to eliminate the effects of patch attacks. PatchCleanser masks the image before the input layer which can be compatible with models of any architecture and achieved state-of-the-art certified robust accuracy." }, { "figure_ref": [], "heading": "PRELIMINARIES AND MOTIVATION 3.1 Preliminaries", "publication_ref": [], "table_ref": [], "text": "Adversarial patch attack. It aims to fool an image classifier within a bounded contiguous region of arbitrary changes anywhere on the image. To be specific, given a classification model F (•) and an input image I ∈ [0, 1] 𝐻 ×𝑊 ×3 with ground truth label 𝑦, the purpose of the adversarial patch attack is to generate an adversarial example Î ∈ A (I, 𝛿) satisfies F ( Î) ≠ 𝑦, where A is the threat model and 𝛿 is the adversarial patch. In general, the patch 𝛿 ∈ [0, 1] 𝑣×𝑣×3 is a square block and 𝑣 is recorded briefly as the size of the patch. " }, { "figure_ref": [], "heading": "Problem of State-of-the-art White-box Certified Defenses", "publication_ref": [ "b17" ], "table_ref": [], "text": "For both derandomized smoothing and pixel masking, the certified patch defenses have an obvious defect, i.e., they can not cope with the situation that the patch size is unknown. For pixel masking-based certified defense, they claim that \"the defender has a conservative estimation of the patch size\" [18]. For derandomized smoothing-based certified defense, the patch size is an important factor used in the formula that guarantees the certified robustness (See Eq. ( 2),( 5),( 7)). They can not calculate the certified accuracy without patch size. However, it is unrealistic and unreasonable for the defender to accurately know the size of the patch in a real offensive and defensive environment. Therefore, designing a black-box (both patch position and patch size are unknown) certified patch defense method is urgent and of great practical significance." }, { "figure_ref": [], "heading": "Motivation and Premise", "publication_ref": [], "table_ref": [], "text": "Patch size estimation is the main challenge in black-box certified defense. Optimization-based methods seem not robust enough to handle the adversarial patch attack that can arbitrarily modify the patch position and size in an unpredictable manner. Because the attackers can easily design a deliberate adversarial attack with a comprehensive understanding of a specific defense method. Therefore, we consider the commonly used search-based algorithm as it" }, { "figure_ref": [ "fig_1" ], "heading": "Block Smoothing", "publication_ref": [ "b9" ], "table_ref": [], "text": "Band Smoothing { { It's worth noting that we design the black-box certified defense algorithm under the assumption that the size of the adversarial patch is not larger than a quarter of the image. The assumption is necessary according to three reasons. ❶ For commonly used datasets (e.g., CIFAR10, MNIST, ImageNet), the objects in some of the images are smaller than a quarter of the image size. It is unrealistic to expect a classifier to accurately predict images in which the object is completely occluded by a patch of extremely large size. ❷ The setting of patch size limits the search space, which is helpful for confirming a reasonable initial state of the search algorithm. ❸ The size is approximate to the capability boundary of derandomized smoothingbased certified defense and pixel masking-based certified defense. To be specific, for pixel masking (i.e., PatchCleanser), on 224 × 224 images of ImageNet, its certified accuracy against the patch of size 112 × 112 is only 20.5% with ViT, which means the patch larger than a quarter of the image can be hardly certified defended by Patch-Cleanser. The derandomized smoothing method aims to retain a part continuous region in the image while removing other parts of the image. There are two types of smoothing: block smoothing and band smoothing. Block/Band smoothing means removing the entire image except for a square-block/band region, where the width of the block/band is 𝑏. In Fig. 2 shows the image ablations of block smoothing and band smoothing, called block ablation and band ablation respectively. A block/band ablation can start at any position and wrap around the image, we set 𝐾 as the number of possible block/band ablations. It is approximated that 𝐾 = 𝐻 × 𝑊 for block smoothing and 𝐾 = 𝑊 for band smoothing. For each class 𝑦,\n𝑛 𝑦 (I) = 𝐾 ∑︁ 𝑘=1 Q (F (Abl(I, 𝑏, 𝑘)) = 𝑦),(1)\ndenotes the number of ablations that were classified as class 𝑐, where Abl(I, 𝑏, 𝑘) is the 𝑘th image ablation and Q (•) is 1 if the condition is true else returns 0. With respect to the theory of Derandomized Smoothing [10], an image is certified robust if and only if the statistics of the highest class 𝑦 (i.e., 𝑛 𝑦 (I)) is a large margin bigger than the second highest class 𝑦 ′ (i.e., 𝑛 𝑦 ′ (I)), formulated as\n𝑛 𝑦 (I) ≥ max 𝑦 ′ ≠𝑦 𝑛 𝑦 ′ (I) + 2Δ, (2\n)\nwhere Δ is the maximum number of intersections between image ablations and the adversarial patch. There is also an implied restriction for 𝑛 𝑦 (I) and 𝑛 𝑦 ′ (I), that is, the sum of them should not exceed the number of possible block/band ablations (i.e., 𝐾), formulated as\n𝑛 𝑦 (I) + 𝑛 𝑦 ′ (I) ≤ 𝐾 . (3\n)\nWith Eq. ( 2) and Eq. ( 3), we can directly derive Eq. ( 4),\nΔ ≤ 𝐾 2 -𝑛 𝑦 ′ (I).(4)\nThus for block smoothing,\nΔ = (𝑣 + 𝑏 -1) 2 , (5\n)\n𝑣 ≤ √︂ 𝐾 2 -𝑛 𝑦 ′ (I) -𝑏 + 1 ≤ √︂ 𝐾 2 = √︂ 𝐻 × 𝑊 2 ,(6)\nwhich means the region of the patch is less equal than half of the image. For band smoothing,\nΔ = (𝑣 + 𝑏 -1),(7)\n𝑣 ≤ 𝐾 2 -𝑛 𝑦 ′ (I) -𝑏 + 1 ≤ 𝐾 2 = 𝑊 2 ,(8)\nwhich means the patch region is less equal than a quarter of the image. Among the boundaries of band and block smoothing, we choose the lower one (i.e., one-quarter of the image) to include all the possible derandomized smoothing types." }, { "figure_ref": [ "fig_2" ], "heading": "ITERATIVE BLACK-BOX CERTIFIED DEFENSE 4.1 Formulation and Overview", "publication_ref": [], "table_ref": [], "text": "The existing defended model D (•) of architecture-agnostic whitebox methods (i.e., derandomized smoothing and pixel masking) are actually D ( Î, 𝑣), where 𝑣 is the patch size and is known by them. In order to against the situation that the size and position of the patch are both unknown, we construct the architecture-agnostic black-box certified defense as a two-stage procedure: (1) patch size estimation, (2) certified defense with estimated patch size. The defended model can be reformulated as D ( Î, E ( Î)), where E (•) means patch size estimation. With respect to the second stage, the existing derandomized smoothing methods or pixel masking methods are the feasible choices. Thus the following introduction mainly focuses on the design of patch size estimation (i.e., E (•)).\nIn Fig. 3, the patch size estimation method is a search-based iterative scheme with three core components: search operation, satisfiability check, and search space reduction. In each iteration, search operation needs to evaluate the image and gives distinguishable results according to the size and position of the patch. Satisfiability check analyzes the information collected from the search operation and gives the conclusion that whether the search space needs to be" }, { "figure_ref": [], "heading": "Mask Size Reduction", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Satisfiability Check", "publication_ref": [], "table_ref": [], "text": "One-mask cover\nOne-mask cover In each iteration, the consistency predictions given by the search operation are collected into a set and the satisfiability check takes it as the input. If one of the predictions is \"consistent\", then perform mask size reduction and start the next iteration. If all the predictions are \"inconsistent\", then the mask is smaller than the patch and the search procedure is terminated." }, { "figure_ref": [], "heading": "Clean image Prior prediction: panda", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Adversarial image Prior prediction: cat", "publication_ref": [], "table_ref": [], "text": "reduced. If the search space can be further reduced, then perform search space reduction, otherwise, stop the whole estimation procedure and the patch size can be estimated with the information provided in the last iteration and penult iteration. Please note that the design of search space reduction is significant since it not only needs to reduce the search space steadily but also better to reduce the search space as more as possible while guaranteeing the patch size strictly falls within a certain range." }, { "figure_ref": [ "fig_3", "fig_2", "fig_3", "fig_3", "fig_3", "fig_3" ], "heading": "Implementation Details", "publication_ref": [ "b17", "b17" ], "table_ref": [], "text": "We will introduce the details of the core components (i.e., search operation, satisfiability check, and search space reduction), as an example (Fig. 4(a) and (b)) to show how to implement the framework mentioned in Fig. 3. Please note, in order to defend the most offensive patch, we assume the patch can successfully mislead the classifier with any of its pixels.\nSearch operation. We design search operation with reference to the double-masking algorithm [18] and modify it to satisfy our requirement. An overview of the search operation is shown in Fig. 4(b). It is a double-masking algorithm with two same masks of fixed size.\nHere each mask is a binary tensor m ∈ {0, 1} 𝐻 ×𝑊 , which has the same width and height as the image. The elements within the mask take values of 0, and others are 1. In the first round of masking (i.e., blue block of Fig. 4(b)), the input is a clean or adversarial image. We first put the image into the classifier to achieve a prior prediction (in Fig. 4, assume \"panda\" for the clean image and \"cat\" for the adversarial image). Then we use a mask to cover the image as a sliding window and puts the generated one-masked images into the classifier. If these one-masked images have a consistent prediction result and are the same as the prior prediction, then give an output CP = 1, which represents \"consistency prediction\". If the images have inconsistent prediction results, then the input image must be an adversarial image. Please note that we do not know the relationship between the size of the mask and the patch, thus we discuss the method under two different cases. ❶ Mask equal or bigger than patch In this case, there at least exists a one-masked image that covers the patch and is a correct label (Fig. 4 shows this case). There may exist multiple wrong labels. To simplify, we assume there are two wrong labels \"cat\" and \"bear\", where \"cat\" is the same as the prior prediction. Please note that the operation towards more wrong labels is the same. To reduce computation, we then filter out the images that have the same prediction as prior (i.e., \"cat\"). Because they still lead to inconsistency in double-mask prediction and the number of inconsistency predictions does not influence the result of the satisfiability check. Here it is confusing which label of \"bear\" and \"panda\" is the correct label, thus second-round masking is necessary. In second-round masking, the images of \"bear\" or \"panda\" will lead to different prediction results. For images of the label \"bear\", there must exist a double-masked image in which the adversarial patch is fully covered by the second mask and returns \"panda\", thus leading to inconsistency prediction. For images of the label \"panda\", since the first mask fully covers the patch, then the predictions of all the double-masked images should be the same. We can confirm that the labels in consistency prediction are the correct label (i.e., \"panda\").\nTo summarize, in the case that the mask is equal to or bigger than the patch, the conclusion is certain and the correct label is known. ❷ Mask smaller than patch In this case, the mask can never cover the patch and all the predictions are wrong. If the predictions are all the same as the prior prediction, we can easily output an inconsistency prediction (i.e., CP = 0). If the predictions are different, there exists a problem: it is impossible to judge whether the labels other than the prior are correct or not in the current iteration because we actually do not know the relationship between the size of the mask and patch.\nTo solve this problem, we need information from other iterations and this is why we assume the size of the adversarial patch is not larger than a quarter of the image. According to this assumption, we start the first iteration of the search procedure with the setting that the mask is equal to or larger than a quarter of the image. At the first iteration, we can ensure the mask is larger than the patch and obtain the correct label. The label could be used as a reference to confirm whether the predictions of one-masked images are correct or not, which solves the problem mentioned before. To summarize, in the case that the mask is smaller than the patch, the output must be inconsistency predictions (i.e., CP = 0) with the help of the information obtained in the first iteration.\nSatisfiability check. The output collection of search operation is a set SCP = {CP 1 , CP 2 , ...}. We can conclude that the mask is equal or bigger than the patch if and only if ∃CP ∈ SCP, CP = 1. In this situation, the state SCP 𝑠 is True. We can conclude that the mask is smaller than the patch if and only if ∀CP ∈ SCP, CP = 0. In this situation, the state SCP 𝑠 is False.\nSearch space reduction. Please note the search space is generally referred to the situations that need to be checked (i.e., applying the search operation with different mask sizes to see the result of the satisfiability check), not the mask sliding space on the image. For the double-masking method, the way of search space reduction is to decrease the mask size as the number of iterations increases.\nDetails of the mask. In order to ensure that the mask can cover all the positions where the patch may injection when it is equal to or bigger than the patch, we follow the method of calculating the mask step size in PatchCleanser [18]. As shown in Eq. ( 9), with the mask size 𝜂, sliding stride 𝑠 and actual patch size 𝑣, the constraint is\n𝑣 ≤ 𝜂 -𝑠 + 1.(9)\nThe pre-defined multi-scale mask set M contains mask subsets of different sizes {𝜂 1 , 𝜂 2 , ...}. For example, M[𝜂 1 ] is a set that contains masks that have the same size (i.e., 𝜂 1 ) and different positions. The masks in set M[𝜂 1 ] can fully cover the image." }, { "figure_ref": [], "heading": "Sliding Space Optimization", "publication_ref": [], "table_ref": [], "text": "As a search-based algorithm, the efficiency of patch size estimation is an important metric. To further reduce the computational complexity and make it more practical, we propose a sliding space optimization method.\nThe high-level idea is that, once the mask successfully covers the patch, its position is the sliding space of the next iteration, which is much smaller than taking the whole image as the sliding space. As shown in Fig. 5, the red box and yellow box represent the position of the patch and the large mask that can completely cover the patch respectively. The blue box represents the set of all possible small mask positions that intersect with the large mask, which is the sliding Patch Large mask M Small mask set 𝚯 Figure 5: Sliding space optimization. In the case where the large mask (yellow box) covers the patch (red box), we only select the small masks (blue box) that intersect with the large mask to execute the search operation, rather than search on whole the image. This strategy can help to improve the efficiency of IBCD.\nspace that needs to be explored. To be specific, we define\nR (m) = 𝑥 m 1 , 𝑦 m 1 , 𝑥 m 2 , 𝑦 m 2(10)\nas the region of mask m, where (𝑥 m 1 , 𝑦 m 1 ) and (𝑥 m 2 , 𝑦 m 2 ) represent the position of the top left vertex and bottom right vertex of the mask. R (m 1 ) ∩ R (m 2 ) = 0 means that there is no overlapping area between the mask m 1 and mask m 2 . The selected small masks are in the set\nΘ = {m ∈ M 𝑠𝑚𝑎𝑙𝑙 |R (m) ∩ R (M) ≠ 0},(11)\nwhere M 𝑠𝑚𝑎𝑙𝑙 means the small mask set that fully covers the image in the next iteration. The optimization strategy Ω(•) is used in line 15 of Algorithm 1. It first selects the big mask which satisfies the consistency check (i.e., CP = 1) (See yellow mask in Fig. 5), then chooses the small masks (See blue masks in Fig. 5) that intersect with the big mask to be small mask set Θ. It is obvious that the optimization strategy can efficiently reduce the sliding space since the sliding space is reduced from the entire image to a small part." }, { "figure_ref": [], "heading": "Robustness Certification for Patch Size Estimation Framework", "publication_ref": [ "b17", "b17" ], "table_ref": [], "text": "In this subsection, we give the robustness certification for our iterative black-box certified defense method. Our method contains three main modules: search operation, satisfiability check, and search space reduction. For the search operation module, its robustness is basically guaranteed by the white-box method double-mask algorithm [18]. Thus we mainly provide robustness certification for the search space reduction and satisfiability check in Theorem 1 and Theorem 2, respectively. They give the guarantee to certifiably estimate the size of the patch. Theorem 1. In search space reduction, among the mask size intervals \n[[𝜂 1 , 𝜂 2 ], [𝜂 2 , 𝜂 3 ], • • • , [𝜂 𝑖 , 𝜂 𝑖+1 ]],\n(𝜂 ≥ 𝑣) ⇐⇒ (∃CP = 1). (13\n)\nProof of Adequacy. When 𝜂 ≥ 𝑣, according to Definition 1 (Rcovering) and Definition 2 (Two-mask correctness) in PatchCleanser [18], we can infer that in the one-mask prediction, there must have a mask that completely covers the patch and restore the correct prediction. This one-mask image will always output the correct prediction in the second round prediction, and the predictions are consistent. Thus there exists consistency predictions CP = 1. Proof of Necessity. We use proof by contradiction. The assumed proposition is ¬((∃CP = 1) → (𝜂 ≥ 𝑣)). The assumed proposition can be evolved into (∃CP = 1)∧(𝜂 < 𝑣). We came to the conclusion that (∃CP = 1) and (𝜂 < 𝑣) must both be satisfied. However, once the mask size is smaller than the patch size, the image will always be misclassified due to the influence of adversarial patch and there will always come inconsistency predictions CP = 0, which fully against (∃CP = 1). Thus the assumption is overthrown, and the original proposition is established." }, { "figure_ref": [], "heading": "Algorithm of Patch Size Estimation", "publication_ref": [], "table_ref": [], "text": "In Algorithm 1, we specifically show how to estimate the patch size. Please note that the procedure is only a little different from the description of the method because we aim to show the algorithm more clearly by dividing it into two parts (actually the two parts can be integrated into a complete iteration as introduced in the former sections). The first part describes the judgment on whether the image is clean or adversarial. The second part is only designed for adversarial images. The input is an image I, a classifier F (•), and a multi-scale mask set M. The output is the estimated patch size.\nThe algorithm first judges whether the image is clean or being attacked in lines 1-7. In line 1, initialize the consistency prediction collection. In line 2, calculate the 𝑦 𝑝𝑟𝑖𝑜𝑟 as the prior of the input image. In line 3, select the mask of the biggest mask size. Here select many same-size masks which have different positions in for loop. In lines 4-5, judge whether the one-masked images have the same prediction as 𝑦 𝑝𝑟𝑖𝑜𝑟 and put the predictions into the set U. In lines 6-7, we can confirm that the input image is clean if U is an empty set and return the 0 as patch size." }, { "figure_ref": [], "heading": "Algorithm 1: Iterative Patch Size Estimation", "publication_ref": [], "table_ref": [], "text": "Input: Image I, Classifier F (•), Mask set M. Output: Estimated patch size. \n1 U ← 𝜙 ⊲ Prediction collection 2 𝑦 𝑝𝑟𝑖𝑜𝑟 ← F (I) ⊲ Prior prediction 3 for m ∈ M[𝜂 max ] do 4 if F (I ⊙ m) ≠ 𝑦 𝑝𝑟𝑖𝑜𝑟 then 5 U ← U.𝑎𝑑𝑑 (F (I ⊙ m)) 6 if U = 𝜙 then 7 return 0 8 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 ← 0 9 𝑦 𝑡𝑟𝑢𝑒 ← 𝑛𝑢𝑙𝑙 10 for 𝜂 ← 𝜂 𝑚𝑎𝑥 to 𝜂 𝑚𝑖𝑛 do 11 M ← 𝜙 12 SCP ← 𝜙 13 L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛] ← 𝜂 14 ⊲ One-mask prediction 15 for m 0 ∈ Ω(M[𝜂]) do 16 ŷ ← F (I ⊙ m 0 ) 17 if ŷ ≠ 𝑦 𝑝𝑟𝑖𝑜𝑟 or ŷ = 𝑦 𝑡𝑟𝑢𝑒 then 18 M ← M.𝑎𝑑𝑑 (m 0 ) 19 ⊲ Double-mask prediction\n30 if (∀CP ∈ SCP, CP = 0) then 31 return L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 -1] 32 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 ← 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 + 1 33 return L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛]\nLines 8-33 describe the iteration on how to estimate the patch size for adversarial images. In line 8-9, we initialize a variable 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 to count the number of iterations and initialize 𝑦 𝑡𝑟𝑢𝑒 to represent the ground truth label of the adversarial image. In line 10, we start the iteration with the maximum mask size and decrease it round by round. In lines 11-13, we initialize M to record the selected masks in one-mask prediction, initialize SCP to record the inconsistency check results (i.e., CP), initialize L to record the mask size in each iteration. In line 15-18, we first do one mask prediction on the image and obtain the classification result. The masks that lead to ŷ ≠ 𝑦 𝑝𝑟𝑖𝑜𝑟 or ŷ = 𝑦 𝑡𝑟𝑢𝑒 are selected into the M. In line 20, perform for loop with each one-mask as a group. In lines 21-23, collecting the prediction results of double-masking. In line 24, do a consistency check on P to see whether the images in the group have the same classification results. The function ConsistencyCheck(•) returns CP and 𝑦 𝑐𝑜𝑛 , where CP is the consistency result (1 if same, 0 if different), 𝑦 𝑐𝑜𝑛 is the consistent label if CP = 1. In line 25, we put the consistency results in a set SCP. In line 26-27, it confirms the ground truth label of the unpatched adversarial image (i.e., the clean image without the patch). The label is confirmed in the first iteration. In line 28-29, the label returned by the consistency result is not the same as the ground truth label, which means the mask is smaller than the patch and the algorithm should return the mask size of the previous iteration. In line 30-31, when all the inconsistency checks are not the same, the mask is smaller than the patch and the algorithm should return the mask size of the previous iteration." }, { "figure_ref": [], "heading": "EXPERIMENT", "publication_ref": [], "table_ref": [], "text": "IBCD is a two-stage black-box certified defense method and our work mainly focuses on the first stage (i.e., patch size estimation). Thus we conduct three perspectives to demonstrate the search efficiency and estimation accuracy. First, we compare the accuracy calculated under the black-box and the white-box settings to show the deviation caused by the gap between the exact patch size and the estimated patch size. Second, we compare the proposed search algorithm with brute-force search to show the efficiency of our method. Third, we design a comparative experiment to show the effect of the proposed sliding space optimization strategy." }, { "figure_ref": [], "heading": "Experiment Setup", "publication_ref": [ "b3", "b8", "b6", "b4", "b1", "b15", "b1" ], "table_ref": [], "text": "Datasets. We carry out experiments on two datasets CIFAR10 and ImageNet respectively. ImageNet [4] has 1,000 classes of different categories. We randomly choose 100,000 images with the size of 224 × 224 in its test set. CIFAR10 [9] is a famous classification dataset. We choose all the data in the test set (i.e., 10,000 images of size 32 × 32). Target models. We choose a classic network (i.e., ResNet50 [7]) and a state-of-the-art network (i.e., ViT [5]) for the experiment. They are popular and representative. Adversarial patch. To verify the effectiveness of IBCD, we assume the adversarial patch has the strongest attack ability in theory (i.e., the patch will always mislead the model once it has pixels exposed in the image). The patch is of side length 𝑣 and can be layout in arbitrary positions. Metrics. To evaluate the performance of IBCD, we compare it with the white-box certified defense to show the error range. The metrics of certified defense are clean accuracy and certified accuracy. The clean accuracy is the fraction of clean test images that can be correctly classified by the defended model. The certified accuracy is the fraction of test images that the classification is correct under certain patch attacks. In addition, we also use the pre-example searches to analyze the search efficiency of our black-box condition. Implementation details. In the ideal case, the sliding stride should be 𝑠 = 1, which can guarantee that the mask set can fully cover the patch in any position of the image if the patch is smaller than the mask. However, only a few images can pass the certification, which leads to the lack of input images for our patch size estimation method. Since our contribution mainly focuses on patch size estimation, thus we empirically and restrained to magnify 𝑠 to an acceptable value to achieve more certified images for fully demonstrating the performance of our patch size estimation method. For CIFAR10, we set sliding stride 𝑠 = 5 and the reduction interval of the multi-scale mask set M to be 2. For ImageNet, we set sliding stride 𝑠 = 40 and the reduction interval of the multi-scale mask set M to be 20. The patch size used to attack CIFAR10 is in the range [2,16] and that used to attack ImageNet is in the range [2,112]. All the experiments were run on the ubuntu 18.04 system, Pytorch 1.7.0, and CUDA 11.0 with an AMD EPYC 7543 32-Core Processor CPU, 80GB of RAM, and an NVIDIA GeForce RTX 3090 GPU of 24GB RAM." }, { "figure_ref": [], "heading": "Accuracy Fluctuation between Black-box and White-box Certified Defense", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "In this experiment, we aim to evaluate the accuracy of the estimated patch size and compare the accuracy (certified & clean) deviation between the black-box and white-box settings. We show the estimation results in Table 1. The target datasets and models are listed in the first column. The certified accuracy and clean accuracy are calculated by PatchCleanser with the given actual or estimated patch size. The average estimated patch size is usually in float format and the size should be an integer, thus we take the upper integer bound to guarantee the certified robustness. The second column shows the actual patch size of the adversarial patch, the corresponding certified accuracy (certified acc), and clean accuracy (clean acc). The third column shows the estimated patch size of the adversarial patch, the corresponding certified accuracy (certified acc), and clean accuracy (clean acc). The fourth column shows the fluctuation rate between the accuracy calculated by actual patch size and estimated patch size. The formula for calculating the fluctuation rate is\n𝐴𝑐𝑐 𝑓 𝑙𝑢 = 𝐴𝑐𝑐 𝑤ℎ𝑖𝑡𝑒 -𝐴𝑐𝑐 𝑏𝑙𝑎𝑐𝑘 𝐴𝑐𝑐 𝑤ℎ𝑖𝑡𝑒 ,(14)\nwhere 𝐴𝑐𝑐 𝑤ℎ𝑖𝑡𝑒 and 𝐴𝑐𝑐 𝑏𝑙𝑎𝑐𝑘 mean the accuracy (certified and clean) calculated with actual and estimated patch size respectively. A smaller 𝐴𝑐𝑐 𝑓 𝑙𝑢 is better. We use different patch sizes to attack the target model (i.e., ResNet50 or ViT), which can give a comprehensive result on the performance of patch size estimation. For each patch size, we use Algorithm 1 to obtain the estimated patch size. The estimated patch sizes are always bigger than the actual counterpart, which provides guarantees of fully covering the patch.\nFor the CIFAR10-ViT, we can find that the fluctuation rate of certified accuracy does not exceed 52.70%, and the clean accuracy does not exceed 18.58%. For CIFAR10-ResNet50, the fluctuation rate is worse than that in CIFAR10-ViT, which means ViT is a better model for double masking-based certified defense. For ImageNet-ViT, the fluctuation rate of certified accuracy does not exceed 56.13%, and the clean accuracy does not exceed 4.47%. In summary, the accuracy fluctuation between the black-box method and the white-box method is within an acceptable range. In addition, there is an obvious phenomenon that when the actual patch size is large, the certified accuracy is very low (e.g., size=15 is nearly half the length of the CIFAR10 images). This is the defect of the current certified patch defense method, it is often at a loss when faced with a large-size patch. However, under such extreme conditions, the fluctuation of our proposed black-box method is not more than half the accuracy of the white-box method. " }, { "figure_ref": [], "heading": "Search Efficiency", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "We aim to show the search efficiency of the proposed patch size estimation method, which is an important metric to determine the practicality of the method in the physical world. With the fixed size and the position of the adversarial patch, we set the sliding step of the mask to be 𝑠 = 7 and only change the interval of mask reduction. As shown in Table 2, the first column represents the interval of mask reduction on the multi-scale mask set M. In the first row, we record the average search number and the average search time to evaluate the patch size estimation process. When the interval of mask reduction is 1, it means brute-force search, which leads to the most search number (e.g., 206) and searches time (e.g., 0.92s and 1.41s). Obviously, with the increment of mask reduction interval, the search number and the search time are significantly reduced. In addition, with the same mask reduction interval, ViT and ResNet50 have the same search number, which also shows the architecture-agnostic property of IBCD." }, { "figure_ref": [], "heading": "Evaluation on Optimization Strategy", "publication_ref": [], "table_ref": [ "tab_3" ], "text": "In this subsection, we design a comparative experiment to show the effectiveness of the sliding space optimization strategy. We calculate the search efficiency on 1,000 robust certified images and show the result in Table 3. In the first column, we list different patch sizes to comprehensively evaluate the search efficiency. In the first row, we use \"CIFAR10-Vanilla\" and \"CIFAR10-SlidingOpt\" to represent the search method without/with sliding optimization strategy respectively. In each cell, we demonstrate the search number. It is obvious that after applying the sliding space optimization strategy, the search number significantly decreases to about half." }, { "figure_ref": [], "heading": "Discussion", "publication_ref": [], "table_ref": [ "tab_1" ], "text": "Limitation of white-box certified defense on certified accuracy. At present, in the field of adversarial patch certified defense, the certified accuracy of existing white-box methods is still low and their performance represents the upper bound of our black-box certified defense method. Thus the low certified accuracy in Table 1 is due to the limitation of the white-box method, not caused by our method.\nIn the future, with the improvement of white-box certified defense methods, our method can also achieve better results." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this paper, we retrospect the architecture-agnostic certified defense methods and find that they have an obvious defect. That is, they inevitably need to access the patch size of the adversarial patch attack, which is impractical and unreasonable in the real world. To address this problem, we propose a two-stage black-box certified defense. The method estimates the patch size in the first stage and outputs the accuracy in the second stage. We believe this method provides a choice for further applying certified defense into the physical world. In future work, we aim to simplify the method into a one-stage end-to-end method for better efficiency." } ]
The adversarial patch attack aims to fool image classifiers within a bounded, contiguous region of arbitrary changes, posing a real threat to computer vision systems (e.g., autonomous driving, content moderation, biometric authentication, medical imaging) in the physical world. To address this problem in a trustworthy way, proposals have been made for certified patch defenses that ensure the robustness of classification models and prevent future patch attacks from breaching the defense. State-of-the-art certified defenses can be compatible with any model architecture, as well as achieve high clean and certified accuracy. Although the methods are adaptive to arbitrary patch positions, they inevitably need to access the size of the adversarial patch, which is unreasonable and impractical in real-world attack scenarios. To improve the feasibility of the architecture-agnostic certified defense in a black-box setting (i.e. position and size of the patch are both unknown), we propose a novel two-stage Iterative Black-box Certified Defense method, termed IBCD. In the first stage, it estimates the patch size in a search-based manner by evaluating the size relationship between the patch and mask with pixel masking. In the second stage, the accuracy results are calculated by the existing white-box certified defense methods with the estimated patch size. The experiments conducted on two popular model architectures and two datasets verify the effectiveness and efficiency of IBCD.
Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches
[ { "figure_caption": "Certified patch defense. It aims to construct a defended model D that can always give a correct prediction for adversarial examples generated by any attack within the threat model A, i.e., ∀ Î ∈ A (I, 𝛿), D (I) = D ( Î) = 𝑦. Note that threat model A could fully access defense method, model parameters, etc. The certification calculates a provable lower bound on the model robustness against adaptive white-box attacks.", "figure_data": "", "figure_id": "fig_0", "figure_label": "", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Two types of Derandomsized smoothing.", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Overview of the patch size estimation framework.", "figure_data": "", "figure_id": "fig_2", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure4: Implementation details of patch size estimation. We design a double masking-based search operation to realize the patch size estimation. In each iteration, the consistency predictions given by the search operation are collected into a set and the satisfiability check takes it as the input. If one of the predictions is \"consistent\", then perform mask size reduction and start the next iteration. If all the predictions are \"inconsistent\", then the mask is smaller than the patch and the search procedure is terminated.", "figure_data": "", "figure_id": "fig_3", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "m 1 ∈1M[𝜂] do 23 P ← P.𝑎𝑑𝑑 (F (I ⊙ m 0 ⊙ m 1 )) 24 CP, 𝑦 𝑐𝑜𝑛 ← ConsistencyCheck(P) 25 SCP ← SCP.𝑎𝑑𝑑 (CP) 26 if 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 = 0 and CP = 1 then 27 𝑦 𝑡𝑟𝑢𝑒 ← 𝑦 𝑐𝑜𝑛 28 if 𝑦 𝑐𝑜𝑛 ≠ 𝑦 𝑡𝑟𝑢𝑒 and CP = 1 then 29 return L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 -1]", "figure_data": "", "figure_id": "fig_4", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "there must exist and only exist an index 𝑖, which makes the patch size 𝑣 locating in an interval [𝜂 𝑖 , 𝜂 𝑖+1 ]. The state of SCP changes from 𝑇𝑟𝑢𝑒 to 𝐹𝑎𝑙𝑠𝑒 when the search of mask interval changes from[𝜂 𝑖 , 𝜂 𝑖+1 ] to [𝜂 𝑖+1 , 𝜂 𝑖+2 ]. ∃!𝑖 :𝑣 ∈ [𝜂 𝑖 , 𝜂 𝑖+1 ] (SCP 𝑠 = 𝑇𝑟𝑢𝑒)& 𝑣 ∉ [𝜂 𝑖+1 , 𝜂 𝑖+2 ] (SCP 𝑠 = 𝐹𝑎𝑙𝑠𝑒).Existence proof. Prove there exists a mask size interval [𝜂 𝑖 , 𝜂 𝑖+1 ] that contains the patch size 𝑣. Considering the initial moment, the mask size is the largest size 𝜂 𝑚𝑎𝑥 and must be bigger than the patch, so the satisfiability check state SCP 𝑠 = 𝑇𝑟𝑢𝑒. When mask size reduces, there must be an interval [𝜂 𝑖 , 𝜂 𝑖+1 ] containing patch size 𝑣. Next, when the interval is reduced to [𝜂 𝑖+1 , 𝜂 𝑖+2 ], the mask is smaller than the patch and SCP 𝑠 = 𝐹𝑎𝑙𝑠𝑒. Uniqueness proof. Prove there is only one mask size interval [𝜂 𝑖 , 𝜂 𝑖+1 ] contains patch size 𝑣. Assume there are two different and non-overlap intervals [𝜂 𝑖 , 𝜂 𝑖+1 ] and [𝜂 𝑗 , 𝜂 𝑗+1 ] that both contains patch size 𝑣.Then the patch size 𝑣 should simultaneously locate in two nonoverlap intervals, which is impossible since 𝑣 is a fixed value. Theorem 2. If and only if the mask is larger than the patch, ∃CP = 1, as shown in the following formula", "figure_data": "(12)", "figure_id": "tab_0", "figure_label": "", "figure_type": "table" }, { "figure_caption": "Accuracy fluctuation between black-box and white-box certified defense. For each patch size, we calculate the estimated size and corresponding accuracy. The fluctuation rates of the accuracy are shown in the last column.", "figure_data": "actual size certified acc (%) clean acc (%) estimated size certified acc (%) clean acc (%)certified acc fluctuation rate(%)clean acc fluctuation rate(%)1510.0010.0018.61 (19)4.7310.0852.700.80126.7310.7916.40 (17)10.0010.0048.597.32CIFAR10-ViT97.9710.8713.86 (14)7.549.775.4010.1269.8110.0311.09 (12)6.7310.7931.407.58311.9113.358.56 (9)7.9710.8733.0818.58150.1045.7219.32 (20)0.0523.8750.0047.79120.2965.2516.44 (17)0.0936.8868.9743.48CIFAR10-ResNet5090.7777.5913.47 (14)0.1050.1287.0135.4063.6684.9210.37 (11)0.2168.6694.2619.15320.6287.778.45 (9)0.7777.5996.2711.6011021.9581.171419.6377.5456.134.479031.6982.6113115.6179.8850.743.30ImageNet-ViT7043.3584.9610722.9381.7247.103.815053.2184.158531.8581.3140.143.373057.9285.905153.2984.157.992.04", "figure_id": "tab_1", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of search efficiency between our patch size estimation method and brute-force search.", "figure_data": "ReductionCIFAR10-ViTCIFAR10-ResNet50intervalsearch num (#) time (s) search num (#) time (s)12060.922061.4121040.771040.7931000.831000.804680.69680.735600.65600.566440.49440.337420.43420.17", "figure_id": "tab_2", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Search efficiency w/wo sliding space optimization.", "figure_data": "Patch sizeCIFAR10-Vanilla CIFAR10-SlidingOpt search num (#) search num (#)111759672861384322170", "figure_id": "tab_3", "figure_label": "3", "figure_type": "table" } ]
Di Yang; Yihao Huang; Qing Guo; Felix Juefei-Xu; Ming Hu; Yang Liu; Geguang Pu
[ { "authors": "Dandelion Tom B Brown; Aurko Mané; Martín Roy; Justin Abadi; Gilmer", "journal": "", "ref_id": "b0", "title": "Adversarial patch", "year": "2017" }, { "authors": "Zhaoyu Chen; Bo Li; Jianghe Xu; Shuang Wu; Shouhong Ding; Wenqiang Zhang", "journal": "", "ref_id": "b1", "title": "Towards Practical Certifiable Patch Defense with Vision Transformer", "year": "2022" }, { "authors": "Ping-Yeh Chiang; Renkun Ni; Ahmed Abdelkader; Chen Zhu; Christoph Studer; Tom Goldstein", "journal": "", "ref_id": "b2", "title": "Certified defenses for adversarial patches", "year": "2020" }, { "authors": "Jia Deng; Wei Dong; Richard Socher; Li-Jia Li; Kai Li; Li Fei-Fei", "journal": "", "ref_id": "b3", "title": "ImageNet: A large-scale hierarchical image database", "year": "2009" }, { "authors": "Alexey Dosovitskiy; Lucas Beyer; Alexander Kolesnikov; Dirk Weissenborn; Xiaohua Zhai; Thomas Unterthiner; Mostafa Dehghani; Matthias Minderer; Georg Heigold; Sylvain Gelly; Jakob Uszkoreit; Neil Houlsby", "journal": "", "ref_id": "b4", "title": "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", "year": "2021" }, { "authors": "Jamie Hayes", "journal": "", "ref_id": "b5", "title": "On visible adversarial perturbations & digital watermarking", "year": "2018" }, { "authors": "Kaiming He; Xiangyu Zhang; Shaoqing Ren; Jian Sun", "journal": "", "ref_id": "b6", "title": "Deep residual learning for image recognition", "year": "2016" }, { "authors": "Danny Karmon; Daniel Zoran; Yoav Goldberg", "journal": "PMLR", "ref_id": "b7", "title": "Lavan: Localized and visible adversarial noise", "year": "2018" }, { "authors": "Alex Krizhevsky; Geoffrey Hinton", "journal": "", "ref_id": "b8", "title": "Learning multiple layers of features from tiny images", "year": "2009" }, { "authors": "Alexander Levine; Soheil Feizi", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b9", "title": "De) Randomized smoothing for certifiable defense against patch attacks", "year": "2020" }, { "authors": "Wan-Yi Lin; Fatemeh Sheikholeslami; Leslie Rice; J Zico Kolter", "journal": "", "ref_id": "b10", "title": "Certified robustness against physically-realizable patch attack via randomized cropping", "year": "2020" }, { "authors": "Xin Liu; Huanrui Yang; Ziwei Liu; Linghao Song; Hai Li; Yiran Chen", "journal": "", "ref_id": "b11", "title": "Dpatch: An adversarial patch attack on object detectors", "year": "2018" }, { "authors": "Jan Hendrik; Metzen ; Maksym Yatsura", "journal": "", "ref_id": "b12", "title": "Efficient certified defenses against patch attacks on image classifiers", "year": "2021" }, { "authors": "Muzammal Naseer; Salman Khan; Fatih Porikli", "journal": "IEEE", "ref_id": "b13", "title": "Local gradients smoothing: Defense against localized adversarial attacks", "year": "2019" }, { "authors": "Saachi Hadi Salman; Eric Jain; Aleksander Wong; Madry", "journal": "", "ref_id": "b14", "title": "Certified patch robustness via smoothed vision transformers", "year": "2022" }, { "authors": "Florian Tramer; Nicholas Carlini; Wieland Brendel; Aleksander Madry", "journal": "Advances in Neural Information Processing Systems", "ref_id": "b15", "title": "On adaptive attacks to adversarial example defenses", "year": "2020" }, { "authors": "Chong Xiang; Nitin Arjun; Vikash Bhagoji; Prateek Sehwag; Mittal", "journal": "", "ref_id": "b16", "title": "{PatchGuard}: A Provably Robust Defense against Adversarial Patches via Small Receptive Fields and Masking", "year": "2021" }, { "authors": "Chong Xiang; Saeed Mahloujifar; Prateek Mittal", "journal": "", "ref_id": "b17", "title": "{PatchCleanser}: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier", "year": "2022" }, { "authors": "Kaidi Xu; Gaoyuan Zhang; Sijia Liu; Quanfu Fan; Mengshu Sun; Hongge Chen; Pin-Yu Chen; Yanzhi Wang; Xue Lin", "journal": "Springer", "ref_id": "b18", "title": "Adversarial t-shirt! evading person detectors in a physical world", "year": "2020-08-23" }, { "authors": "Zhanyuan Zhang; Benson Yuan; Michael Mccoyd; David Wagner", "journal": "IEEE", "ref_id": "b19", "title": "Clipped bagnet: Defending against sticker attacks with clipped bag-of-features", "year": "2020" } ]
[ { "formula_coordinates": [ 3, 108.2, 557.49, 185.85, 25.4 ], "formula_id": "formula_0", "formula_text": "𝑛 𝑦 (I) = 𝐾 ∑︁ 𝑘=1 Q (F (Abl(I, 𝑏, 𝑘)) = 𝑦),(1)" }, { "formula_coordinates": [ 3, 126.76, 669.92, 163.8, 13.72 ], "formula_id": "formula_1", "formula_text": "𝑛 𝑦 (I) ≥ max 𝑦 ′ ≠𝑦 𝑛 𝑦 ′ (I) + 2Δ, (2" }, { "formula_coordinates": [ 3, 290.56, 670.43, 3.48, 7.77 ], "formula_id": "formula_2", "formula_text": ")" }, { "formula_coordinates": [ 3, 402.08, 234.19, 152.64, 8.28 ], "formula_id": "formula_3", "formula_text": "𝑛 𝑦 (I) + 𝑛 𝑦 ′ (I) ≤ 𝐾 . (3" }, { "formula_coordinates": [ 3, 554.72, 234.69, 3.48, 7.77 ], "formula_id": "formula_4", "formula_text": ")" }, { "formula_coordinates": [ 3, 408.14, 265.18, 150.06, 17.88 ], "formula_id": "formula_5", "formula_text": "Δ ≤ 𝐾 2 -𝑛 𝑦 ′ (I).(4)" }, { "formula_coordinates": [ 3, 407.47, 299.84, 147.24, 10.71 ], "formula_id": "formula_6", "formula_text": "Δ = (𝑣 + 𝑏 -1) 2 , (5" }, { "formula_coordinates": [ 3, 554.72, 302.57, 3.48, 7.77 ], "formula_id": "formula_7", "formula_text": ")" }, { "formula_coordinates": [ 3, 348.6, 318.54, 209.6, 22.64 ], "formula_id": "formula_8", "formula_text": "𝑣 ≤ √︂ 𝐾 2 -𝑛 𝑦 ′ (I) -𝑏 + 1 ≤ √︂ 𝐾 2 = √︂ 𝐻 × 𝑊 2 ,(6)" }, { "formula_coordinates": [ 3, 409.71, 368.35, 148.49, 9.69 ], "formula_id": "formula_9", "formula_text": "Δ = (𝑣 + 𝑏 -1),(7)" }, { "formula_coordinates": [ 3, 369.59, 386.78, 188.61, 17.88 ], "formula_id": "formula_10", "formula_text": "𝑣 ≤ 𝐾 2 -𝑛 𝑦 ′ (I) -𝑏 + 1 ≤ 𝐾 2 = 𝑊 2 ,(8)" }, { "formula_coordinates": [ 5, 149.27, 508.1, 144.78, 8.28 ], "formula_id": "formula_11", "formula_text": "𝑣 ≤ 𝜂 -𝑠 + 1.(9)" }, { "formula_coordinates": [ 5, 389.93, 337.53, 168.27, 12.62 ], "formula_id": "formula_12", "formula_text": "R (m) = 𝑥 m 1 , 𝑦 m 1 , 𝑥 m 2 , 𝑦 m 2(10)" }, { "formula_coordinates": [ 5, 366.49, 422.48, 191.72, 8.97 ], "formula_id": "formula_13", "formula_text": "Θ = {m ∈ M 𝑠𝑚𝑎𝑙𝑙 |R (m) ∩ R (M) ≠ 0},(11)" }, { "formula_coordinates": [ 5, 344.54, 679.09, 121.84, 9.04 ], "formula_id": "formula_14", "formula_text": "[[𝜂 1 , 𝜂 2 ], [𝜂 2 , 𝜂 3 ], • • • , [𝜂 𝑖 , 𝜂 𝑖+1 ]]," }, { "formula_coordinates": [ 6, 125.64, 296.47, 164.67, 8.28 ], "formula_id": "formula_15", "formula_text": "(𝜂 ≥ 𝑣) ⇐⇒ (∃CP = 1). (13" }, { "formula_coordinates": [ 6, 290.31, 296.97, 3.73, 7.77 ], "formula_id": "formula_16", "formula_text": ")" }, { "formula_coordinates": [ 6, 317.46, 127.13, 227.3, 218.52 ], "formula_id": "formula_17", "formula_text": "1 U ← 𝜙 ⊲ Prediction collection 2 𝑦 𝑝𝑟𝑖𝑜𝑟 ← F (I) ⊲ Prior prediction 3 for m ∈ M[𝜂 max ] do 4 if F (I ⊙ m) ≠ 𝑦 𝑝𝑟𝑖𝑜𝑟 then 5 U ← U.𝑎𝑑𝑑 (F (I ⊙ m)) 6 if U = 𝜙 then 7 return 0 8 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 ← 0 9 𝑦 𝑡𝑟𝑢𝑒 ← 𝑛𝑢𝑙𝑙 10 for 𝜂 ← 𝜂 𝑚𝑎𝑥 to 𝜂 𝑚𝑖𝑛 do 11 M ← 𝜙 12 SCP ← 𝜙 13 L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛] ← 𝜂 14 ⊲ One-mask prediction 15 for m 0 ∈ Ω(M[𝜂]) do 16 ŷ ← F (I ⊙ m 0 ) 17 if ŷ ≠ 𝑦 𝑝𝑟𝑖𝑜𝑟 or ŷ = 𝑦 𝑡𝑟𝑢𝑒 then 18 M ← M.𝑎𝑑𝑑 (m 0 ) 19 ⊲ Double-mask prediction" }, { "formula_coordinates": [ 6, 317.46, 466.33, 136.5, 43.08 ], "formula_id": "formula_18", "formula_text": "30 if (∀CP ∈ SCP, CP = 0) then 31 return L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 -1] 32 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 ← 𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛 + 1 33 return L[𝐼𝑡𝑒𝑟𝑎𝑡𝑖𝑜𝑛]" }, { "formula_coordinates": [ 7, 378.63, 405.8, 179.57, 20.66 ], "formula_id": "formula_19", "formula_text": "𝐴𝑐𝑐 𝑓 𝑙𝑢 = 𝐴𝑐𝑐 𝑤ℎ𝑖𝑡𝑒 -𝐴𝑐𝑐 𝑏𝑙𝑎𝑐𝑘 𝐴𝑐𝑐 𝑤ℎ𝑖𝑡𝑒 ,(14)" } ]
2023-05-18
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b1", "b2", "b3", "b4" ], "table_ref": [], "text": "Radiography (X-ray) is an imaging technique that uses a small dose of ionizing radiation to create images of the internal structures of a body. Due to the relatively low price of the device and the existence of portable devices, X-ray imaging is a widely used technique. However, it is particularly difficult to assess the severity of the pathology, and, thus, only experts in radiology should interpret chest images.\nRecent applications of machine learning (ML) have gained popularity in the medical domain [1,2]. The performance achieved by neural networks is becoming similar to that reached by medical experts [3].\nConsidering the need for a highly precise and fast diagnosis process, on the Kaggle platform was announced a competition about automatically localizing and classifying thoracic abnormalities from chest radiographs [4]. On December 30, 2020, the database with 18,000 posterior-anterior (PA) X-ray scans in DICOM format became available on: [5]. More than 1,300 teams are participating in the competition trying to train the best model. The total prize money in this challenge is 50,000 dollars. The crucial value of the dataset is in the annotations. They were created by radiologists and show the location of anomalies in chests." }, { "figure_ref": [], "heading": "Problems in the training set", "publication_ref": [], "table_ref": [], "text": "The training set contains 15,000 lung images in DICOM format with annotations. Each image was annotated by three radiologists. Due to a DICOM format, the images have high quality and the information about a patient (such as age or sex) or about the image (such as the number of allocated bits) is included.\nThere are fourteen labels for lesions and one additional label for images of healthy lungs: Aortic enlargement, Atelectasis, Calcification, Cardiomegaly, Consolidation, ILD, Infiltration, Lung Opacity, Nodule/Mass, Pleural effusion, Pleural thickening, Pneumothorax, Pulmonary fibrosis, Other lesions, No finding.\nThe test set has 3,000 DICOM files with no annotations as the challenge is ongoing. " }, { "figure_ref": [ "fig_0" ], "heading": "Consistency among radiologists", "publication_ref": [], "table_ref": [], "text": "Unequal division of annotation work between radiologists As visible in Figure 1, the radiologists can be divided into three groups.\nThe first group, R8-R10 worked on the same part of the X-ray dataset and annotated most of the images present in the dataset, both images with and without findings. Each radiologist annotated more than 6,000 images. Those three radiologists annotated 95% of all of the detected findings in this dataset.\nThe next group R1-R7 did not detect almost any lesion (R2 found 3, the rest none). In addition, in this group, information about age is missing in the vast majority of cases. The information about gender is either set to \"other\" or missing.\nThe last group, R11-R17. Each radiologist annotated less than 2,000 images with a high fraction of 'no findings' images. However, in most cases information about gender is present.\nWe suggest that radiologists ought to be assigned randomly to the images. Annotations that already exist should not be shared between radiologists." }, { "figure_ref": [ "fig_1" ], "heading": "Not clear annotation rules", "publication_ref": [ "b3" ], "table_ref": [ "tab_0", "tab_1" ], "text": "Comparing the class labels given by different radiologists for a particular image, the consistency is remarkably low. In Table 1, in group R8-R10 (radiologists that annotated 95% of all findings), radiologists agreed with both colleagues on all classes only in 46% of images. It is worth mentioning that all annotators are radiologists with at least 8 years of experience. For this reason, we assume the low consistency could be caused by not clear annotation instructions in general. Due to the fact that one of us has a specialization in radiology, we found an explanation. Some anomalies may be typical for specific age -like aortic enlargement. However, some radiologists mark it as an anomaly, while others ignore it as an acceptable finding for a patient of this age.\nFor ML problems, in the training set, anomalies should be marked consequently, no matter if it is a typical anomaly for patient age. For this reason, it seems crucial to control the annotation process and clarify annotation rules. This could be a role of an expert radiologist.\nDifferent label for the same pathology Another effect of unclear annotation rules is the significantly overlapping definitions of anomalies. A class ILD and Pulmonary fibrosis strongly overlap, similarly to Consolidation and Infiltration.\nThe most vivid example is a \"lung opacity\", which covers six other classes! It will cause obvious inconsistency, if not clearly stated at the beginning of the annotation process. Lesions present on chests with \"no findings\" label Our expert radiologist analyzed ten randomly selected images annotated by each of the seventeen radiologists (R1-R17). It appeared that many annotations are missing. Surprisingly, we found out that although there was a general consensus between dataset annotators when labeling \"no findings\", actually there are some anomalies that should be marked. The review result is presented in Table 2, and sample errors in Figure 2. One bounding box for all lesions of the same type, or one for each lesion. We found an additional problem with not clear rules for annotations, some radiologists use a single box to cover few anomalies, others mark each anomaly separately. Some example is presented in Figure 3. This results in model training problems as it introduces high noise in labels.\nMoreover, it influences a model quality. The metric mAP at IoU 40, chosen for the competition, means that predicted bounding box has to overlap with ground-truth box in at least 40%. The problem is that if radiologists' annotations (ground truth) do not meet this requirement, how is it possible to train an AI model with such noisy labels to get a good result.\nFigure 3: Examples of inconsistency between radiologists related to the usage of a single box to mark many anomalies of the same class. On the left image, there are two big boxes each for the left and the right lung and many small boxes, on the right one, there is single box covering both lungs.\nDifferent procedure of preparing train and test sets The train and test sets were prepared differently. In both, the annotations were made independently by three radiologists for each image. According to [4], in the test set, there was an additional processing step. The labels were additionally verified and a consensus between two radiologists was reached.\nThe problem is that there are considerable differences between radiologists. One approach is to select only critical findings and discard other annotations as unnecessary, which is acceptable for radiologists, but very challenging for nowadays ML model architectures. Typically, there is an assumption that a ML model should be trained on data similar to the target, and in order to deal with noise, more data is required.\nThe second issue is the radiologist bias. From the training set analysis, we found out that most annotations were made by actually three radiologists (R8-R10). However, it is not known whether images annotated by those were used in the test dataset. This bias is reinforced by the additional two radiologists who made a consensus over annotations of three radiologists including standardization of label definitions.\nThe role of two expert radiologists is unclear. It seems that those two only corrected annotations made by others. Their role should be much bigger, they are necessary to control if the annotation rules are well understood, and to clarify them if a new corner case arises. The standardized criteria for annotation should be prepared." }, { "figure_ref": [ "fig_0", "fig_2", "fig_3", "fig_4" ], "heading": "Data quality", "publication_ref": [ "b5", "b3" ], "table_ref": [], "text": "Missing or wrong metadata in DICOMs In Figure 1, it is presented that some images suffer from missing data usually distributed in an extensive DICOM header, either from the lack of age or sex. The data is also classified as missing when the type of data is incorrect (i.e., a letter instead of a number). 68% of the observations do not have information about age, and 17% about sex. The sex parameter is set to O (other) for 34% of the images. The rest of the dataset is fairly balanced (M: 26%, F: 23%). There is a lot of instances where the age is equal to 0 or is far greater than 100 (i.e. 238). This leaves us with only 25% of images with valid ages between 1-99.\nThe lack of reliable information about age or sex is unfavorable because such attributes might be correlated with certain diseases, or having a disease at all. For example, for younger people, the probability of having lesions is significantly lower than for older people. It can be seen in Figure 4 where density plots for ages of patients with and without detected diseases differ visibly. Children present in the dataset In the training dataset, there are 107 images of children (ages 1-17). This might be a problem as child anatomy is different from adults (i.e., shape of heart, mediastinum, and bone structure) and so are the technical aspects of the child's X-ray (position of hands) [6]. The model might recognize such relationships.\nAccording to [4], pediatric X-rays should have been removed from the data during the data filtering step, but we found they were accidentally left.\nAs children are not small adults, they should be removed in order not to introduce additional noise during model training.\nTwo monochromatic color spaces Another valid concern is Photometric Interpretation, which specifies the intended interpretation of the image pixel data. Some images are of type monochrome1 (17%) and some of monochrome2. The difference is that in the first case the lowest value of a pixel is interpreted as white and in the second case as black. This may produce some inefficient models when not taken into consideration.\nLesions localization imbalance There are fourteen annotated anomalies. In regular clinician practice, all except two (aortic enlargement, cardiomegaly) are distributed similarly on both lungs sides. There should be a similar number of lesions in the right lung as well as in the left one. However, in Figure 5, we placed heatmaps that should show the anomalies symmetrically appeared in both lungs. Before heatmaps were calculated, images from the training set were centered.\nParts of clothes present in the X-rays. Undesirable artifacts are present in many images, which in some cases can reduce the diagnostic value of the image, and when used for machine learning, introduce additional noise. Some examples are shown in Figure 6. These artifacts can be easily avoided during image acquisition, by asking the patient to remove all parts of the clothes that may influence X-ray imaging, for example, chains, bras, clothes with buttons, and zippers. If artifacts cannot be prevented, they can be removed during image preprocessing, before the image is shown to the model.\nLetters and annotations present in the X-rays Letters and/or annotations present in some lung images should be removed during preprocessing to prevent a neural network from learning those patterns. The model should learn how to differentiate labels by focusing on image features, not on descriptions in the images. " }, { "figure_ref": [], "heading": "Model fairness concerns", "publication_ref": [ "b6" ], "table_ref": [], "text": "We created a Faster R-CNN model [7] with input image size 1024x1024 that detects regions with potential lesions with mAP at IoU > 0.4 equal 18.1%. To assess the fairness of this model, we assumed that it is correct when it detects the class anywhere in the picture if the class is present in any radiologist's annotations. Then, we performed 14 checks over all classes of lesions (simple binary split -whether the illness was present among annotations or not). We checked popular fairness metrics over 2 features -age (missing, young(< 50 y.o.), and old (≥ 50 y.o.)) and sex (Male, Female, Other, Missing). From this analysis, it became apparent that our model has problems with Predictive Parity (PPV, precision) over these subgroups. For example, for aortic enlargement among the age, the precision in subgroup young was 0.13 and in subgroup old 0.74. It is essential to evaluate the model in this way to be aware of its faults or to try to mitigate the potential bias." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "The quality of a model is inherently bound to the quality of the data on which it is trained. Development of a reliable model should begin with data acquisition and annotation. At the model development stage, we cannot make the model fulfill all responsible AI and fairness rules if the data and their annotations are of insufficient quality." }, { "figure_ref": [], "heading": "", "publication_ref": [], "table_ref": [], "text": "Work on this paper was funded by the IDUB against COVID-19 initiative at the Warsaw University of Technology." } ]
Prevention is better than cure. This old truth applies not only to the prevention of diseases but also to the prevention of issues with AI models used in medicine. The source of malfunctioning of predictive models often lies not in the training process but reaches the data acquisition phase or design of the experiment phase. In this paper, we analyze in detail a single use case -a Kaggle competition related to the detection of abnormalities in X-ray lung images. We demonstrate how a series of simple tests for data imbalance exposes faults in the data acquisition and annotation process. Complex models are able to learn such artifacts and it is difficult to remove this bias during or after the training. Errors made at the data collection stage make it difficult to validate the model correctly. Based on this use case, we show how to monitor data and model balance (fairness) throughout the life cycle of a predictive model, from data acquisition to parity analysis of model scores.
PREVENTION IS BETTER THAN CURE: A CASE STUDY OF THE ABNORMALITIES DETECTION IN THE CHEST
[ { "figure_caption": "Figure 1 :1Figure 1: Different label distributions among radiologists. The top left plot shows the number of images annotated by a radiologist grouped by whether the illness was found or not. The top right plot shows grouping by age and the bottom one grouping by sex.", "figure_data": "", "figure_id": "fig_0", "figure_label": "1", "figure_type": "figure" }, { "figure_caption": "Figure 2 :2Figure 2: Examples of lesions found on images checked by three radiologists and classified as No finding. The image on the left should be annotated as containing consolidation/pneumonia label, and the image on the right as Other lesion (actually dextrocardia).", "figure_data": "", "figure_id": "fig_1", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: Density plots of age grouped by existence of an illness. The probability of a young person having a lesion is lower than for the older person.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Examples of lesions (consolidation and pneumothrox) that should be present symmetrically in both parts of the lungs.", "figure_data": "", "figure_id": "fig_3", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Figure 6 :6Figure 6: Example of clothes artifacts. From the left, there are: a zipper, a bone in a bra.", "figure_data": "", "figure_id": "fig_4", "figure_label": "6", "figure_type": "figure" }, { "figure_caption": "Radiologists annotations consistency.", "figure_data": "RadiologistsR1-R7 R8-R10 R11-R17Agreed with at least one colleague on all classes100%69%96%Agreed with both colleagues on all classes100%46%94%", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Our expert radiologist checked 10 images annotated by each radiologist R1-R17 as \"no findings\". The number of images wrongly annotated is presented below.", "figure_data": "radiologist's IDR1 R2 R3 R4 R5 R6 R7 R8 R9 R10 R11 R12 R13 R14 R15 R16 R17number of errors 0 4 0 2 1 1 1 3 1 12551113", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" } ]
Weronika Hryniewska; Piotr Czarnecki; Jakub Wiśniewski; Przemysław Bombi Ński; Przemysław Biecek
[ { "authors": "Qihang Yu; Dong Yang; Holger Roth; Yutong Bai; Yixiao Zhang; Alan L Yuille; Daguang Xu", "journal": "", "ref_id": "b0", "title": "C2FNAS: Coarse-to-Fine Neural Architecture Search for 3D Medical Image Segmentation", "year": "2020-06" }, { "authors": "Yuyu Guo; Lei Bi; Euijoon Ahn; Dagan Feng; Qian Wang; Jinman Kim", "journal": "", "ref_id": "b1", "title": "A spatiotemporal volumetric interpolation network for 4d dynamic medical image", "year": "2020-06" }, { "authors": "Yi Zhou; Xiaodong He; Lei Huang; Li Liu; Fan Zhu; Shanshan Cui; Ling Shao", "journal": "", "ref_id": "b2", "title": "Collaborative learning of semi-supervised segmentation and classification for medical images", "year": "2019-06" }, { "authors": "Q Ha; Khanh Nguyen; Linh T Lam; Le; H Hieu; Dat Q Pham; Dung B Tran; Dung D Nguyen; Chi M Le; Hang T T Pham; Tong; H Diep; Dinh; D Cuong; Do; T Luu; Cuong N Doan; Nguyen; T Binh; Que V Nguyen; Nguyen; D Au; Hien N Hoang; Anh T Phan; Phuong H Nguyen; Ho; T Dat; Ngo; T Nghia; Nguyen; T Nhan; Minh Nguyen; Van Dao; Vu", "journal": "", "ref_id": "b3", "title": "VinDr-CXR: An open dataset of chest X-rays with radiologist's annotations", "year": "2020-12" }, { "authors": "", "journal": "Kaggle", "ref_id": "b4", "title": "VinBigData Chest X-ray Abnormalities Detection", "year": "2020" }, { "authors": "Weronika Hryniewska; Przemysław Bombiński; Patryk Szatkowski; Paulina Tomaszewska; Artur Przelaskowski; Przemysław Biecek", "journal": "Pattern Recognition", "ref_id": "b5", "title": "Checklist for responsible deep learning modeling of medical images based on covid-19 detection studies", "year": "2021" }, { "authors": "Yuxin Wu; Alexander Kirillov; Francisco Massa; Wan-Yen Lo; Ross Girshick", "journal": "", "ref_id": "b6", "title": "Detectron2", "year": "2019" } ]
[]
10.18653/v1/2022.acl-long.265
[ { "figure_ref": [], "heading": "Introduction", "publication_ref": [ "b0", "b31", "b16", "b13", "b22", "b29", "b35", "b15", "b24", "b18", "b32" ], "table_ref": [], "text": "Nigeria is the sixth most populous country in the world1 and the most populous in Africa with over 500 languages (Eberhard et al., 2021). These languages are spoken by millions of speakers, and the four most spoken indigenous languages (Hausa, Igbo, Nigerian-Pidgin (Naija), and Yorùbá) have more than 25 million speakers but they are still under-represented in NLP research (Adebara and Abdul-Mageed, 2022;van Esch et al., 2022). The development of NLP for Nigerian languages and other African languages is often limited by a lack of labelled datasets (Adelani et al., 2021b;Joshi et al., 2020). While there have been some progress in recent years (Eiselen, 2016;Adelani et al., 2022b;NLLB-Team et al., 2022;Muhammad et al., 2023;Adelani et al., 2023), most benchmark datasets for African languages are only available in a single domain, and may not transfer well to other target domains of interest (Adelani et al., 2021a).\nOne of the most popular NLP tasks is sentiment analysis. In many high-resource languages like English, sentiment analysis datasets are available across several domains like social media posts/tweets (Rosenthal et al., 2017), product reviews (Zhang et al., 2015;He and McAuley, 2016) and movie reviews (Pang and Lee, 2005;Maas et al., 2011). However, for Nigerian languages, the only available dataset is NaijaSenti (Muhammad et al., 2022) -a Twitter sentiment classification dataset for four most-spoken Nigerian languages. It is unclear how it transfers to other domains.\nIn this paper, we focus on the task of sentiment classification for cross-domain adaptation. We create the first sentiment classification dataset for Nollywood movie reviews known as NollySenti -a dataset for five widely spoken Nigerian languages (English, Hausa, Igbo, Nigerian-Pidgin, and Yorùbá). Nollywood is the home for Nigerian movies that depict the Nigerian people and reflect the diversities across Nigerian cultures. Our choice of this domain is because Nollywood is the second-largest movie and film industry in the world by annual output 2 , and the availability of Nollywood reviews on several online websites. However, most of these online reviews are only in English. To cover more languages, we asked professional translators to translate about 1,000-1,500 reviews from English to four Nigerian languages, similar to Winata et al. (2023). Thus, NollySenti is a parallel multilingual sentiment corpus for five Nigerian languages that can be used for both sentiment classification and evaluation of machine translation (MT) models in the user-generated texts domainwhich is often scarce for low-resource languages.\nAdditionally, we provide several supervised and transfer learning experiments using classical machine learning methods and pre-trained language models. By leveraging transfer learning, we compare the performance of cross-domain adaptation from the Twitter domain to the Movie domain, and cross-lingual adaptation from English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from the Twitter domain in the same target language. To further mitigate the domain difference, we leverage MT from English to other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While MT to low-resource languages are often of low quality, through human evaluation, we show that most of the translated sentences preserve the sentiment in the original English reviews. For reproducibility, we have released our datasets and code on Github 3 ." }, { "figure_ref": [], "heading": "Related Work", "publication_ref": [ "b34", "b32" ], "table_ref": [], "text": "African sentiment datasets There are only a few sentiment classification datasets for African languages such as Amharic dataset (Yimam et al., 2020), and NaijaSenti (Muhammad et al., 2022)for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá. Recently, Muhammad et al. (2023) expanded the sentiment classification dataset to 14 African languages. However, all these datasets belong to the social media or Twitter domain. In this work, we create a new dataset for the Movie domain based on human translation from English to Nigerian languages, similar to the NusaX parallel sentiment corpus for 10 Indonesia languages (Winata et al., 2023)." }, { "figure_ref": [], "heading": "MT for sentiment classification", "publication_ref": [ "b28", "b26", "b30" ], "table_ref": [], "text": "In the absence of training data, MT models can be used to translate texts from a high-resource language like English to other languages, but they often introduce errors that may lead to poor performance (Refaee and Rieser, 2015;Poncelas et al., 2020). However, 3 https://github.com/IyanuSh/NollySenti they do have a lot of potentials especially when translating between high-resource languages like European languages, especially when combined with English (Balahur andTurchi, 2012, 2013). In this paper, we extend MT for sentiment classification to four low-resource Nigerian languages. This paper is an extension of the YOSM paper (Shode et al., 2022) -A Yorùbá movie sentiment corpus.\n3 Languages and Data" }, { "figure_ref": [], "heading": "Focus Languages", "publication_ref": [ "b19" ], "table_ref": [], "text": "We focus on four Nigerian languages from three different language families spoken by 30M-120M.\nHausa belongs to the Afro-Asiatic/Chadic language family with over 77 million speakers (Eberhard et al., 2021). It is a native to Nigeria, Niger, Chad, Cameroon, Benin, Ghana, Togo, and Sudan. However, the significant population for the language reside in northern Nigeria. Hausa is an agglutinative language in terms of morphology and tonal with two tones -low and high. It is written with two major scripts: Ajami (an Arabic-based script) and Boko script (based on Latin script)the most widely used. The Boko script make use of all the Latin letters except for \"p,q,v, and x\" including the following additional letters \"á, â, Î, ¯, kw, Îw, gw, ky, Îy, gy, sh, and ts\".\nIgbo belongs to the Volta-Niger sub-group of the Niger-Congo language family with over 31 million speakers (Eberhard et al., 2021). It is native language to South-Eastern Nigeria, but also spoken in Cameroon and Equatorial Guinea in Central Africa. Igbo is an agglutinative language in terms of its sentence morphology and tonal with two toneshigh and low. The language utilizes 34 Latin letters excluding \"c,q and x\", however, it includes additional letters \"ch, gb, gh, gw, kp, kw, nw, ny, o . , ȯ, u . and sh\".\nNigerian-Pidgin aka Naija is from the English Creole Atlantic Krio language family with over 4 million native speakers and 116 million people second language speakers. It is a broken version of Nigerian English that is also a creole because it is used as a first language in certain ethnic communities (Mazzoli, 2021). It serves as a common language for all as it facilitates communication between several ethnicities. Naija has 26 letters similar to English with an analytical sentence morphology.\nYorùbá belongs to the Volta-Niger branch of the Niger-Congo language family with over 50 million speakers (Eberhard et al., 2021) thus making it the third most spoken indigenous African language. Yorùbá is native to South-Western Nigeria, Benin and Togo, and widely spoken across West Africa and Southern America like Sierra Leone, Côte d'Ivoire, The Gambia, Cuba, Brazil, and some Caribbean countries. Yorùbá is an isolating language in terms of its sentence morphology and tonal with three lexical tones -high, mid and low -that are usually marked by diacritics which are used on syllabic nasals and vowels. Yorùbá orthography comprises 25 Latin letters which excludes \"c, q, v, x, and z\" but includes additional letters \"gb, e . , s . and o . \"." }, { "figure_ref": [], "heading": "NollySenti creation", "publication_ref": [], "table_ref": [], "text": "Unlike Hollywood movies that are heavily reviewed with hundreds of thousands of reviews all over the internet, there are fewer reviews about Nigerian movies despite their popularity. Furthermore, there is no online platform dedicated to writing or collecting movie reviews written in the four indigenous Nigerian languages. We only found reviews in English. Here, we describe the data source for the Nollywood reviews and how we created parallel review datasets for four Nigerian languages." }, { "figure_ref": [], "heading": "Data Source", "publication_ref": [], "table_ref": [ "tab_0" ], "text": "Table 1 shows the data source for the NollySenti review dataset. We collected 1,018 positive reviews (POS) and 882 negative reviews (NEG). These reviews were accompanied with ratings and were sourced from three popular online movie review platforms -IMDB, Rotten Tomatoes and, Letterboxd. We also collected reviews and ratings from four Nigerian websites like Cinemapointer, Nollyrated. Our annotation focused on the classification of the reviews based on the ratings that the movie reviewer gave the movie. We used a rating scale to classify the POS or NEG reviews and defined ratings between 0-4 to be in the NEG category and 7-10 as POS." }, { "figure_ref": [], "heading": "Human Translation", "publication_ref": [], "table_ref": [], "text": "We hire professional translators in Nigeria and ask them to translate 1,010 reviews randomly chosen from the 1,900 English reviews. Thus, we have a parallel review dataset in English and other Nigerian languages and their corresponding ratings. For quality control, we ask a native speaker per lan-guage to manually verify the quality of over 100 randomly selected translated sentences, and we confirm that they are good translations, and they are not output of Google Translate (GT). 4 All translators were properly remunerated according to the country rate 5 . In total, we translated 500 POS reviews and 510 NEG reviews. We decided to add 10 more NEG reviews since they are often shorterlike one word e.g. (\"disappointing\")." }, { "figure_ref": [], "heading": "Experimental Setup", "publication_ref": [ "b27", "b17" ], "table_ref": [ "tab_1" ], "text": "Data Split Table 2 shows the data split into Train, Dev and Test splits. They are 410/100/500 for hau, ibo and pcm. To further experiment with the benefit of adding more reviews, we translate 490 more reviews for yor. The ratio split for yor is 900/100/500, while for eng is 1,300/100/500. We make use of the same reviews for Dev and Test for all languages. For our experiments of transfer learning and machine translation, we make use of all the training reviews for English (i.e 1,300). We make use of a larger test set (i.e. 500 reviews) for hau, ibo and pcm because the focus of our analysis is on zero-shot transfer, we followed similar data split as XCOPA (Ponti et al., 2020), COPA-HR (Ljubesic and Lauc, 2021) and NusaX datasets. The small training examples used in NollySenti provides an opportunity for researchers to develop more data efficient cross-lingual methods for under-resourced languages since this is a more realistic scenario." }, { "figure_ref": [], "heading": "Baseline Models", "publication_ref": [ "b11", "b32", "b11", "b10", "b14", "b23", "b25" ], "table_ref": [], "text": "Here, we train sentiment models using classical machine learning models like Logistic regression and Support Vector Machine (SVM) and fine-tune several pre-trained language models (PLMs). Unlike classical ML methods, PLMs can be used for crosslingual transfer and often achieve better results (Devlin et al., 2019;Winata et al., 2023). We fine-tune the following PLMs: mBERT (Devlin et al., 2019), XLM-R (Conneau et al., 2020), mDeBERTaV3 (He et al., 2021), AfriBERTa (Ogueji et al., 2021), andAfroXLMR (Alabi et al., 2022). The last two PLMs have been pre-trained or adapted to all the focus languages. For XLM-R and AfroXLMR, we make use of the base versions. The classical ML methods were implemented using Scikit-Learn (Pedregosa et al., 2011). Appendix B provides more details. " }, { "figure_ref": [], "heading": "Cross-lingual adaptation", "publication_ref": [ "b18" ], "table_ref": [], "text": "We train on two English datasets: (1) IMDB (Maas et al., 2011) -with 25,000 reviews and (2) NollySenti English with 1,300 reviews. The resulting models are evaluated on the test set of the remaining Nigerian languages." }, { "figure_ref": [], "heading": "Machine Translation", "publication_ref": [], "table_ref": [], "text": "Lastly, we make use of MT to mitigate the domain difference. We make use of NLLB (NLLB-Team et al., 2022)6 for hau, ibo, and yor languages. NLLB is a multilingual MT trained on 200 languages and dialects. It includes the three Nigerian languages except for Nigerian-Pidgin. For Nigerian-Pidgin, we make use of a pre-trained eng→pcm MT model by Adelani et al. (2022a) trained on both religious and news domain." }, { "figure_ref": [], "heading": "Results", "publication_ref": [], "table_ref": [], "text": "" }, { "figure_ref": [], "heading": "Baseline Results", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "Table 3 provides the baseline results using both logistic regression, SVM, and several PLMs. All baselines on average have over 80% accuracy. However, in all settings (i.e. all languages and number of training samples, N=400,900, and 1300),\nPLMs exceed the performance of classical machine learning methods by over 5 -7%. In general, we find Africa-centric PLMs (AfriBERTa-large and AfroXLMR-base) have better accuracy than massively multilingual PLMs pre-trained on around 100 languages. Overall, AfriBERTa achieves the best result on average, but slightly worse for English and Nigerian-Pidgin (an English-based creole language) since it has not been pre-trained on the English language." }, { "figure_ref": [], "heading": "Zero-shot Evaluation Results", "publication_ref": [], "table_ref": [ "tab_2", "tab_3" ], "text": "We make use of AfriBERTa for the zero-shot evaluation since it gave the best result in Table 3 (see avg. excl. eng). Table 4 shows the zero-shot evaluation." }, { "figure_ref": [], "heading": "Performance of Cross-domain adaptation", "publication_ref": [ "b9" ], "table_ref": [], "text": "We obtained an impressive zero-shot result by evaluating a Twitter sentiment model (i.e. Twitter (lang)) on movie review (73.8 on average). All have over 70 except for yor.\nPerformance Cross-lingual adaptation We evaluated two sentiment models, trained on either imdb or NollySenti (eng) English reviews. Our result shows that the adaptation of imdb has similar performance as the cross-domain adaptation, while the NollySenti (eng) exceeded the performance by over +6%. The imdb model (i.e imdb (eng)) was probably worse despite the large training size due to a slight domain difference between Hollywood reviews and Nollywood reviews -may be due to writing style and slight vocabulary difference among English dialects (Blodgett et al., 2016). An example of a review with multiple indigenous named entities including a NEG sentiment is \"'Gbarada' is a typical Idumota 'Yoruba film' with all the craziness that come with that subsection of Nollywood. \" that may not frequently occur in Hollywood reviews. Another observation is that the performance of pcm was unsurprisingly good for both setups (84.0 to 86.2) because it is an English-based creole. automatically translating N=410 reviews using a pre-trained MT model improved the average zeroshot performance by over +4%. With additional machine translated reviews (N=1300), the average performance improved further by +3%. Combining all translated sentences with English reviews does not seem to help. Our result is quite competitive to the supervised baseline (-1.9%). As an additional experiment, we make use of MT to translate 25k IMDB reviews, the result was slightly worse than NollySenti (lang). This further confirms the slight domain difference in the two datasets." }, { "figure_ref": [], "heading": "Machine Translation improves adaptation", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "Sentiment is often preserved in MT translated reviews Table 5 shows that despite the low BLEU score (< 15) for hau, ibo and yor, native speakers (two per language) of these languages rated the machine translated reviews in terms of content preservation or adequacy to be much better than average (3.8 to 4.6) for all languages on a Likert scale of 1-5. Not only does the MT models preserve content, native speakers also rated their output to preserve more sentiment (i.e. achieving at least of 90%) even for some translated texts with low adequacy ratings. Appendix C provides more details on the human evaluation and examples." }, { "figure_ref": [], "heading": "Conclusion", "publication_ref": [], "table_ref": [], "text": "In this paper, we focus on the task of sentiment classification for cross-domain adaptation. We developed a new dataset, NollySenti for five Nigerian languages. Our results show the potential of both transfer learning and MT for developing sentiment classification models for low-resource languages.\nAs a future work, we would like to extend the creation of movie sentiment corpus to more African languages." }, { "figure_ref": [], "heading": "Limitations", "publication_ref": [], "table_ref": [], "text": "One of the limitations of our work is that we require some form of good performance of machine translation models to generate synthetic reviews for sentiment classification. While our approach seems to work well for some low-resource languages like yor with BLEU score of 3.53, it may not generalize to other sequence classification tasks like question answering where translation errors may be more critical." }, { "figure_ref": [], "heading": "Ethics Statement", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "We believe our work will benefit the speakers of the languages under study and the Nollywood industry.\nWe look forward to how this dataset can be used to improve the processes of the Nollywood industry and provide data analytics on movies.\nWe acknowledge that there maybe some bias introduced due to manually translating the dataset from English, but we do not see any potential harm in releasing this dataset. While the texts were crawled online, they do not contain personal identifying information.\n(1: if they preserve sentiment, 0:otherwise) of the MT outputs. Alongside the sheets, the annotators are given an annotation guideline to guide them during the course of the annotation. Asides that the annotators are of the Nigerian descent as well as native speakers of the selected languages, their minimum educational experience is a bachelor's degree which qualifies them to efficiently read, write and comprehend the annotation materials and data to be annotated.\nTo measure the consistency of our annotators, we added repeated 5 examples out of the 100 examples. Our annotators were consistent with their annotation. We measure the inter-agreement among the two annotators per task. For adequacy, the annotators achieved Krippendorff's alpha scores of 0.675, 0.443, 0.41, 0.65 for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá respectively. Similarly, for sentiment preservation, Krippendorff's alpha scores of 1.0, 0.93, 0.48, and 0.52 for Hausa, Igbo, Nigerian-Pidgin, and Yorùbá respectively. In general, annotators reviewed the translated texts to have adequacy of 3.8 and 4.6. Nigerian-Pidgin (4.6) achieved better adequacy result as shown in Table 5 because of her closeness to English language, Igbo was rated to have a lower adequacy score (3.8). Overall, all annotators rated the translated sentences to preserve sentiment at least in 90% of the time i.e 90 out of 100 translations preserve the original sentiment in the English sentence." }, { "figure_ref": [], "heading": "C.1 Qualitative analysis", "publication_ref": [], "table_ref": [], "text": "The human evaluation is to verify the manually verify the quality of over 100 randomly selected translated sentences manually. Also, the reports from the annotators were automatically computed to support our claim that sentiment is usually preserved in MT outputs. The examples listed in Table 6 are extracted during the annotation process. The examples illustrate the noticeable mistakes in MT outputs. The annotators are expected to give a rating scale between 1-5 if the randomly selected machine translated review is adequately translated and a binary 0-1 rating scale if the sentiment of the original review is retained in the the randomly selected machine translated review.\nThe examples that are listed in Table 6 buttress our claim that MT outputs are not completely accurate as some translations in the target languages are missing thereby affecting the complete idea and meaning of the movie review that is originally " }, { "figure_ref": [], "heading": "Do these things to leave it", "publication_ref": [], "table_ref": [], "text": "Incorrect translation, sentiment preserved. Temi Otedola's performance was truly stunning. I thoroughly enjoyed the layers that the story had and the way that each key piece of information was revealed.\nIhe a o mere to . ro . m ezigbo u . to . , o . naato . kwa m u . to . otú e si ko . waa ihe ndi . di . mkpa.\nI thoroughly enjoyed the layers that the story had and the way that each key piece of information was revealed." }, { "figure_ref": [], "heading": "Incorrect and Incomplete translation, sentiment not preserved", "publication_ref": [], "table_ref": [], "text": "Nice cross-country movie. The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment. Beautiful romantic movie .\nIhe m na-adi . ghi . amasi . na fim a bu . na o . di . ghi . ihe jiko . ro . ya na ndi . Nai . jiri . a ma o . bu . ndi . India.\nThe only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment Target Language: PCM -Nigerian Pidgin Incorrect translation, sentiment preserved.\nNice cross-country movie . The only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian environment. Beautiful romantic movie .\nThe only thing wey I no like about this film na because e no too get interaction with Nigerian or Indian people.\nThe only thing that I don't like about this movie is the way there was little or no interaction with the Nigerian or Indian people." }, { "figure_ref": [], "heading": "Incorrect translation, sentiment preserved.", "publication_ref": [], "table_ref": [], "text": "A flawed first feature film , but it shows a great deal of promise Fear first feature film, but e show plenti promise.\nFear was featured in the film firstly but it shows a great deal of promise Incorrect and Incomplete translation, sentiment not preserved Spot On!!! Definitely African movie of the year, enjoyed every minute of the 2hours 30minutes Na almost every minute of the 2hours 30minutes wey dem take play for Africa film dem dey play.\nIt is almost every minute of the 2hours 30minutes that they play African movie they play Table 6: Examples of translation mistakes observed and impact on the sentiment. The Gray color identifies the sentiment portion of the review written in English, which eventually could lead to losing the sentiment of the movie review. Also, as shown in Table 6, the sentiments of some reviews are preserved regardless of the incorrect or missing translations and the idea or meaning of the review is not totally lost." }, { "figure_ref": [], "heading": "C.2 Annotation Guideline", "publication_ref": [], "table_ref": [], "text": "We provide the annotation guideline on Github 8 ." }, { "figure_ref": [], "heading": "Acknowledgements", "publication_ref": [], "table_ref": [], "text": "This material is partly based upon work supported by the National Science Foundation under Grant Numbers: 2226006, 1828199, and 1704113. We appreciate Aremu Anuoluwapo for coordinating and verifying the translation of the reviews to the Nigerian languages. We appreciate the collective efforts of the following people: Bolutife Kusimo, Oluwasijibomi Owoka, Oluchukwu Igbokwe, Boluwatife Omoshalewa Adelua, Chidinma Adimekwe, Edward Agbakoba, Ifeoluwa Shode, Mola Oyindamola, Godwin-Enwere Jefus, Emmanuel Adeyemi, Adeyemi Folusho, Shamsuddeen Hassan Muhammad, Ruqayya Nasir Iro and Maryam Sabo Abubakar for their assistance during data collection and annotation, thank you so much. David Adelani acknowledges the support of DeepMind Academic Fellowship programme. Finally, we thank the Spoken Language Systems Chair, Dietrich Klakow at Saarland University for providing GPU resources to train the models." }, { "figure_ref": [], "heading": "A Focus Languages", "publication_ref": [], "table_ref": [], "text": "We focus on four Nigerian languages from three different language families. Hausa (hau) is from the Afro-Asiatic/Chadic family spoken by over 77 million (M) people. Igbo (ibo) and Yorùbá (yor) are both from Niger-Congo/ Volta-Niger family spoken by 30M and 46M respectively. While Nigerian-Pidgin (pcm) is from the English Creole family, spoken by over 120M people. The Nigerian-Pidgin is ranked the 14th most spoken language in the world 7 . All languages make use of the Latin script. Except for Nigerian-Pidgin, the remaining are tonal languages. Also, Igbo and Yorùbá make extensive use of diacritics in texts which are essential for the correct pronunciation of words and for reducing ambiguity in understanding their meanings." }, { "figure_ref": [], "heading": "B Hyper-parameters for PLMs", "publication_ref": [ "b33" ], "table_ref": [], "text": "For fine-tuning PLMs, we make use of Hugging-Face transformers (Wolf et al., 2019). We make use of maximum sequence length of 200, batach size of 32, number of epochs of 20, and learning rate of 5e -5 for all PLMs." }, { "figure_ref": [], "heading": "C Human Evaluation", "publication_ref": [], "table_ref": [], "text": "To verify the performance of the MT model, we hire at least two native speakers of each Nigerian indigenous languages -three native Igbo speakers, four native Yorùbá speakers, four native speakers of Nigerian Pidgin and two Hausa native speakers. The annotators were individually given 100 randomly selected translated reviews in Excel sheets to report the adequacy and sentiment preservation" } ]
Africa has over 2000 indigenous languages but they are under-represented in NLP research due to lack of datasets. In recent years, there have been progress in developing labelled corpora for African languages. However, they are often available in a single domain and may not generalize to other domains. In this paper, we focus on the task of sentiment classification for crossdomain adaptation. We create a new dataset, NollySenti-based on the Nollywood movie reviews for five languages widely spoken in Nigeria (English, Hausa, Igbo, Nigerian-Pidgin, and Yorùbá). We provide an extensive empirical evaluation using classical machine learning methods and pre-trained language models. Leveraging transfer learning, we compare the performance of cross-domain adaptation from Twitter domain, and cross-lingual adaptation from English language. Our evaluation shows that transfer from English in the same target domain leads to more than 5% improvement in accuracy compared to transfer from Twitter in the same language. To further mitigate the domain difference, we leverage machine translation (MT) from English to other Nigerian languages, which leads to a further improvement of 7% over cross-lingual evaluation. While MT to low-resource languages are often of low quality, through human evaluation, we show that most of the translated sentences preserve the sentiment of the original English reviews.
NollySenti: Leveraging Transfer Learning and Machine Translation for Nigerian Movie Sentiment Classification
[ { "figure_caption": "Data source, number of movie reviews per source, and average length of reviews", "figure_data": "No.Ave. LengthData sourceSentiment Reviews(No. words) IMDB Rotten Tomatoes LetterBoxd Cinemapoint Nollyrated Otherspositive101835.0493107811541812negative88220.7292140101269746Total1900-7852471824232558TrainDev TestLanguagepos negallallallEnglish (eng) 1018 882 1300 100 500Hausa (hau)200 210410 100 500Igbo (ibo)200 210410 100 500Naija (pcm)200 210410 100 500Yorùbá (yor)450 450900 100 500", "figure_id": "tab_0", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Dataset split. The DEV and TEST split have equal number samples in positive and negative classes We train on the Twitter domain and perform cross-domain adaptation to the Nollywood movie domain. We make use of the NaijaSenti dataset for training. The datasets consist of between 12k-19k tweets for each of the Nigerian languages, 30 folds larger than our dataset.", "figure_data": "4.2 Zero-shot Adaptation4.2.1 Transfer LearningCross-domain adaptation", "figure_id": "tab_1", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "To mitigate the domain difference, we found that by Baseline result using classical machine learning and pre-trained language models. We make use of the number of training examples, N = 410, 900, and 1300. We report accuracy. Average performed over 5 runs.", "figure_data": "ParameterenghauibopcmyorModelsizeN=410 N=1300 N=410 N=410 N=410 N=410 N=900avg avg (excl. eng)LogisticReg<20K79.284.278.881.883.478.880.1 81.0 ±0.280.8 ±0.2SVM<20K79.085.279.080.683.679.781.9 81.3 ±0.681.0 ±0.6mBERT172M90.392.680.082.489.184.887.8 87.0 ±0.585.2 ±0.5XLM-R-base270M93.294.176.883.690.883.986.0 86.9 ±0.584.2 ±0.5mDeBERTaV3276M94.295.183.787.191.882.287.4 88.8 ±0.586.4 ±0.5AfriBERTa-large 126M86.289.587.288.488.385.990.9 88.1 ±0.388.1 ±0.3AfroXLMR-base 270M92.394.184.285.691.083.888.4 88.5 ±0.886.6 ±0.8hauibo pcm yoraveTwitter (lang)76.7 78.4 74.1 66.0 73.8 ±0.6IMDB (eng)71.3 71.2 84.0 66.4 73.2 ±2.2NollySenti (eng)80.2 78.9 86.2 72.8 79.5 ±2.9machine translation (en → lang)IMDB (lang, N=25k)86.8 83.8 86.8 82.0 83.0 ±1.0NollySenti (lang, N=410)84.0 86.3 81.2 83.0 83.6 ±0.6NollySenti (lang)88.3 86.5 87.0 84.0 86.4 ±0.2NollySenti (eng+lang)89.5 86.8 87.2 83.8 86.8 ±0.3Supervised87.2 88.4 88.3 90.9 88.7 ±0.3", "figure_id": "tab_2", "figure_label": "3", "figure_type": "table" }, { "figure_caption": "Zero-shot scenario using AfriBERTa-large:", "figure_data": "Lang. BLEU CHRF Adequacy sentiment preservationhau13.640.84.492.0%ibo9.833.43.892.0%pcm26.453.04.696.0%yor3.5316.94.089.5%", "figure_id": "tab_3", "figure_label": "4", "figure_type": "table" }, { "figure_caption": "Automatic (N=410) and Human evaluation (N=100) of the MT generated reviews from TRAIN split.", "figure_data": "", "figure_id": "tab_4", "figure_label": "5", "figure_type": "table" }, { "figure_caption": ". bu . ru . na i . na-eme ihe ndi . a, i . ga-enwe ike i . hapu . ya.", "figure_data": "English TranslationTarget Language TranslationLiteral Translation of Target lan-guageTarget Language: YorùbáIncorrect translation, sentiment not preserved.In the absence of such a perfectNíwòn bí k'o ti sí 'ijì líle tó dára, má s . eIn the absence of a great storm, do notstorm, avoid stabbing your walletfi \"Dagger\" yìí pa owó re . ní o . kàn re . .use this \"Dagger\" to kill your money inin the heart with this 'Dagger'.the heartDefinitely not recommendedIncorrect translation, sentiment preserved.Citation the movie. Perfect Movie.Mo fé . rà gbogbo ìs . é . jú tí mo fi ń s . e fíìmùI enjoyed every second that I used toLoved every second of the movie.náà, mo fé . kí ó máà parímake this movie. Wished it did not endWished it didn't endIncorrect and Incomplete translation, sentiment not preservedFunny Funny Funny. Oh mehn, thisOrinrinrinrinrinrin...song....... (MT output is nonsensical)movie is super funny. if you are look-ing for a movie to lift your mood upthis is the right movie for you .Target Language: IgboIncorrect translation, sentiment not preserved.Fifty minutes is spent advertisinga holiday resort in Lagos, Moviecloses.Money down the drain.Not recommended.", "figure_id": "tab_5", "figure_label": "", "figure_type": "table" } ]
Iyanuoluwa Shode; David Ifeoluwa Adelani; Jing Peng; Anna Feldman
[ { "authors": "Ife Adebara; Muhammad Abdul-Mageed", "journal": "Association for Computational Linguistics", "ref_id": "b0", "title": "Towards afrocentric NLP for African languages: Where we are and where we can go", "year": "2022" }, { "authors": "David Adelani; Jesujoba Alabi; Angela Fan; Julia Kreutzer; Xiaoyu Shen; Machel Reid; Dana Ruiter; Dietrich Klakow; Peter Nabende; Ernie Chang; Tajuddeen Gwadabe; Freshia Sackey; F P Bonaventure; Chris Dossou; Colin Emezue; Michael Leong; Shamsuddeen Beukman; Guyo Muhammad; Oreen Jarso; Andre Yousuf; Gilles Niyongabo Rubungo; Eric Hacheme; Muhammad Umair Peter Wairagala; Benjamin Nasir; Tunde Ajibade; Yvonne Ajayi; Jade Gitau; Mohamed Abbott; Millicent Ahmed; Anuoluwapo Ochieng; Perez Aremu; Jonathan Ogayo; Fatoumata Mukiibi; Godson Ouoba Kabore; Derguene Kalipe; Mbaye; Auguste Allahsera; Victoire Tapo; Edwin Memdjokam Koagne; Valencia Munkoh-Buabeng; Idris Wagner; Ayodele Abdulmumin; Happy Awokoya; Blessing Buzaaba; Andiswa Sibanda; Sam Bukula; Manthalu", "journal": "Association for Computational Linguistics", "ref_id": "b1", "title": "a. A few thousand translations go a long way! leveraging pre-trained models for African news translation", "year": "2022" }, { "authors": "David Adelani; Graham Neubig; Sebastian Ruder; Shruti Rijhwani; Michael Beukman; Chester Palen-Michel; Constantine Lignos; Jesujoba Alabi; Shamsuddeen Muhammad; Peter Nabende; M Cheikh; Andiswa Bamba Dione; Rooweither Bukula; Mabuya; F P Bonaventure; Blessing Dossou; Happy Sibanda; Jonathan Buzaaba; Godson Mukiibi; Derguene Kalipe; Amelia Mbaye; Fatoumata Taylor; Chris Kabore; Anuoluwapo Chinenye Emezue; Perez Aremu; Catherine Ogayo; Edwin Gitau; Victoire Munkoh-Buabeng; Memdjokam Koagne; Auguste Allahsera; Tebogo Tapo; Vukosi Macucwa; Mboning Marivate; Tajuddeen Tchiaze Elvis; Tosin Gwadabe; Orevaoghene Adewumi; Joyce Ahia; Neo Nakatumba-Nabende; Ignatius Lerato Mokono; Chiamaka Ezeani; Chukwuneke; Oluwaseun Mofetoluwa; Gilles Adeyemi; Idris Quentin Hacheme; Odunayo Abdulmumin; Oreen Ogundepo; Tatiana Yousuf; Dietrich Moteu; Klakow", "journal": "Association for Computational Linguistics", "ref_id": "b2", "title": "MasakhaNER 2.0: Africa-centric transfer learning for named entity recognition", "year": "2022" }, { "authors": "David Adelani; Dana Ruiter; Jesujoba Alabi; Damilola Adebonojo; Adesina Ayeni; Mofe Adeyemi; Ayodele Esther Awokoya; Cristina España-Bonet ", "journal": "Virtual. Association for Machine Translation in the Americas", "ref_id": "b3", "title": "The effect of domain and diacritics in Yoruba-English neural machine translation", "year": "2021" }, { "authors": "David Ifeoluwa Adelani; Jade Abbott; Graham Neubig; D' Daniel; Julia Souza; Constantine Kreutzer; Chester Lignos; Happy Palen-Michel; Shruti Buzaaba; Sebastian Rijhwani; Stephen Ruder; Israel Mayhew; Shamsuddeen H Abebe Azime; Chris Chinenye Muhammad; Joyce Emezue; Perez Nakatumba-Nabende; Aremu Ogayo; Catherine Anuoluwapo; Derguene Gitau; Jesujoba Mbaye; Seid Alabi; Tajuddeen Muhie Yimam; Ignatius Rabiu Gwadabe; Ezeani; Andre Rubungo; Jonathan Niyongabo; Verrah Mukiibi; Iroro Otiende; Davis Orife; Samba David; Tosin Ngom; Paul Adewumi; Mofetoluwa Rayson; Gerald Adeyemi; Emmanuel Muriuki; Chiamaka Anebi; Nkiruka Chukwuneke; Eric Odu; Samuel Peter Wairagala; Clemencia Oyerinde; Tobius Siro; Temilola Saul Bateesa; Yvonne Oloyede; Victor Wambui; Deborah Akinode; Maurice Nabagereka; Ayodele Katusiime; Awokoya; Mboup Mouhamadane; Dibora Gebreyohannes; Henok Tilaye; Kelechi Nwaike; Degaga Wolde; Abdoulaye Faye; Blessing Sibanda; Orevaoghene Ahia; F P Bonaventure; Kelechi Dossou; Thierno Ogueji; Diop Ibrahima; Abdoulaye Diallo; Adewale Akinfaderin; Tendai Marengereke; Salomey Osei", "journal": "Transactions of the Association for Computational Linguistics", "ref_id": "b4", "title": "MasakhaNER: Named entity recognition for African languages", "year": "2021" }, { "authors": "David Ifeoluwa Adelani; Marek Masiak; Jesujoba Israel Abebe Azime; Atnafu Oluwadara Alabi; Christine Lambebo Tonja; Odunayo Mwase; Ogundepo; F P Bonaventure; Akintunde Dossou; Doreen Oladipo; Chris C Nixdorf; Sana Emezue; Al-Azzawi; K Blessing; Davis Sibanda; Lolwethu David; Jonathan Ndolela; Tunde Mukiibi; Tatiana Moteu Oluwaseyi Ajayi; Brian Ngoli; Abraham Toluwase Odhiambo; Nnaemeka C Owodunni; Obiefuna; Hassan Shamsuddeen; Saheed Muhammad; Mesay Salahudeen Abdullahi; Tajuddeen Gemeda Yigezu; Idris Rabiu Gwadabe; Abdulmumin; Taye Mahlet; Oluwabusayo Bame; Iyanuoluwa Olufunke Awoyomi; Tolulope Shode; Anu Adelani; Abdulganiy Habiba; Abdul-Hakeem Kailani; Adetola Omotayo; Afolabi Adeeko; Anuoluwapo Abeeb; Olanrewaju Aremu; Clemencia Samuel; Wangari Siro; Onyekachi Kimotho; Chinedu E Raphael Ogbu; Chiamaka Mbonu; Samuel Ijeoma Chukwuneke; Jessica Fanijo; Ojo; F Oyinkansola; Tadesse Awosan; Kebede Guge; Toadoum Sakayo; Pamela Sari; Freedmore Nyatsine; Oreen Sidume; Mardiyyah Yousuf; Oduwole; Abre Ussen; Kanda Kimanuka; Thina Patrick Tshinu; Siyanda Diko; Abdulmejid Nxakama; Sinodos Tuni Johar; Muhidin A Gebre; S A Mohamed; Mohamed; Mire Fuad; Moges Hassan; Evrard Ahmed Mehamed; Pontus Ngabire; Stenetorp", "journal": "", "ref_id": "b5", "title": "MasakhaNEWS: News topic classification for african languages", "year": "2023" }, { "authors": "O Jesujoba; David Alabi; Marius Ifeoluwa Adelani; Dietrich Mosbach; Klakow", "journal": "International Committee on Computational Linguistics", "ref_id": "b6", "title": "Adapting pretrained language models to African languages via multilingual adaptive fine-tuning", "year": "2022" }, { "authors": "Alexandra Balahur; Marco Turchi", "journal": "Association for Computational Linguistics", "ref_id": "b7", "title": "Multilingual sentiment analysis using machine translation", "year": "2012" }, { "authors": "Alexandra Balahur; Marco Turchi", "journal": "INCOMA Ltd. Shoumen", "ref_id": "b8", "title": "Improving sentiment analysis in Twitter using multilingual machine translated data", "year": "2013" }, { "authors": "Lin Su; Lisa Blodgett; Brendan O' Green; Connor", "journal": "Association for Computational Linguistics", "ref_id": "b9", "title": "Demographic dialectal variation in social media: A case study of African-American English", "year": "2016" }, { "authors": "Alexis Conneau; Kartikay Khandelwal; Naman Goyal; Vishrav Chaudhary; Guillaume Wenzek; Francisco Guzmán; Edouard Grave; Myle Ott; Luke Zettlemoyer; Veselin Stoyanov", "journal": "Association for Computational Linguistics", "ref_id": "b10", "title": "Unsupervised cross-lingual representation learning at scale", "year": "2020" }, { "authors": "Jacob Devlin; Ming-Wei Chang; Kenton Lee; Kristina Toutanova", "journal": "Association for Computational Linguistics", "ref_id": "b11", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "year": "2019" }, { "authors": "", "journal": "", "ref_id": "b12", "title": "Ethnologue: Languages of the world", "year": "2021" }, { "authors": "Roald Eiselen", "journal": "European Language Resources Association (ELRA", "ref_id": "b13", "title": "Government domain named entity recognition for South African languages", "year": "2016" }, { "authors": "Pengcheng He; Jianfeng Gao; Weizhu Chen", "journal": "", "ref_id": "b14", "title": "Debertav3: Improving deberta using electra-style pretraining with gradient-disentangled embedding sharing", "year": "2021" }, { "authors": "Ruining He; Julian Mcauley", "journal": "CHE. International World Wide Web Conferences Steering Committee", "ref_id": "b15", "title": "Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering", "year": "2016" }, { "authors": "Pratik Joshi; Sebastin Santy; Amar Budhiraja; Kalika Bali; Monojit Choudhury", "journal": "Association for Computational Linguistics", "ref_id": "b16", "title": "The state and fate of linguistic diversity and inclusion in the NLP world", "year": "2020" }, { "authors": "Nikola Ljubesic; Davor Lauc", "journal": "", "ref_id": "b17", "title": "Bertić -the transformer language model for bosnian, croatian, montenegrin and serbian", "year": "2021" }, { "authors": "Andrew L Maas; Raymond E Daly; Peter T Pham; Dan Huang; Andrew Y Ng; Christopher Potts", "journal": "Association for Computational Linguistics", "ref_id": "b18", "title": "Learning word vectors for sentiment analysis", "year": "2011" }, { "authors": "Maria Mazzoli", "journal": "English World-Wide", "ref_id": "b19", "title": "The ideological debate on naijá and its use in education", "year": "2021" }, { "authors": "Shamsuddeen Hassan; Muhammad ; Idris Abdulmumin; Abinew Ali Ayele; Nedjma Djouhra Ousidhoum; David Ifeoluwa Adelani; Seid Muhie Yimam; Ibrahim Said Ahmad; Meriem Beloucif; M Saif; Sebastian Mohammad; Oumaima Ruder; Pavel Hourrane; Brazdil; D Felermino; 'onio Ant; Davis C Ali; Salomey Davis; Osei; Shehu Bello; Falalu Bello; Tajuddeen Ibrahim; Samuel Rabiu Gwadabe; Tadesse Rutunda; Wendimu Destaw Belay; Hailu Baye Messelle; Sisay Beshada Balcha; Hagos Tesfahun Adugna Chala; Bernard Gebremichael; Steven Opoku; Arthur", "journal": "", "ref_id": "b20", "title": "Afrisenti: A twitter sentiment analysis benchmark for african languages", "year": "2023" }, { "authors": "Shamsuddeen Hassan; Muhammad ; David Ifeoluwa Adelani; Sebastian Ruder; Ibrahim Sa'id Ahmad; Idris Abdulmumin; Shehu Bello; Monojit Bello; Chris Choudhury; Saheed Chinenye Emezue; Anuoluwapo Salahudeen Abdullahi; Alípio Aremu; Pavel Jorge; Brazdil", "journal": "European Language Resources Association", "ref_id": "b21", "title": "NaijaSenti: A nigerian Twitter sentiment corpus for multilingual sentiment analysis", "year": "2022" }, { "authors": "Marta Nllb-Team; James Ruiz Costa-Jussà; Cross; Maha Onur Ccelebi; Kenneth Elbayad; Kevin Heafield; Elahe Heffernan; Janice Kalbassi; Daniel Lam; Jean Licht; Anna Maillard; Skyler Sun; Guillaume Wang; Alison Wenzek; Bapi Youngblood; Loïc Akula; Gabriel Mejia Barrault; Prangthip Gonzalez; John Hansanti; Semarley Hoffman; Jarrett; Ram Kaushik; Dirk Sadagopan; Shannon L Rowe; C Spruit; Pierre Tran; Necip Andrews; Shruti Fazil Ayan; Sergey Bhosale; Angela Edunov; Cynthia Fan; Vedanuj Gao; Francisco Goswami; Philipp Guzm'an; Alexandre Koehn; Christophe Mourachko; Safiyyah Ropers; Holger Saleem; Jeff Schwenk; Wang", "journal": "", "ref_id": "b22", "title": "No language left behind: Scaling human-centered machine translation", "year": "2022" }, { "authors": "Kelechi Ogueji; Yuxin Zhu; Jimmy Lin", "journal": "Association for Computational Linguistics", "ref_id": "b23", "title": "Small data? no problem! exploring the viability of pretrained multilingual language models for lowresourced languages", "year": "2021" }, { "authors": "Bo Pang; Lillian Lee", "journal": "Association for Computational Linguistics", "ref_id": "b24", "title": "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales", "year": "2005" }, { "authors": "F Pedregosa; G Varoquaux; A Gramfort; V Michel; B Thirion; O Grisel; M Blondel; P Prettenhofer; R Weiss; V Dubourg; J Vanderplas; A Passos; D Cournapeau; M Brucher; M Perrot; E Duchesnay", "journal": "Journal of Machine Learning Research", "ref_id": "b25", "title": "Scikit-learn: Machine learning in Python", "year": "2011" }, { "authors": "Alberto Poncelas; Pintu Lohar; James Hadley; Andy Way", "journal": "", "ref_id": "b26", "title": "The impact of indirect machine translation on sentiment classification", "year": "2020" }, { "authors": "Maria Edoardo; Goran Ponti; Olga Glavaš; Qianchu Majewska; Ivan Liu; Anna Vulić; Korhonen", "journal": "", "ref_id": "b27", "title": "XCOPA: A multilingual dataset for causal commonsense reasoning", "year": "2020" }, { "authors": "Eshrag Refaee; Verena Rieser", "journal": "Association for Computational Linguistics", "ref_id": "b28", "title": "Benchmarking machine translated sentiment analysis for Arabic tweets", "year": "2015" }, { "authors": "Sara Rosenthal; Noura Farra; Preslav Nakov", "journal": "Association for Computational Linguistics", "ref_id": "b29", "title": "SemEval-2017 task 4: Sentiment analysis in Twitter", "year": "2017" }, { "authors": "Iyanuoluwa Shode; David Ifeoluwa Adelani; Anna Feldman", "journal": "", "ref_id": "b30", "title": "yosm: A new yoruba sentiment corpus for movie reviews", "year": "2022" }, { "authors": "Tamar Daan Van Esch; Sebastian Lucassen; Isaac Ruder; Clara Caswell; Rivera", "journal": "European Language Resources Association", "ref_id": "b31", "title": "Writing system and speaker metadata for 2,800+ language varieties", "year": "2022" }, { "authors": "Genta Indra Winata; Alham Fikri Aji; Samuel Cahyawijaya; Rahmad Mahendra; Fajri Koto; Ade Romadhony; Kemal Kurniawan; David Moeljadi; Radityo Eko Prasojo; Pascale Fung; Timothy Baldwin; Jey ; Han Lau; Rico Sennrich; Sebastian Ruder", "journal": "Association for Computational Linguistics", "ref_id": "b32", "title": "NusaX: Multilingual parallel sentiment dataset for 10 Indonesian local languages", "year": "2023" }, { "authors": "Thomas Wolf; Lysandre Debut; Victor Sanh; Julien Chaumond; Clement Delangue; Anthony Moi; Pierric Cistac; Tim Rault; Rémi Louf; Morgan Funtowicz; Jamie Brew", "journal": "", "ref_id": "b33", "title": "Huggingface's transformers: State-of-the-art natural language processing", "year": "2019" }, { "authors": "Hizkiel Seid Muhie Yimam; Abinew Mitiku Alemayehu; Chris Ayele; Biemann", "journal": "International Committee on Computational Linguistics", "ref_id": "b34", "title": "Exploring Amharic sentiment analysis from social media texts: Building annotation tools and classification models", "year": "2020" }, { "authors": "Xiang Zhang; Junbo Zhao; Yann Lecun", "journal": "MIT Press", "ref_id": "b35", "title": "Character-level convolutional networks for text classification", "year": "2015" } ]
[]
10.1145/nnnnnnn.nnnnnnn
2023-05-25
[ { "figure_ref": [], "heading": "INTRODUCTION", "publication_ref": [ "b6", "b52", "b33", "b40", "b41", "b51", "b3", "b25", "b0", "b25", "b26", "b39", "b0", "b25", "b26", "b39", "b0", "b21", "b38", "b4", "b35", "b25", "b49", "b26", "b39", "b28" ], "table_ref": [], "text": "In general, vision-based object detection plays a critical role in autonomous driving and infrastructure-independent robot navigation. These detection techniques are employed to interpret the surrounding environment by recognizing and categorizing object instances, as well as determining their spatial positions and orientations. Although recent advances in 2D object detection [7,53] have shown significant improvements in both accuracy and processing speed, 3D object detection remains a more complex task, as it strives to simultaneously determine the pose and location of each object.\nFigure 1: The top row illustrates 3D object detection methods in adverse scenes: (a) existing models overlooking environmental context, causing errors; (b) conventional solutions using image restoration, potentially yielding unsuitable images; (c) Our MonoTDP method that employs adaptive learning strategy to acclimate to harsh weather by penalizing perceptual errors and improving instance depth estimation via perceiving scene depth. The bottom row showcases the performance comparison, revealing MonoTDP's superior average precision across all six weather conditions, highlighting its significant performance enhancement.\nA common approach to 3D object detection frequently involves the use of LiDAR sensors or stereo cameras for depth estimation [34,41,42,52]. Although these methods can provide accurate depth information, they significantly increase the cost of implementing practical systems. Consequently, monocular 3D object detection [4,26] has emerged as a promising alternative, garnering considerable interest within the research community.\nOver the past few years, numerous monocular 3D object detection techniques have been proposed and successfully implemented. These approaches can be broadly classified into two categories: those based on single images [1,26] and those leveraging auxiliary information [27,40]. Single-image based techniques, such as M3D-RPN [1] and MonoDLE [26], primarily focus on extracting depth information from a single input image. They employ innovative strategies like depth-aware convolution and depth error analysis to enhance detection performance, offering cost-effectiveness and simplicity in the process. On the other hand, auxiliary-information based methods, including RoI-10D [27] and Pseudo-LiDAR [40], incorporate supplementary data sources, such as CAD models or point clouds, to enrich the detection process. This additional information helps to overcome the limitations inherent in single-image based approaches, leading to more robust and accurate object localization and classification. Collectively, these methodologies have significantly advanced the field of 3D object detection, enabling more accurate and reliable object localization and classification in various application scenarios.\nNevertheless, these methods have encountered several issues, primarily in the following three aspects. (𝑖) 3D object detection is inevitable to face real-world adverse conditions, such as rain occlusions, fog-induced scattering, and texture loss in low-light situations. These conditions lead to degraded image quality, partial or complete occlusion of objects, blurring effects, and reduced contrast, ultimately undermining the detection performance. (𝑖𝑖) Monocular 3D detection is inherently limited by the single viewpoint, making it difficult to recover depth information from a single 2D image, as the camera projection process results in the loss of spatial information. This limitation can lead to ambiguities and uncertainties in depth estimation, posing challenges for accurate object localization and classification. (𝑖𝑖𝑖) The scarcity of datasets hinders advancements in the field. There is a lack of comprehensive datasets capturing a wide range of adverse weather conditions and complex environments characteristic of real-world driving scenarios. This deficiency restricts the learning of complementary information and validation of detection algorithm effectiveness, thus impeding progress in developing more robust and adaptable 3D object detection techniques.\nTo address the aforementioned issues, we propose a novel monocular 3D object detection method tailored for challenging environments, dubbed MonoTDP. Our approach incorporates an adaptive learning strategy and a twin depth perception module. The adaptive learning strategy, during training, penalizes incorrect perception of harsh conditions, fostering robust multi-environment capabilities. This allows our method to better adapt to complex scenarios, such as rain, fog, and low light. The twin depth perception module tackles depth ambiguity by estimating both scene and object depth using scene-level and object-level features, effectively recovering missing depth cues in degraded regions. We introduce a diverse dataset covering various challenging scenarios, including moderate fog, thick fog, dense fog, moderate rain, heavy rain, dense rain, and low light conditions. Each category consists of 7,481 images. Figure 1 demonstrates that our proposed model outperforms state-of-theart(SOTA) 3D object detectors and cascade of image enhancement and 3D detection models. Our contributions are four-fold:\n• We introduce a robust network specifically designed to handle a variety of adverse environments, significantly improving the performance and resilience of monocular 3D object detection models across a wide range of challenging realworld situation. The most commonly used methods take only one image as input and output the 3D information of the object in the image. To estimate the depth, M3D-RPN [1] designed a depth-aware convolution that can better obtain 3D area proposals to perceive depth information. For the simplicity and effectiveness of the model, SMOKE [22] and FCOS3D [39] proposed a one-stage monocular 3D detection model based on CenterNet [5] and FCOS [36] respectively. MonoDLE [26] obtained more reliable results by analyzing the manually designed depth error. MonoFLEX [50] used uncertainty-guided depth and adopted special treatments for different objectives.\nPlenty of approaches use additional data to assist the learning of monocular 3D detection models. RoI-10D [27] used CAD models to introduce prior knowledge to enhance training samples. Pseudo-LiDAR [40] proposed to lift the estimated depth to the point cloud and then use a detector based on the LiDAR method. DID-M3D [29] used a dense depth map. However, the previous works do not consider the impact of complex environments. Thus, we design a model that can adapt to various adverse scenes." }, { "figure_ref": [], "heading": "Degradation Factor Removal", "publication_ref": [ "b11", "b17", "b18", "b36", "b42", "b1", "b10", "b15", "b19", "b45", "b2", "b13", "b14", "b16", "b20", "b43", "b50", "b42", "b46", "b29", "b1", "b44", "b47", "b22", "b24", "b31", "b32", "b34", "b32", "b11", "b11", "b37", "b12" ], "table_ref": [], "text": "It has been widely explored to remove degradation factors from images in adverse weathers, such as rain removal [12,18,19,37,43],fog removal [2,11,16,20,46], low-light enhancement [3,14,15,17,21,44,51]. To remove rain, [43] split the rain streak into different layers, [47] used a GAN-based method and [30] used a dual attention mechanism. To cope with haze, DehazeNet [2] restored the visibility of images through a scattering transformation. DCPDN [45] generated transmission map, atmospheric light and dehazing map. [48] designed a hierarchical dense perceptual network. To enhance lowlight, LLNet [23] adaptively brightened images through multi-layer encoders. [25,32,33,35] used multi-scale features to better restore clear images. [33] used two deep networks to decouple images.\nFurthermore, there are some all-in-one [12] networks. [12] used multiple discriminative encoders to deal with different environments. U-former [38], Swin-IR [13] proposed to solve image restoration problems in adverse environments based on different Transformers. Different from them, the method we proposed can both handle multiple adverse scenes and benefit 3D object detection through an adaptive learning strategy." }, { "figure_ref": [], "heading": "Depth Estimation", "publication_ref": [ "b27", "b8", "b7", "b49" ], "table_ref": [], "text": "There are also methods that utilize geometric constraints and scene priors to use auxiliary data to help depth estimation. The early work [28] used 2D-3D box geometric constraints to estimate instance depth. But this indirect approach which does not fully use of supervisions has poor performance. [9] predicted nine perspective key-points of a 3D bounding box in the image space and optimized the initially estimated instance depth by minimizing projection errors. [8] followed this line and integrated this optimization into an end-to-end training process. Recently, [50] predicted the nineperspective key-points and produced new instance depths by using the projection heights in pair key-points and geometric relationships.\nIn this paper, we propose a 3D object detection model facing adverse scenes that can benefit from twin depth perception and adaptive learning strategy." }, { "figure_ref": [ "fig_0" ], "heading": "METHOD", "publication_ref": [], "table_ref": [], "text": "Under various inclement weather conditions, the specific spectral interaction between captured objects and the camera may be affected by the absorption and scattering caused by suspended water droplets, dust, and other particulates, resulting in the loss of depth information for 3D object detection. To address this challenge, we present a monocular 3D object detection model, namely MonoTDP, for adverse environments. As depicted in Figure 2, images undergo processing via a shared feature extractor, which constitutes a part of the 2D detection backbone and is guided by the adaptive learning strategy which effectively mitigates the interference caused by various degrading factors. Consequently, we obtain deep features, 2D bounding boxes, and fundamental 3D bounding box information, such as dimensions, 3D projected centers, and angles. Subsequently, the twin depth perception module simultaneously predicts scene depth and object depth. The integration of scene depth and object depth yields accurate inferred depth values under different degrees of challenging conditions, facilitated by the comprehensive interaction between scene-level and object-level features." }, { "figure_ref": [ "fig_1" ], "heading": "Adverse Condition Datasets Generation", "publication_ref": [ "b48", "b9" ], "table_ref": [], "text": "Currently, there is a significant shortage of reliable 3D object detection datasets specifically designed for adverse weather conditions.\nTo address this issue, we have compiled datasets encompassing a wide range of adverse weather conditions, such as fog, rain, and low light, to conduct comprehensive experiments using our proposed approach, MonoTDP.\nIn the literature, various weather phenomena are modeled differently based on their underlying physical properties. The process of synthesizing adverse weather conditions is primarily based on the simulation of their corresponding atmospheric effects. According to the atmospheric light attenuation theory [49], the fog condition is modeled as:\nI = B ⊙ T + A ⊙ (1 -T),(1)\nwhere I represents the degraded image, B denotes the background, A refers to the atmospheric light in the scene, and T signifies the light propagation formula, which can be expressed as:\nT = 𝑒 -𝛽d ,(2)\nwhere d corresponds to the depth value of the image, and 𝛽 is a variable to modulate the scattering effect. This model enables the accurate representation of foggy conditions by simulating the scattering of light particles due to the presence of fog. Rain with rain streaks and fog effect [10] is modeled as:\nI = T ⊙ B + 𝑛 ∑︁ 𝑖 R 𝑖 + (1 -T) ⊙ 𝐴,(3)\nwhere, R represents the raindrop residual. This model incorporates the dynamics of raindrops and their impact on the image quality, taking into account the distortion caused by rain streaks and the interaction of raindrops with the scene's atmospheric light.\nIn addition, the image brightness is reduced using the 𝛾 correction method to simulate low light conditions as follows:\nI = F(B, 𝛾),(4)\nwhere F stands for the replacement of the look-up table, and 𝛾 indicates the gamma value for luminance correction. This method allows us to create a more realistic representation of images captured in low light scenarios by adjusting the overall brightness and contrast of the scene. By synthesizing these diverse adverse weather conditions in our dataset, we can effectively evaluate the performance of our proposed MonoTDP model under various challenging scenarios. The data collected for our experiments is illustrated in Figure 3." }, { "figure_ref": [ "fig_0", "fig_2" ], "heading": "Adaptive Learning Strategy", "publication_ref": [ "b30" ], "table_ref": [], "text": "In this work, we propose a novel adaptive learning strategy comprising a weak constraint encoder and a strong constraint decoder, which are specifically designed to act as a constraint, rather than focusing on image restoration. The primary objective of this module is to facilitate the model's ability to learn and perceive intrinsic features under various adverse conditions.\nThe weak constraint encoder is devised to understand image features and discern the intricate characteristics of distinct adverse environments by concurrently analyzing intra-patch and normal patch features. This approach assists the model in rectifying inaccurate feature perception under a range of adverse conditions. The strong constraint decoder employs learnable scene penalization queries to detect and penalize the model for incorrect perception across diverse environments. By doing so, it enforces the model to capture the essence of features while suppressing potential errors. The output features from both the encoder and decoder are subsequently fed to the Projection module, which serves to enhance the scene adaptability of MonoTDP. Notably, this learning strategy is only required during the training phase, thus ensuring efficiency and effectiveness in learning essential features in adverse environments.\nGiven a degraded image 𝐼 of size 𝐻 × 𝑊 × 3, a common feature extractor is applied to generate low-level features ( 𝐻 4 × 𝑊 4 × 𝐶). These features are then input into the weak constraint encoder containing SwinBlocks at different stages. We use intra-patch at each stage, where the resolution is reduced to assist the module in learning both coarse and fine contents. Figure 2 provides an overview of the adaptive learning strategy.\nThe weak constraint encoder is designed to extract multi-level features, thereby generating a hierarchical representation of the input image. During each stage, patch merging is utilized to decrease the resolution, and the merged features are passed on to the subsequent stage. SwinBlocks are then employed to perform feature transformation while maintaining the resolution. A Swin-Block comprises a shifted window-based MSA SW and an MLP, as depicted in Figure 4. Layer Normalization ((LN) is applied prior to each MSA W and MLP module, and a residual connection is incorporated following each module. The specific computation process for two consecutive blocks is as follows:\nẑ𝑙 = ℎ ∑︁ 𝑖=1 w 𝑖 MSA W LN 𝒛 𝑙 -1 + 𝒛 𝑙 -1 , 𝒛 𝑙 = mlp LN ẑ𝑙 + ẑ𝑙 , ẑ𝑙+1 = ℎ ∑︁ 𝑖=1 w 𝑖 MSA SW LN 𝒛 𝑙 + 𝒛 𝑙 , 𝒛 𝑙+1 = mlp LN ẑ𝑙+1 + ẑ𝑙+1 ,(5)\nwhere ẑ𝑙 and z 𝑙 represent the output of 𝑙th MSA (S)W and MLP respectively. MSA W and MSA SW denote window based self-attention 5)). MSA W and MSA SW are multi-head self attention modules with regular and shifted windowing configurations, respectively. At each stage, we perform a patch merging after SwinBlock except for the last stage.\nusing traditional and shifted window. The shifted window introduces the connection between diverse parts of the feature map and the computational complexity is only linearly related to the image size. At the same time, intra-patch utilizes a similar SwinBlock as afore-mentioned. Following [31], self-attention is calculated as:\nAttention(𝑄, 𝐾, 𝑉 ) = SoftMax 𝑄𝐾 𝑇 / √ 𝑑 + 𝐵 𝑉 ,(6)\nwhere 𝑄, 𝐾, 𝑉 are queries keys and values that have same dimensions. 𝐵 is relative position bias.\nIn the strong constraint decoder, scene penalization queries are utilized to output a task feature vector, focusing on the multi-level features from the encoder. The decoder has only one stage but contains multiple blocks. Cross-attention is applied in this module, with K and V taken from the same output features as the last stage of the encoder, and Q being the learnable queries.\nThe output features of the decoder serve as the weather type task vector and are fused with the features produced by each stage of the encoder. Both output features from the encoder and decoder are then fed to the projection module and are constrained by a 𝑠𝑚𝑜𝑜𝑡ℎ L 1 𝑙𝑜𝑠𝑠. By incorporating weak constraint encoders and strong constraint decoders, the adaptive learning strategy can effectively adapt to diverse adverse environments and improve the precision of 3D object detection in adverse weather conditions. The effectiveness of this learning strategy will be further demonstrated in the subsequent ablation experiments." }, { "figure_ref": [ "fig_0" ], "heading": "3D Object Detection in Adverse Scenes", "publication_ref": [], "table_ref": [], "text": "Figure 2 shows the framework of our approach, monocular 3D object detection takes an RGB image as input and constructs a 3D bounding box for the object in 3D space. The 3D bounding box consists of the object's three-dimensional center position (𝑥, 𝑦, 𝑧), size (ℎ, 𝑤, 𝑙) and direction Θ that usually refers to the observation angle.\nOur monocular 3D detection network obtains the constraint features that are optimized with adaptive learning strategy which are resistant to degrading factors. These features are then forwarded to predict 2D bounding boxes. Concretely, 2D detection backbone is applied to produce high-level deep features, and then these features are aggregated to get deep features with resolution F ∈ R " }, { "figure_ref": [ "fig_0" ], "heading": "Loss Functions", "publication_ref": [ "b4", "b28", "b23" ], "table_ref": [], "text": "Our proposed model is designed to effectively carry out 3D object detection in intricate environments. Throughout the training process, we concurrently compute the losses associated with the adaptive learning strategy and the 3D object detection task.\nTo accurately capture feature representations under adverse weather conditions, our adaptive learning strategy employs the 𝑠𝑚𝑜𝑜𝑡ℎ 𝐿1 loss to penalize the incorrect perception of features in challenging scenarios. This loss function is formulated as follows:\nL 𝑠𝑚𝑜𝑜𝑡ℎ 𝐿 1 = 0.5E 2 if |E| < 1 |E| -0.5 otherwise ,(7)\nwhere E represents the difference between the perceived scene and real scene. For 3D object detection, the loss function is as the following formula. It can be divided into 2D detection part and 3D detection part. As shown in Figure 2, we use the 2D heatmap 𝐻 to indicate the rough object center on the image. Its size is 𝐻 8 × 𝑊 8 × 𝐵, and 𝐵 is the number of categories. The 2D offset 𝑂 2𝐷 refers to the residual towards rough 2D centers, and S 2𝐷 denotes the 2D box height and width. We follow [5] to use loss functions L 𝐻 , L 𝑂 2𝑑 , L 𝑆 2𝑑 .\nFor the dimensions of the 3D object, we use the typically designed L 𝑆 3𝑑 and multi-bin to calculate L Θ for the prediction of the object observation angle. Furthermore, the position of object is recovered by using the 3D center projection and instance depth. It is achieved by predicting 3D projection offset to the 2D center, and uses smooth L1 loss function L 𝑂 3𝑑 . In addition, the instance depth is decoupled into scene depth and object depth. Like [29], the depth projected by LiDAR is used as the supervision of the scene depth, and the subtraction between the instance depth and the scene depth is used as the supervision of the object depth. The Instance depth is supervised as the sum of scene depth and object depth. The instance depth loss is L 𝐷 𝑖𝑛𝑠 and uncertainty regression loss [24] is applied as:\nL 𝐷 𝑖𝑛𝑠 = √ 2 𝑢 𝑣𝑖𝑛𝑠 𝑑 𝑖𝑛𝑠 -𝑑 𝑔𝑡 𝑖𝑛𝑠 + log (𝑢 𝑖𝑛𝑠 ) ,(8)\nwhere 𝑢 𝑖𝑛𝑠 denotes the uncertainty and 𝑔𝑡 is the corresponding label. We set the weight of each loss term to 1.0. The overall loss is:\nL = L 𝐻 + L 𝑂 2𝑑 + L 𝑆 2𝑑 + L 𝑆 3𝑑 + L Θ + L 𝑂 3𝑑 + L 𝐷 𝑖𝑛𝑠 . (9\n)" }, { "figure_ref": [], "heading": "EXPERIMENTS", "publication_ref": [], "table_ref": [], "text": "This section compares the results of our method for 3D object detection in various adverse environments. " }, { "figure_ref": [], "heading": "Datasets and Metric", "publication_ref": [ "b5", "b3" ], "table_ref": [], "text": "We evaluate the performance of our methods and 5 state-of-the-art under synthetic KITTI 3D dataset, comprising 7,481 images for each of moderate fog, thick fog, dense fog, moderate rain, heavy rain, dense rain, and low light conditions. as in [6]. Following the methodology of [4], the dataset is partitioned into 3,712 sub-training sets and 3,769 validation sets. Detection outcomes are presented in three levels of difficulty, namely easy, moderate, and hard, with the moderate scores generally utilized for ranking purposes. To assess our performance, we use average precision as the evaluation metric. The 3D bounding box is represented as AP3D 𝑅40 , where R40 signifies 40 recall positions. For the three aforementioned categories, the Intersection over Union (IoU) threshold for cars is set to 0.7." }, { "figure_ref": [], "heading": "Implementation Details", "publication_ref": [ "b23", "b23" ], "table_ref": [], "text": "We conduct our experiments using 4 NVIDIA RTX TITAN XP GPUs and a batch size of 8. Our implementation is built upon the PyTorch framework. We train the network for 140 epochs, following the Hierarchical Task Learning (HTL) strategy [24]. The Adam optimizer is employed with an initial learning rate of 1e-5. We apply a linear warm-up strategy to raise the learning rate to 1e-3 during the initial 5 epochs. Subsequently, the learning rate decays at epochs 50 and 80 with a decay rate of 0.1. For the multi-bin orientation 𝜃 , we set k to 12. The backbone and head architecture are designed in accordance with [24]. Input images are resized to a resolution of 1280 × 384, with pixel values in the range of [0, 255]. The pixel intensities are then adjusted based on the mean pixel intensity of the entire dataset." }, { "figure_ref": [], "heading": "Comparison with 3D Detection Methods", "publication_ref": [], "table_ref": [ "tab_2" ], "text": "In this section, we conduct a comprehensive comparison between our proposed method and several state-of-the-art monocular 3D object detection techniques under various adverse weather conditions. These conditions include moderate fog, thick fog, moderate rain, heavy rain, dense rain, and low light. The car category's 3D detection accuracy, denoted by AP3D 𝑅40 , serves as the benchmark for comparison. The results are presented in Table 1.\nOur method demonstrates significant performance improvements across different weather conditions. Under heavy rain conditions, our approach achieves gains of 1.66%, 0.98%, and 0.87% on the easy, moderate, and hard settings, respectively. Similarly, under thick fog conditions, our method obtains 1.66%, 0.98%, and 0.87% gains for the same settings. Furthermore, when evaluated on the thick fog dataset, our method outperforms GUPNet by 3.33%, 2.04%, and 1.78% in terms of 3D detection under the three settings at a 0.7 IoU threshold. Additionally, our MonoTDP method substantially surpasses DID-M3D and MonoDLE in the low light dataset, with improvements of 0.91% and 3.71% AP3D 𝑅40 under moderate settings. This result serves to validate the effectiveness of our approach.\nThe superior performance of our method can be attributed to the integration of environmental constraints, as well as the innovative twin depth perception module that concurrently predicts scene depth and object depth. By incorporating both local and global features, our method effectively captures the nuances of various weather conditions and enables more accurate depth estimation. Consequently, our method demonstrates exceptional results across a range of rain, fog, and low light conditions, underscoring its robustness and applicability in diverse real-world scenarios. " }, { "figure_ref": [], "heading": "Comparison with Restoration Methods", "publication_ref": [], "table_ref": [], "text": "In this section, we extend our comparison to evaluate the performance of our base 3D detection network combined with various image restoration techniques under different adverse weather conditions. We aim to demonstrate the effectiveness of our adaptive learning strategy against the top-performing defog, rain removal, and low-light enhancement models in thick fog, heavy rain, and low-light environments, respectively. TransWeather is trained in these three weather conditions since it is designed to adapt to various weather scenarios. Other methods are trained under specific environments tailored to their corresponding effects.\nAs shown in the table 2, the performance comparison between our method and several other methods in dense fog, heavy rain, and low light environments is shown. Among them, the performance of MonoTDP is comparable to TransWeather in dense fog, but shows significant improvement under heavy rain conditions. Under low light conditions it has significantly improved by 7.85%, 5.17%, and 3.64% under the three settings of easy, mod, and hard, respectively. In addition, our method is also significantly superior to all other task specific methods. For example, under dense fog conditions, our method improved by 3.13%, 2.14%, and 1.80% compared to MSBDN under three different settings, respectively. There is also a significant improvement compared to GCA. In heavy rain environments, our method has improved by 1.51%, 2.79%, and 1.91% compared to VRGNet in three settings, and has greater improvement compared to RESCAN. Under low light conditions, in addition to TransWeather, MonoTDP also performs better than SCI and IAT. It can be observed easily from the data that our method has achieved the best performance under various conditions, whether in easy, mod, or hard metrics, which is superior to existing methods. It significantly improves the accuracy and robustness of image restoration in harsh environments.\nIn summary, our proposed method not only attains state-of-theart accuracy in harsh environments when compared to the leading 3D object detection techniques, but it also outperforms existing image restoration methods. This result highlights the robustness and adaptability of our approach in diverse and challenging scenarios." }, { "figure_ref": [], "heading": "Ablation Study", "publication_ref": [], "table_ref": [ "tab_4" ], "text": "To investigate how each module in MonoTDP enhances 3D object detection, we randomly selected one seventh of medium rain, heavy rain, dense rain, thin fog, thick fog, dense fog, and low light to obtain a mixed dataset, and then tested each module on this mixed dataset. The results are shown in Table 3. We evaluate the effectiveness of our adaptive learning strategy which is divided into two parts: Weak Constraint Encoder (WCE) and Strong Constraint Decoder (SCD). We conduct experiments to evaluate the impact of each part on the overall performance of our 3D object detection system.\nFirst, we examine the WCE's contribution by comparing settings (a→b) and (f→g). The results demonstrate that the WCE consistently improves the overall performance by 0.34% for (a→b) and 0.52% for (f→g) under moderate settings. This highlights the WCE's ability to effectively and stably enhance the 3D object detection task.\nNext, we assess the SCD's effectiveness through experiments (b→c) and (g→h). The observed performance improvement indicates that the SCD is also a crucial component of our learning strategy. Both parts of the module prove to be indispensable for optimal performance.\nTo further investigate the role of our twin depth perception module, we conduct two sets of control experiments (b→g and c→h). The significant improvement observed in these experiments demonstrates that the combination of object depth (Dobj) and scene depth (Dsce) enhances the model's ability to comprehend instance depth.\nIn particular, the experiments (e→f and g→h) reveal that the scene depth design allows the model to obtain more accurate depth estimates across various environments, thereby improving 3D object detection performance. The experimental results conclusively demonstrate the effectiveness of our proposed method." }, { "figure_ref": [ "fig_4" ], "heading": "Qualitative Results", "publication_ref": [], "table_ref": [], "text": "A close examination of the information presented in Figure 5 reveals the superior performance of our proposed method, MonoTDP, compared to the current state-of-the-art approach, GUPNet, in three distinct environments: low light, rainy, and foggy conditions. In low light scenarios, where the environment is comparatively dark, GUP-Net tends to miss objects, whereas MonoTDP demonstrates higher accuracy in recognizing the majority of images. Under rainy conditions, GUPNet struggles to correctly identify numerous objects due to rain-induced obstructions. In contrast, our method exhibits minimal target misidentification, thereby addressing a significant limitation of GUPNet. In foggy situations characterized by low visibility, GUPNet's recognition results often deviate considerably from the correct outcomes, leading to inaccurate object detection. Conversely, MonoTDP accurately identifies nearly all objects. These observations highlight the considerable advantages of our method over other optimal approaches, emphasizing its resilience against environmental challenges. MonoTDP effectively tackles issues, such as texture loss, rain streak occlusions, and impaired visibility, which are prevalent in real-world 3D object detection tasks." }, { "figure_ref": [], "heading": "CONCLUSION", "publication_ref": [], "table_ref": [], "text": "In this study, we introduce MonoTDP, a monocular 3D object detection model adept at perceiving twin depth and demonstrating exceptional performance in an array of challenging environments, including fog, rain, and low-light conditions. Incorporating an adaptive learning strategy and a twin depth perception module, our model enhances the accuracy of 3D object detection, even under adverse circumstances. MonoTDP effectively utilizes the adaptive learning strategy to regularize the model, enabling it to adapt to inclement weather conditions and perceive features across diverse scenes. Simultaneously, the model estimates both scene depth and object depth, thus rendering the depth prediction process sceneaware. This innovative approach substantially advances the practical applicability of monocular 3D object detection models. Extensive experimental results attest to the superiority of our proposed method over state-of-the-art approaches, both qualitatively and quantitatively, across various adverse environments." } ]
3D object detection plays a crucial role in numerous intelligent vision systems. Detection in the open world inevitably encounters various adverse scenes, such as dense fog, heavy rain, and low light conditions. Although existing efforts primarily focus on diversifying network architecture or training schemes, resulting in significant progress in 3D object detection, most of these learnable modules fail in adverse scenes, thereby hindering detection performance. To address this issue, this paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP, which effectively mitigates the degradation of detection performance in various harsh environments. Specifically, we first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors. Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth, enabling the integration of scene-level features and object-level features. Additionally, we assemble a new adverse 3D object detection dataset encompassing a wide range of challenging scenes, including rainy, foggy, and low light weather conditions, with each type of scene containing 7,481 images. Experimental results demonstrate that our proposed method outperforms current state-of-the-art approaches by an average of 3.12% in terms of 𝐴𝑃 𝑅40 for car category across various adverse environments.
MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes
[ { "figure_caption": "Figure 2 :2Figure 2: The pipeline of MonoTDP. It consists of two core components: Adaptive Learning (Sec. 3.2) regularizes features from Common Feature Extractor to help model perceive clean meta features that are not degraded by adverse factors, and it is only used in training stage. Twin depth perception module (Sec. 3.3) is conducted in 3D object detection under adverse weather conditions to obtain better instance depth by combining scene depth and object depth.", "figure_data": "", "figure_id": "fig_0", "figure_label": "2", "figure_type": "figure" }, { "figure_caption": "Figure 3 :3Figure 3: Images under adverse weather conditions, conclude mod fog, thick fog, dense fog, mod rain, heavy rain, dense rain, low light.", "figure_data": "", "figure_id": "fig_1", "figure_label": "3", "figure_type": "figure" }, { "figure_caption": "Figure 4 :4Figure 4: The structure of SwinBlock. It contains two successive Swin Transformer Blocks (notation presented with Eq.(5)). MSA W and MSA SW are multi-head self attention modules with regular and shifted windowing configurations, respectively. At each stage, we perform a patch merging after SwinBlock except for the last stage.", "figure_data": "", "figure_id": "fig_2", "figure_label": "4", "figure_type": "figure" }, { "figure_caption": "𝐻 8 ×Conference' 17 ,817𝑊 𝑑 𝑖𝑛𝑠 = 𝑑 𝑜𝑏 𝑗 + 𝑑 𝑠𝑐𝑒 and 𝑢 𝑖𝑛𝑠 = √︃ 𝑢 2 𝑜𝑏 𝑗 + 𝑢 2 𝑠𝑐𝑒 . July 2017, Washington, DC, USA xx", "figure_data": "", "figure_id": "fig_3", "figure_label": "817", "figure_type": "figure" }, { "figure_caption": "Figure 5 :5Figure 5: Qualitative results on the validation set of hybrid dataset which contain all types of weather. These results are based on method trained on the train set. Three columns show the 3D target detection results under different scenes. We use green, red, blue boxes to denote ground-truth, our predictions and predictions of GUPNet(One of the most popular 3D Detection Model) respectively. LiDAR signals are only used for visualization.", "figure_data": "", "figure_id": "fig_4", "figure_label": "5", "figure_type": "figure" }, { "figure_caption": "Comparison of the latest 3D object detection methods on the moderate fog, thick fog, moderate rain, heavy rain, dense rain and low light dataset based on AP 3𝐷 of the car category. All methods have been retrained on the respective environmental datasets. Red and Blue correspond to the first and second best results, respectively. Quantitative results substantiate that our method achieves state-of-the-art performance. Hard Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard Easy Mod. Hard SMOKE CVPR20 8.86 5.98 4.53 5.10 3.31 2.28 7.33 5.24 4.03 5.97 3.78 2.77 5.64 3.88 3.21 5.48 4.03 3.49 MonoFLEX CVPR21 19.97 14.11 11.86 18.37 13.28 10.57 17.21 12.94 11.55 16.99 11.83 10.12 15.35 12.14 10.38 10.43 8.32 7.75 MonoDLE CVPR21 14.77 12.15 10.02 17.35 12.89 11.27 15.65 13.34 12.33 15.64 12.63 11.13 14.94 11.20 9.78 14.69 11.99 10.60 GUPNet ICCV21 21.06 15.02 12.34 19.91 14.24 11.57 19.69 14.24 12.36 17.36 12.95 10.76 16.71 12.40 10.64 9.84 6.36 5.09 DID-M3D ECCV22 22.75 15.52 12.61 22.19 15.96 12.86 22.42 15.30 12.43 21.40 14.79 12.05 20.56 14.07 11.88 21.92 14.79 12.10 DEVIANT ECCV22 22.74 15.92 13.16 22.90 16.11 13.25 22.35 15.99 12.45 20.18 13.93 11.96 20.20 13.85 12.26 22.40 15.16 12.33 HomoLoss CVPR22 14.31 12.27 11.12 19.32 13.26 11.51 18.23 13.19 12.56 17.69 13.01 12.23 16.33 13.40 10.76 15.88 13.89 11.42 CubeR-CNN CVPR23 21.11 14.97 12.55 20.81 14.77 12.12 20.37 14.14 12.38 22.36 13.67 11.11 19.17 13.54 10.99 20.11 14.37 11.89", "figure_data": "Methods Easy Mod. MonoTDP Mod. Fog Venue 23.13 16.03 13.19 23.24 16.28 13.35 23.08 16.01 12.98 23.06 15.77 12.92 21.31 15.40 12.52 22.55 15.70 12.80 Thick Fog Mod. Rain Heavy Rain Dense Rain Low Light -Improvement +0.38 +0.11 +0.03 +0.34 +0.17 +0.10 +0.66 +0.02 +0.53 +1.66 +0.98 +0.87 +0.75 +1.33 +0.26 +0.15 +0.54 +0.49", "figure_id": "tab_2", "figure_label": "1", "figure_type": "table" }, { "figure_caption": "Comparison of our proposed method with the combinations of our base 3D detection network and popular enhancement models under various challenging conditions for the car category, evaluated using 𝐴𝑃 𝑅40 at IoU threshold of 0.7. We compared our method under thick fog with dehazing models, under heavy rain with deraining models, and under low light with low-light enhancement models, all combined with our base 3D detection model.", "figure_data": "SceneMethodsVenueCar 3D@IOU=0.7 Easy Mod. HardTransCVPR2222.95 16.03 13.21Thick FogMSBDN GCA DCPDNCVPR20 WACV19 21.21 14.08 12.49 20.11 14.14 11.55 CVPR18 19.97 13.25 11.34Ours-23.13 16.03 12.95TransCVPR2220.29 13.89 11.67Heavy RainRESCAN ECCV18 VRGNet CVPR21 PRENet CVPR1920.06 13.81 10.99 21.55 12.98 11.01 20.11 13.34 10.67Ours-23.06 15.77 12.92TransCVPR2214.710.539.18Low LightSCI IAT SIDCVPR22 BMVC22 CVPR1819.88 14.12 10.68 19.84 13.59 10.94 17.78 12.21 10.32Ours-22.55 15.70 12.82", "figure_id": "tab_3", "figure_label": "2", "figure_type": "table" }, { "figure_caption": "Ablation study for the components of our method. Results are reported on hybrid datasets contain fog, rain, low light conditions of diverse digrees.", "figure_data": "WCE SCD D obj D sceEasy↑3D@IoU=0.7 Mod.↑Hard↑(a)----18.5313.0910.89(b)✓---19.12 ↑0.59 14.43 ↑1.34 11.74 ↑0.85(c)✓✓--20.35 ↑1.82 14.86 ↑1.77 12.11 ↑1.22(e)--✓-20.86 ↑2.33 14.11 ↑1.02 11.86 ↑0.97(f)--✓✓ 21.54 ↑3.01 14.25 ↑1.16 12.01 ↑1.12(g)✓-✓✓ 22.18 ↑3.65 14.77 ↑1.68 12.12 ↑1.23(h)✓✓✓✓ 23.22 ↑4.69 15.55 ↑2.46 12.31 ↑1.42", "figure_id": "tab_4", "figure_label": "3", "figure_type": "table" } ]
Xingyuan Li; Jinyuan Liu; Yixin Lei; Risheng Liu
[ { "authors": "Garrick Brazil; Xiaoming Liu", "journal": "", "ref_id": "b0", "title": "M3d-rpn: Monocular 3d region proposal network for object detection", "year": "2019" }, { "authors": "Bolun Cai; Xiangmin Xu; Kui Jia; Chunmei Qing; Dacheng Tao", "journal": "IEEE Transactions on Image Processing", "ref_id": "b1", "title": "Dehazenet: An end-to-end system for single image haze removal", "year": "2016" }, { "authors": "Chen Chen; Qifeng Chen; Jia Xu; Vladlen Koltun", "journal": "", "ref_id": "b2", "title": "Learning to see in the dark", "year": "2018" }, { "authors": "Xiaozhi Chen; Kaustav Kundu; Ziyu Zhang; Huimin Ma; Sanja Fidler; Raquel Urtasun", "journal": "", "ref_id": "b3", "title": "Monocular 3d object detection for autonomous driving", "year": "2016" }, { "authors": "Kaiwen Duan; Song Bai; Lingxi Xie; Honggang Qi; Qingming Huang; Qi Tian", "journal": "", "ref_id": "b4", "title": "Centernet: Keypoint triplets for object detection", "year": "2019" }, { "authors": "Andreas Geiger; Philip Lenz; Raquel Urtasun", "journal": "", "ref_id": "b5", "title": "Are we ready for autonomous driving? the kitti vision benchmark suite", "year": "2012" }, { "authors": "Kaiming He; Georgia Gkioxari; Piotr Dollár; Ross Girshick", "journal": "", "ref_id": "b6", "title": "Mask r-cnn", "year": "2017" }, { "authors": "Peixuan Li; Huaici Zhao", "journal": "IEEE Robotics and Automation Letters", "ref_id": "b7", "title": "Monocular 3d detection with geometric constraint embedding and semi-supervised training", "year": "2021" }, { "authors": "Peixuan Li; Huaici Zhao; Pengfei Liu; Feidao Cao", "journal": "", "ref_id": "b8", "title": "Rtm3d: Real-time monocular 3d detection from object keypoints for autonomous driving", "year": "2020" }, { "authors": "Ruoteng Li; Loong-Fah Cheong; Robby T Tan", "journal": "", "ref_id": "b9", "title": "Heavy rain image restoration: Integrating physics model and conditional adversarial learning", "year": "2019" }, { "authors": "Runde Li; Jinshan Pan; Zechao Li; Jinhui Tang", "journal": "", "ref_id": "b10", "title": "Single image dehazing via conditional generative adversarial network", "year": "2018" }, { "authors": "Ruoteng Li; Robby T Tan; Loong-Fah Cheong", "journal": "", "ref_id": "b11", "title": "All in one bad weather removal using architectural search", "year": "2020" }, { "authors": "Jingyun Liang; Jiezhang Cao; Guolei Sun; Kai Zhang; Luc Van Gool; Radu Timofte", "journal": "", "ref_id": "b12", "title": "Swinir: Image restoration using swin transformer", "year": "2021" }, { "authors": "Jinyuan Liu; Xin Fan; Zhanbo Huang; Guanyao Wu; Risheng Liu; Wei Zhong; Zhongxuan Luo", "journal": "", "ref_id": "b13", "title": "Target-aware dual adversarial learning and a multiscenario multi-modality benchmark to fuse infrared and visible for object detection", "year": "2022" }, { "authors": "Jinyuan Liu; Xin Fan; Ji Jiang; Risheng Liu; Zhongxuan Luo", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b14", "title": "Learning a deep multi-scale feature ensemble and an edge-attention guidance for image fusion", "year": "2021" }, { "authors": "Risheng Liu; Xin Fan; Minjun Hou; Zhiying Jiang; Zhongxuan Luo; Lei Zhang", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b15", "title": "Learning aggregated transmission propagation networks for haze removal and beyond", "year": "2018" }, { "authors": "Risheng Liu; Minjun Hou; Jinyuan Liu; Xin Fan; Zhongxuan Luo", "journal": "IEEE", "ref_id": "b16", "title": "Compounded layer-prior unrolling: A unified transmission-based image enhancement framework", "year": "2019" }, { "authors": "Risheng Liu; Zhiying Jiang; Xin Fan; Zhongxuan Luo", "journal": "IEEE transactions on neural networks and learning systems", "ref_id": "b17", "title": "Knowledgedriven deep unrolling for robust image layer separation", "year": "2019" }, { "authors": "Risheng Liu; Zhiying Jiang; Long Ma; Xin Fan; Haojie Li; Zhongxuan Luo", "journal": "IEEE", "ref_id": "b18", "title": "Deep layer prior optimization for single image rain streaks removal", "year": "2018" }, { "authors": "Risheng Liu; Shiqi Li; Jinyuan Liu; Long Ma; Xin Fan; Zhongxuan Luo", "journal": "IEEE Transactions on Circuits and Systems for Video Technology", "ref_id": "b19", "title": "Learning hadamard-product-propagation for image dehazing and beyond", "year": "2020" }, { "authors": "Risheng Liu; Jinyuan Liu; Zhiying Jiang; Xin Fan; Zhongxuan Luo", "journal": "IEEE Transactions on Image Processing", "ref_id": "b20", "title": "A bilevel integrated model with data-driven layer ensemble for multi-modality image fusion", "year": "2020" }, { "authors": "Zechen Liu; Zizhang Wu; Roland Tóth", "journal": "", "ref_id": "b21", "title": "Smoke: Single-stage monocular 3d object detection via keypoint estimation", "year": "2020" }, { "authors": "Kin Gwn; Lore ; Adedotun Akintayo; Soumik Sarkar", "journal": "Pattern Recognition", "ref_id": "b22", "title": "LLNet: A deep autoencoder approach to natural low-light image enhancement", "year": "2017" }, { "authors": "Yan Lu; Xinzhu Ma; Lei Yang; Tianzhu Zhang; Yating Liu; Qi Chu; Junjie Yan; Wanli Ouyang", "journal": "", "ref_id": "b23", "title": "Geometry uncertainty projection network for monocular 3d object detection", "year": "2021" }, { "authors": "Feifan Lv; Feng Lu; Jianhua Wu; Chongsoon Lim", "journal": "BMVC", "ref_id": "b24", "title": "MBLLEN: Low-Light Image/Video Enhancement Using CNNs", "year": "2018" }, { "authors": "Xinzhu Ma; Yinmin Zhang; Dan Xu; Dongzhan Zhou; Shuai Yi; Haojie Li; Wanli Ouyang", "journal": "", "ref_id": "b25", "title": "Delving into localization errors for monocular 3d object detection", "year": "2021" }, { "authors": "Fabian Manhardt; Wadim Kehl; Adrien Gaidon", "journal": "", "ref_id": "b26", "title": "Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape", "year": "2019" }, { "authors": "Arsalan Mousavian; Dragomir Anguelov; John Flynn; Jana Kosecka", "journal": "", "ref_id": "b27", "title": "3d bounding box estimation using deep learning and geometry", "year": "2017" }, { "authors": "Liang Peng; Xiaopei Wu; Zheng Yang; Haifeng Liu; Deng Cai", "journal": "", "ref_id": "b28", "title": "DID-M3D: Decoupling Instance Depth for Monocular 3D Object Detection", "year": "2022" }, { "authors": "Yuhui Quan; Shijie Deng; Yixin Chen; Hui Ji", "journal": "", "ref_id": "b29", "title": "Deep learning for seeing through window with raindrops", "year": "2019" }, { "authors": "Colin Raffel; Noam Shazeer; Adam Roberts; Katherine Lee; Sharan Narang; Michael Matena; Yanqi Zhou; Wei Li; Peter J Liu", "journal": "J. Mach. Learn. Res", "ref_id": "b30", "title": "Exploring the limits of transfer learning with a unified text-to-text transformer", "year": "2020" }, { "authors": "Sifei Wenqi Ren; Lin Liu; Qianqian Ma; Xiangyu Xu; Xiaochun Xu; Junping Cao; Ming-Hsuan Du; Yang", "journal": "IEEE Transactions on Image Processing", "ref_id": "b31", "title": "Low-light image enhancement via a deep hybrid network", "year": "2019" }, { "authors": "Liang Shen; Zihan Yue; Fan Feng; Quan Chen; Shihao Liu; Jie Ma", "journal": "", "ref_id": "b32", "title": "Msr-net: Low-light image enhancement using deep convolutional network", "year": "2017" }, { "authors": "Hualian Sheng; Sijia Cai; Na Zhao; Bing Deng; Jianqiang Huang; Xian-Sheng Hua; Min-Jian Zhao; Gim Hee; Lee ", "journal": "Springer", "ref_id": "b33", "title": "Rethinking IoU-based Optimization for Single-stage 3D Object Detection", "year": "2022-10-23" }, { "authors": "Li Tao; Chuang Zhu; Guoqing Xiang; Yuan Li; Huizhu Jia; Xiaodong Xie", "journal": "", "ref_id": "b34", "title": "LLCNN: A convolutional neural network for low-light image enhancement", "year": "2017" }, { "authors": "Chunhua Zhi Tian; Hao Shen; Tong Chen; He", "journal": "", "ref_id": "b35", "title": "Fcos: Fully convolutional one-stage object detection", "year": "2019" }, { "authors": "Hong Wang; Qi Xie; Qian Zhao; Deyu Meng", "journal": "", "ref_id": "b36", "title": "A model-driven deep neural network for single image rain removal", "year": "2020" }, { "authors": "Tianyu Wang; Xin Yang; Ke Xu; Shaozhe Chen; Qiang Zhang; Rynson Wh Lau", "journal": "", "ref_id": "b37", "title": "Spatial attentive single-image deraining with a high quality real rain dataset", "year": "2019" }, { "authors": "Tai Wang; Xinge Zhu; Jiangmiao Pang; Dahua Lin", "journal": "", "ref_id": "b38", "title": "Fcos3d: Fully convolutional one-stage monocular 3d object detection", "year": "2021" }, { "authors": "Yan Wang; Wei-Lun Chao; Divyansh Garg; Bharath Hariharan; Mark Campbell; Kilian Q Weinberger", "journal": "", "ref_id": "b39", "title": "Pseudo-lidar from visual depth estimation: Bridging the gap in 3d object detection for autonomous driving", "year": "2019" }, { "authors": "Qiangeng Xu; Yin Zhou; Weiyue Wang; Charles R Qi; Dragomir Anguelov", "journal": "", "ref_id": "b40", "title": "Spg: Unsupervised domain adaptation for 3d object detection via semantic point generation", "year": "2021" }, { "authors": "Honghui Yang; Zili Liu; Xiaopei Wu; Wenxiao Wang; Wei Qian; Xiaofei He; Deng Cai", "journal": "Springer", "ref_id": "b41", "title": "Graph R-CNN: Towards Accurate 3D Object Detection with Semantic-Decorated Local Graph", "year": "2022-10-23" }, { "authors": "Wenhan Yang; Robby T Tan; Jiashi Feng; Zongming Guo; Shuicheng Yan; Jiaying Liu", "journal": "IEEE transactions on pattern analysis and machine intelligence", "ref_id": "b42", "title": "Joint rain detection and removal from a single image with contextualized deep networks", "year": "2019" }, { "authors": "Aditya Syed Waqas Zamir; Salman Arora; Munawar Khan; Fahad Hayat; Ming-Hsuan Shahbaz Khan; Ling Yang; Shao", "journal": "", "ref_id": "b43", "title": "Learning enriched features for real image restoration and enhancement", "year": "2020" }, { "authors": "He Zhang; M Vishal; Patel", "journal": "", "ref_id": "b44", "title": "Densely connected pyramid dehazing network", "year": "2018" }, { "authors": "He Zhang; M Vishal; Patel", "journal": "", "ref_id": "b45", "title": "Density-aware single image de-raining using a multi-stream dense network", "year": "2018" }, { "authors": "He Zhang; Vishwanath Sindagi; M Vishal; Patel", "journal": "IEEE transactions on circuits and systems for video technology", "ref_id": "b46", "title": "Image de-raining using a conditional generative adversarial network", "year": "2019" }, { "authors": "Jingang Zhang; Wenqi Ren; Shengdong Zhang; He Zhang; Yunfeng Nie; Zhe Xue; Xiaochun Cao", "journal": "IEEE Transactions on Cybernetics", "ref_id": "b47", "title": "Hierarchical density-aware dehazing network", "year": "2021" }, { "authors": "Yanfu Zhang; Li Ding; Gaurav Sharma", "journal": "", "ref_id": "b48", "title": "Hazerd: an outdoor scene dataset and benchmark for single image dehazing", "year": "2017" }, { "authors": "Yunpeng Zhang; Jiwen Lu; Jie Zhou", "journal": "", "ref_id": "b49", "title": "Objects are different: Flexible monocular 3d object detection", "year": "2021" }, { "authors": "Yonghua Zhang; Jiawan Zhang; Xiaojie Guo", "journal": "", "ref_id": "b50", "title": "Kindling the darkness: A practical low-light image enhancer", "year": "2019" }, { "authors": "Yin Zhou; Oncel Tuzel", "journal": "", "ref_id": "b51", "title": "Voxelnet: End-to-end learning for point cloud based 3d object detection", "year": "2018" }, { "authors": "Xizhou Zhu; Weijie Su; Lewei Lu; Bin Li; Xiaogang Wang; Jifeng Dai", "journal": "", "ref_id": "b52", "title": "Deformable detr: Deformable transformers for end-to-end object detection", "year": "2020" } ]
[ { "formula_coordinates": [ 4, 129.9, 208.61, 164.69, 8.97 ], "formula_id": "formula_0", "formula_text": "I = B ⊙ T + A ⊙ (1 -T),(1)" }, { "formula_coordinates": [ 4, 156.28, 265.49, 138.3, 11.09 ], "formula_id": "formula_1", "formula_text": "T = 𝑒 -𝛽d ,(2)" }, { "formula_coordinates": [ 4, 110.38, 346.95, 184.21, 24.73 ], "formula_id": "formula_2", "formula_text": "I = T ⊙ B + 𝑛 ∑︁ 𝑖 R 𝑖 + (1 -T) ⊙ 𝐴,(3)" }, { "formula_coordinates": [ 4, 152.45, 453.88, 142.14, 8.97 ], "formula_id": "formula_3", "formula_text": "I = F(B, 𝛾),(4)" }, { "formula_coordinates": [ 4, 368.85, 585.27, 189.89, 94.15 ], "formula_id": "formula_4", "formula_text": "ẑ𝑙 = ℎ ∑︁ 𝑖=1 w 𝑖 MSA W LN 𝒛 𝑙 -1 + 𝒛 𝑙 -1 , 𝒛 𝑙 = mlp LN ẑ𝑙 + ẑ𝑙 , ẑ𝑙+1 = ℎ ∑︁ 𝑖=1 w 𝑖 MSA SW LN 𝒛 𝑙 + 𝒛 𝑙 , 𝒛 𝑙+1 = mlp LN ẑ𝑙+1 + ẑ𝑙+1 ,(5)" }, { "formula_coordinates": [ 5, 85.47, 417.9, 209.11, 16.69 ], "formula_id": "formula_5", "formula_text": "Attention(𝑄, 𝐾, 𝑉 ) = SoftMax 𝑄𝐾 𝑇 / √ 𝑑 + 𝐵 𝑉 ,(6)" }, { "formula_coordinates": [ 6, 104.74, 198.11, 189.84, 24.1 ], "formula_id": "formula_6", "formula_text": "L 𝑠𝑚𝑜𝑜𝑡ℎ 𝐿 1 = 0.5E 2 if |E| < 1 |E| -0.5 otherwise ,(7)" }, { "formula_coordinates": [ 6, 100.09, 473.07, 194.5, 27.28 ], "formula_id": "formula_7", "formula_text": "L 𝐷 𝑖𝑛𝑠 = √ 2 𝑢 𝑣𝑖𝑛𝑠 𝑑 𝑖𝑛𝑠 -𝑑 𝑔𝑡 𝑖𝑛𝑠 + log (𝑢 𝑖𝑛𝑠 ) ,(8)" }, { "formula_coordinates": [ 6, 76.28, 541.9, 215.14, 10.9 ], "formula_id": "formula_8", "formula_text": "L = L 𝐻 + L 𝑂 2𝑑 + L 𝑆 2𝑑 + L 𝑆 3𝑑 + L Θ + L 𝑂 3𝑑 + L 𝐷 𝑖𝑛𝑠 . (9" }, { "formula_coordinates": [ 6, 291.41, 541.9, 3.17, 8.97 ], "formula_id": "formula_9", "formula_text": ")" } ]